Will There Be a US Federal Privacy Standard?

There’s a good commentary by Daniel Barber, published today, about the various data privacy bills that are being considered by Congress. Here’s a synopsis:

  • Consumer Online Privacy Rights Act (COPRA). A Senate bill introduced in November 2019, COPRA is a consumer-friendly act focused on data privacy that would impose large fines on violators and create a new federal bureaucracy, the Bureau for Privacy.
  • Privacy Bill of Rights Act. A Senate bill from April 2019 that is quite similar to the California Consumer Privacy Act (CCPA).
  • Consumer Data Protection Act. A Senate bill from November 2018 that closely matches the European Union’s General Data Protection Regulation (GDPR). It would target companies with at least $50 million in annual revenue that manage more than one million records and, like the most aggressive penalty under GDPR, would fine violators up to four percent of annual revenue.
  • Online Privacy Act. This act would enable consumers to access their data and have it deleted, much like the GDPR, and would impose regulations on the algorithmic processes that many companies use to target prospective customers.

The two big questions surrounding a GDPR- or CCPA-like bill at the federal level are:

  • Is it a good idea to preempt state data privacy legislation?
  • Should stricter state regulations on data privacy supersede weaker federal provisions?

Mr. Barber’s take on the first question is clear: “While I generally favor the states’ role in being the so-called laboratories of democracy, only a uniform federal piece of legislation will solve the problem and create order.” I agree with him to an extent, but federal legislation tends to get watered down in committee. That, combined with an administration that is not favorable to enacting new regulations, could result in a weakened version of these bills that would do relatively little to address problems with data privacy.

With regard to the second question, I believe that states should be permitted to enact stricter legislation if their citizens and their elected representatives choose to do so. Yes, it makes things more onerous for business, but it gives states the freedom to implement rules that are a better fit for their citizens (not that that always happens, of course).

Perhaps the best course of action is for companies to adopt the CCPA as a de facto standard for all of their US domestic operations. Microsoft and ISP Starry have already done so, pledging to honor the provisions of the CCPA in all 50 states. In the absence of federal regulation to protect data privacy, it will be interesting to see if consumer demand for privacy is sufficient to motivate other companies to follow the example of Microsoft and Starry.

The Mixed Bag Influence of Twitter

This is not a tirade against Twitter. Twitter is a thing. Like cars, guns, printing presses, the Internet or any other thing, it’s inanimate and, by definition, cannot be either good or bad. Only the use to which it is put can be good or bad. So, when you read “Twitter” in the following, read it as “the use of Twitter”.

On the positive side, Twitter is a good thing because it enables distribution of news, ideas and other information to a wide audience, and it enables learning from bright people in a way that probably would not be possible otherwise. In my role as an industry analyst, I find Twitter incredibly useful for discovering ideas, learning about news and following smart people who would be more difficult to find through other channels, and for sharing news and other information with an audience that would be almost impossible to reach through other media.

But there are three fundamentally negative aspects to Twitter that largely negate much of the positive that it brings:

  • Almost every problem is multi-dimensional. Whether it’s homelessness, Hong Kong, the national debt, armed conflict, data breaches or any other issue, it’s rarely one thing that can be identified as the cause. Instead, problems normally are the result of many causes, each of which contributes to the problem in varying degrees. However, when someone takes to Twitter to discuss a problem or convey information, they’re limited to a maximum of 280 characters and so can rarely discuss more than one thing. If we assume that the average word is just five characters plus the following space, that’s a maximum of about 47 words to discuss the issue – and very few issues can be discussed with any degree of depth in 47 words. The result is that discussion of important issues gets reduced to sound bites, not substantive discussion or analysis. That fits nicely with the decreasing length of the typical attention span, but it makes for poor decision making.
  • Like any form of electronic communication, the remote nature of correspondence on Twitter removes most of the consequences associated with rude behavior. Hurl an insult in person and you run the risk of getting punched in the nose – do so on Twitter and there will rarely be a consequence other than receiving an insult in return. In short, the social consequences of rudeness all but disappear in the Twittersphere.
  • Finally, and perhaps most dangerous, is the strong tendency for decision makers to assume that the most vocal people on Twitter represent far more people of the same mindset than they actually do. For example, in June 2018, Twitter’s CEO Jack Dorsey ordered food from Chick-fil-A®. He was called out for doing so by a number of people on Twitter and apologized for his behavior. An article in USA Today cited three tweets as part of the backlash – these tweets had a combined 318 “likes” in the nearly 19 months since they were published. By contrast, I estimate that Chick-fil-A serves approximately 4,600 customers per minute. Those who “like” a tweet – or care about the issue in any way – are rarely more than a tiny fraction of those who could not care less about the issue or who disagree with the critics.

The last point is the most dangerous aspect of Twitter because it has enabled the rapid expansion of bullying. Bullying requires a) a bully who believes they can harm their victim (of which there is no shortage on social media platforms, including Twitter) and b) a victim who considers themselves vulnerable to harm. Because it’s easy for tweeters to appear to represent far more people than they actually do, decision makers often consider themselves vulnerable. In reality, only 22 percent of Americans use Twitter and only 10 percent of its users account for 80 percent of tweets. Tweeters really don’t represent much of the population, but decision makers – including those who apologize for having lunch at the “wrong” place – seem to think they do, and so fall victim to bullying tactics on a regular basis.

In short, Twitter is a fantastic platform for sharing information and learning, but it has serious downsides that negate much, if not all, of its positives.

 

The Value of Threat Intelligence

Cyber security is an ongoing battle between sophisticated, well-funded bad actors and those who must defend corporate networks against their attacks. The bad news is that the defenders are typically not winning. A recent Osterman Research survey found that while most organizations self-report that they are doing “well” or “very well” against ransomware, other types of malware infections and account takeovers – threats on which significant emphasis has been placed – they are not doing well against most other types of threats. These include protecting data sought by attackers, preventing users from reaching malicious sites after they respond to a phishing message, eliminating business email compromise (BEC) attacks, eliminating phishing attempts before they reach end users, and preventing infections on mobile devices.

The missing component for most organizations is the addition of robust and actionable threat intelligence to their existing security defenses. Threat intelligence can be segmented into four subcategories:

  1. Strategic (non-technical information about an organization’s threat landscape)
  2. Tactical (details of threat actors’ tactics, techniques and procedures)
  3. Operational (actionable information about specific, incoming attacks)
  4. Technical (technical threat indicators, e.g., malware hashes)
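
To make the fourth category a bit more concrete, here is a minimal sketch in Python of how technical indicators such as malware hashes might be matched against files on a host. The feed file name (known_bad_hashes.txt) and the scanned directory are hypothetical placeholders, and real threat intelligence platforms expose much richer APIs, but the core matching logic looks something like this:

```python
# Minimal sketch: match files against a feed of known-bad SHA-256 hashes.
# The feed file name and directory below are hypothetical examples.
import hashlib
from pathlib import Path

def load_indicators(feed_path: str) -> set[str]:
    """Load one SHA-256 hash per line from a (hypothetical) threat feed export."""
    with open(feed_path) as feed:
        return {line.strip().lower() for line in feed if line.strip()}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 hash of a file in chunks to keep memory use low."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(directory: str, indicators: set[str]) -> list[Path]:
    """Return files whose hashes appear in the indicator set."""
    matches = []
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in indicators:
            matches.append(path)
    return matches

if __name__ == "__main__":
    bad_hashes = load_indicators("known_bad_hashes.txt")  # hypothetical feed export
    for hit in scan_directory("/tmp/quarantine", bad_hashes):  # hypothetical directory
        print(f"Indicator match: {hit}")
```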

The use of good threat intelligence can enable security analysts, threat researchers and others to gain the upper hand in dealing with cyber criminals by giving them the information they need to better understand current and past attacks, and it can give them the tools they need to predict and thwart future attacks. Moreover, good threat intelligence can bolster existing security defenses like SIEMs and firewalls and make them more effective against attacks. Threat intelligence plays a key role in proactive defense to ensure that all security programs are relevant to the fast-evolving threat landscape. This is particularly valuable in security awareness training to ensure users are familiar with known threats.

Existing security defenses provide some measure of protection against increasingly sophisticated threats, but the enormous number of data breaches and related problems experienced by many organizations reveals that current security practices are not adequate. Good threat intelligence capabilities can provide a great deal of information about the domains and IP addresses that are attempting to gain access to a network, and they can enable threat researchers to better understand the source of current and past attacks and deal more effectively with future attacks.

We have just published a white paper on threat intelligence that you can download here.

 

Part of Your Security Posture is Making Sure Your Managers Aren’t Jerks

According to the Ponemon Institute’s 2018 Cost of Insider Threats: Global report, of the 3,269 insider incidents that Ponemon investigated, 23 percent were caused by “criminal insiders” (as opposed to careless/negligent employees or contractors, or credential thieves). These malicious insiders can wreak all sorts of havoc, including theft of customer records, trade secrets or competitive information; and they can create enormous liabilities for their employer in the wake of their departure, such as triggering regulatory audits or fines for violating customer privacy.

So, why do employees become malicious and what can be done about it? Reviewing advice from a variety of sources reveals that most of that advice focuses on checking employees: check their background before they’re hired, monitor their behavior for signs that they might become malicious, and so forth. However, Osterman Research believes that companies should also focus heavily on their managers and monitor their behavior. For example, do managers in your company berate employees in front of their peers? Do they give them poor performance evaluations that are not justified? Do they demonstrate that they have “favorites” among their subordinates? Do they enforce company policies differently for some employees than they do for others? Do they insult their employees? In short, how well do your managers treat those that they manage?

Understanding management behavior is key. A study from several years ago by the law firm Drinker Biddle & Reath found that employees who are treated poorly by their managers are more likely to commit fraud, intentionally breach data, and otherwise violate corporate policies.

What should employers do? There are several things:

  • Monitor managers’ email and collaboration accounts to uncover instances of morale-destroying behavior.
  • Monitor their personal social media accounts to uncover posts that undermine employees, the company or others.
  • Conduct anonymous employee surveys to get some honest opinions about how managers are treating their subordinates.
  • Monitor employee accounts for signs that their managers are treating them badly.
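
As a rough illustration of the first bullet above, here is a minimal sketch, assuming a simple keyword approach, of how messages from a manager’s corporate email or collaboration account might be flagged for human review. The phrase list and message structure are hypothetical placeholders; a real deployment would use the supervision features of an archiving or monitoring platform and far more sophisticated content analysis.

```python
# Minimal sketch: flag messages in a manager's outbound mail for phrases that
# suggest demeaning treatment of subordinates. The phrase list and the message
# structure below are hypothetical placeholders, not a vetted lexicon.
from dataclasses import dataclass

FLAGGED_PHRASES = [
    "useless", "incompetent", "waste of space", "stupid question",
    "don't bother coming back",
]

@dataclass
class Message:
    sender: str
    recipient: str
    body: str

def flag_messages(messages: list[Message]) -> list[tuple[Message, list[str]]]:
    """Return (message, matched_phrases) pairs for human review, not automatic action."""
    flagged = []
    for msg in messages:
        body = msg.body.lower()
        hits = [phrase for phrase in FLAGGED_PHRASES if phrase in body]
        if hits:
            flagged.append((msg, hits))
    return flagged

# Example usage with fabricated sample data:
sample = [Message("manager@example.com", "analyst@example.com",
                  "That was a stupid question. Don't ask again.")]
for msg, hits in flag_messages(sample):
    print(f"Review {msg.sender} -> {msg.recipient}: matched {hits}")
```

The point of the sketch is the workflow – surface candidates for a human to review – not the crude keyword list.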

Of course, the goal is not to conduct a witch hunt or to undermine the morale of corporate managers. But bad managers create bad employees, and that significantly increases a company’s risk profile.

How Do You Decide on a Cybersecurity Vendor?

Kevin Simzer, Chief Operating Officer at Trend Micro, wrote an interesting blog post entitled My Takeaways from Black Hat ’19. Among the good points he makes is this one:

“With some ~3,000 vendors, the [cybersecurity] industry is making it so hard for decision makers to keep a clear view of the problem they are out to solve.”

If anything, that’s an understatement. At a show like Black Hat, RSA or InfoSec, for example, no more than about 20 percent of cybersecurity vendors exhibit, which means the other 80 percent of available solutions just aren’t there for attendees to evaluate. And, at a show like RSA (which had 624 vendors exhibit in San Francisco earlier this year), spending just five minutes at each booth to learn what was on offer would mean you’d spend 52 hours on the show floor — and the expo isn’t open anywhere near that long.

So, as a security professional, what do you do? You can learn as much about security solutions as you can through conferences, vendor briefings, webinars, analyst reports and the like. But even then, you’ll just be scratching the surface of what’s available. Another response is to consolidate on a much smaller number of vendors to avoid the problems associated with evaluating large numbers of solutions and figuring out how to integrate and manage them. For example, at one of the briefings I had at Black Hat, a leading vendor told me that one of their clients is attempting to consolidate their current crop of 40 security vendors down to just two. That carries with it its own set of difficulties, since a consolidation project like this — and finding just the right two vendors — could be tougher than having too many.

Compounding the problem is that many security vendors offer somewhat contradictory messages based on different philosophical approaches to security.

So, as a security professional, what do you do? I’d like to hear how you approach the problem for your organization. Please email me at michael@ostermanresearch.com, or text or call me at +1 206 683 5683.

Are You Paying Attention to SOT and HOT?

Everyone in the cybersecurity space is very familiar with Information Technology (IT), but far fewer are as familiar with Operational Technology (OT) – software and hardware that focuses on control and management of physical devices like process controllers, lighting, access control systems, HVAC systems and the like.

However, cybersecurity professionals should familiarize themselves with OT because it is having an increasingly serious impact on their IT solutions and on their corporate data. Here are two of the several aspects of OT to consider:

Shadow OT (SOT)

Most of us are familiar with “Shadow IT” – individual users or departments employing their own mobile devices, mobile applications, cloud apps, laptops and other personally managed solutions to access corporate resources like email and databases. This phenomenon/scourge/blessing/reality has been with us for more than a decade and is generally well accepted by the IT community. But relatively new on the scene is “Shadow OT” – the use of Internet of Things (IoT) solutions in the workplace. For example, some businesses will employ consumer-grade solutions like routers, security cameras and lights in a work environment, introducing a number of vulnerabilities that are more common in consumer-focused IoT solutions than they are in industrial-grade solutions. Because consumer-grade IoT products are built by manufacturers under enormous price pressure, sometimes using temporarily contracted teams, security is rarely considered in the design process and the ability to upgrade and patch these devices is often absent.

Because consumer-focused IoT solutions often have vulnerabilities, they can create enormous security holes when used in the workplace. For example, as discussed at Trend Micro’s Directions ’19 conference earlier this week in a session hosted by Bill Malik (@WilliamMalikTM), a New Jersey hospital installed Bluetooth-enabled monitoring pads in its 2,000 beds to detect patient movement and dampness that would signal a patient needing a nurse’s attention. Doing so makes sense – technology like this frees nurses from going room to room to check on patients who need no help, allowing them to spend more time on other, more critical tasks. And the hospital was able to implement the solution for about $120,000 instead of the $16 million that would have been required to use FDA-approved beds offering the same functionality. But these consumer-oriented devices very likely have major security vulnerabilities that could allow an attacker to access critical medical systems like insulin pumps and patient monitors, not to mention the hospital’s patient records, which are valuable to bad actors.

Home OT (HOT)

Another important issue to consider is the use of OT in the home. Many employees work from home either occasionally or full time, and they often do so in an environment populated by Internet-connected thermostats, baby monitors, game systems, voice-enabled home automation systems, security cameras, lights, alarm systems, wearables, refrigerators and the like. Here again, these solutions typically have numerous security vulnerabilities, and they connect to the home Wi-Fi network – the same one employees use to connect their laptop and desktop computers to enterprise email and other corporate data sources. And, because all of these devices in the home connect through the same gateway, a bad actor’s access to one device exposes everything else on the network – including corporate devices – to unauthorized access and control.

The solutions to these issues won’t be easy. It’s tough to convince decision makers, as in the case of the hospital noted above, to spend 100+ times more on secure technology when they barely have the budget for what they’re deploying now. And it’s virtually impossible to require employees to disconnect the IoT devices in their homes while they’re working there. However, there are some things that can be done, such as using firewalls, monitoring solutions, VPNs and the like to make things more secure in the short term. Longer-term security will require a change in design focus, as well as user education focused on being careful about using an ever-expanding array of OT devices, among other things.

A Shift from Public to Private Clouds?

At Dell Technologies World, Jeff Clarke made the point that 40 percent of workloads in the cloud today will migrate back to on-premises, private clouds in the future. On a related note, Pat Gelsinger made an interesting point in the Tuesday keynote that hybrid is not the future but the present, for three simple reasons:

  1. The law of physics: if you need 50-millisecond latency, you can’t afford a public cloud experience that delivers 250-millisecond latency.
  2. The law of economics: hybrid cloud will often be cheaper than public cloud.
  3. The law of the land: compliance regulations will dictate that at least some data and infrastructure must remain on-premises.

They make a good point. Lots of companies went to the public cloud because it was easier, not because it was cheaper. Moving workloads to the public cloud is easier than evaluating, funding, deploying, configuring and maintaining the on-premises infrastructure needed to support those workloads. That’s especially true in organizations that have a difficult time finding and/or affording the IT, security and other staff members who need to be involved in on-premises deployments.

In the short run, the public cloud is much cheaper than on-premises solutions and it can be cheaper in many cases over the long run, as well. Plus, if you need tremendous flexibility and need to spin up and take down capacity quickly, the public cloud is a great option. But here are some things to consider when using the public cloud:

  • While in the short run the public cloud is cheaper, it might not be in the long run. Good cost modeling is essential as part of the decision-making process.
  • If you’re using public cloud applications (e.g., Office 365) you can’t avoid upgrades. In the days when Exchange Server was the norm for business-grade email, many organizations skipped an upgrade because of the difficulty and cost associated with doing so. That doesn’t happen with public cloud applications.
  • Many public clouds offer great performance, but the laws of physics still apply. Connecting to a cloud 500 feet away from your office will (almost) always be faster than one 500 miles away.
  • Most leading public cloud providers do a good job at protecting data. And many of the biggest data breaches over the past several years have been from on-premises infrastructure. However, there is still something to be said for having your critical data assets, backups, etc. held on your own premises.
  • Bandwidth considerations are important today and will be more so tomorrow, and so should always factor into the decision about where data and solutions will reside.
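
To illustrate the cost-modeling point in the first bullet, here is a minimal sketch in Python. Every dollar figure in it is a made-up placeholder rather than a benchmark; the point is the shape of the comparison – steady monthly fees versus up-front capital plus lower ongoing costs and periodic hardware refreshes – which shows where the lines cross over a multi-year horizon.

```python
# Minimal sketch of a public-cloud vs. on-premises cost comparison.
# Every number below is a hypothetical placeholder; substitute real quotes,
# staffing costs and refresh cycles from your own environment.

def public_cloud_cost(months: int, monthly_fee: float) -> float:
    """Cumulative cost of a pay-as-you-go public cloud service."""
    return months * monthly_fee

def on_prem_cost(months: int, capex: float, monthly_opex: float,
                 refresh_every_months: int = 48) -> float:
    """Cumulative cost of on-premises gear: up-front purchase, ongoing
    power/space/staff, and a hardware refresh every few years."""
    refreshes = months // refresh_every_months
    return capex * (1 + refreshes) + months * monthly_opex

if __name__ == "__main__":
    for months in (12, 24, 36, 48, 60):
        cloud = public_cloud_cost(months, monthly_fee=4_000)
        onprem = on_prem_cost(months, capex=60_000, monthly_opex=1_500)
        cheaper = "cloud" if cloud < onprem else "on-prem"
        print(f"{months:>2} months: cloud ${cloud:>9,.0f} vs on-prem ${onprem:>9,.0f} -> {cheaper}")
```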

None of this means that the public cloud should go away or is going away. It plays an increasingly essential role for most organizations and will continue to be important moving forward, not least because of its tremendous flexibility across a wide range of use cases. But consider everything related to the use of public clouds versus private clouds, not just the simplicity of deployment or the initial cost.

Archiving as a Customer Service Tool

We live in a suburb of Seattle and, like most of us who live in Western Washington, we have lots of trees in our neighborhood. One of the consequences of our winter storms is that our trees lose a number of limbs. To get rid of the tree debris each winter, about 16 years ago we and our neighbors purchased a gas-powered chipper from a company in northwestern Vermont called Country Home Products.

A pulley on the chipper shattered and I needed to order a new one. I tried to purchase a replacement part locally, but was told to contact Country Home Products directly, which I did. I didn’t remember the model number of the chipper and I didn’t have a part number for the broken pulley. However, I told the rep our address and that the broken pulley “was the larger one on the right as you face the housing.” He quickly brought up our purchase record from their database, knew the exact model of chipper we had purchased, and knew exactly what part we needed. The part was shipped and it was the right one.

We hear lots about archiving for purposes of regulatory compliance, litigation support, eDiscovery and the like — mostly defensive reasons just in case we need old data to satisfy a regulatory audit or address a legal action. But archiving can also be used as a customer service tool. In my case, a vendor’s customer service rep was able to immediately access my records from 16 years earlier and he knew more about my purchase and the specific replacement part I needed than I did.

That’s the kind of service that satisfies customers and builds brand loyalty — enabled because someone opted to keep their customer records in an easily accessible archive.

The Demise of the A380

The Airbus A380 is an amazing airplane and an engineering marvel – it’s the largest commercial aircraft currently flying, able to carry up to 868 passengers in a one-class configuration (although the typical three-class configuration carries 544 passengers). The plane is quiet, it’s comfortable and passengers like it. It can reduce airport congestion, since one A380 with 544 passengers will require less airport footprint and fewer resources than the three A320s that would carry the same number of passengers.

And yet, Airbus announced this week that it will cease production of its flagship A380 in 2021, just 14 years after its first commercial flight in 2007. Contrast this with the Boeing 747, which flew its first commercial flight in January 1970 and is still in production (albeit now only as a freighter and as two new Special Air Mission/Air Force One aircraft to be delivered in 2024), giving it a production life of at least 54 years.

So, why the demise of the A380? There are a number of reasons, including the logistical difficulties associated with producing the aircraft’s components in four countries across Europe and transporting them for final assembly in Toulouse, France; the high cost of the aircraft (~$445 million); the limited number of airlines that have purchased it (only 16 have ordered, and only 13 fly); the high cost of modifying airport terminals to accommodate it; and the introduction of highly fuel-efficient aircraft like the Boeing 787 and Airbus A350.

The A380 was designed to accommodate the hub-and-spoke model of air travel: fly large numbers of passengers to a central hub like London or Dubai, and then put those passengers on several smaller planes to their final destinations. In contrast, aircraft like the 787 and A350 were designed more for point-to-point flights, making routes like Minneapolis to Lisbon financially viable. To be fair, the A380 was conceived before the 787, A350 and other, more fuel-efficient aircraft were available, but Airbus simply made the wrong decision about the future of air travel and was woefully optimistic in its forecasts: the company predicted in 2000 that 1,235 “very large aircraft” would be delivered from 2000 to 2019, but orders and deliveries of the A380 have been just 313 and 234, respectively, through last month. At the A380’s roughly $445 million list price, that shortfall of more than 900 aircraft against the forecast works out to a revenue miss of roughly $410 billion!

In my opinion, the A380’s demise boils down fundamentally to a single question: as a passenger, would you rather take one flight or two to get to your destination? Airbus seems to have answered that question with “two”, while a large proportion of the flying public and most airlines answered “one”.

In my own case, I would rather not make a connection through a large and busy airport if it’s at all possible to avoid it, and I will go out of my way – and pay more – to take a flight without connections. I realize that many people will opt for cheaper, connecting flights, but those flights carry some fairly high costs: for example, a dated study commissioned by the FAA estimated that, as of 2010, missed connections were costing passengers $1.5 billion each year.

The inconvenience of needing to make connections, as well as the lost productivity and opportunities that sometimes result, is not something that most business travelers, and many leisure travelers, are willing to accept. It’s one of the key reasons that we will see no new A380s produced after 2021.

Should You Rent or Buy Your Email and Productivity Apps?

Microsoft dominates the business email and desktop productivity markets. Over the past few years, the company has been pushing hard to move its user base for both to Office 365 and away from Exchange Server and desktop versions of Office. The push has intensified in recent months to the point where the company is now telling customers not just to adopt Office 365, but also not to use non-Office 365 solutions. For example, as noted in this article, the Microsoft corporate VP for the Office and Windows group said that the various applications in Office 2019 are “frozen in time. They don’t ever get updated with new features”. By contrast, Office 365 keeps “getting better over time, with new capabilities delivered every month.” It makes one wonder why Microsoft bothered to produce Office 2019, but that’s a subject for a different post.

Perhaps telling people not to use your products is the natural consequence of having such a dominant market share that the only competition left for your new and shiny products is your old and dull ones.

The key for decision makers, then, is to determine if the “new capabilities delivered every month” in Office, coupled with the reduced IT labor required to manage corporate email, is worth becoming a renter in perpetuity rather than a buyer.

To compare the costs of renting versus buying for a 50-person company, we looked at two competing systems:

  1. MDaemon Server (including MDaemon AntiVirus, MDaemon Connector for Outlook, MDaemon ActiveSync and MailStore email archiving) and Office 2019 Home & Business.
  2. Various flavors of Office 365 (Office 365 Business Premium, Office 365 Enterprise E3 and Office 365 Enterprise E5).

Using only publicly available pricing on the MDaemon, Office 365 and Amazon.com web sites, here’s the annual per-user pricing to support 50 users with business email and productivity applications over a three-year period:

  • MDaemon and Office 2019: $114.68 per user per year
  • Office 365 Business Premium: $150.00 per user per year
  • Office 365 Enterprise E3: $240.00 per user per year
  • Office 365 Enterprise E5: $420.00 per user per year
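
For readers who want the totals behind those per-user figures, here is a quick sketch that turns them into three-year, 50-user totals. It simply multiplies the published per-user prices listed above; it makes no attempt to model IT labor, hardware or migration costs.

```python
# Quick sketch: turn the per-user annual prices above into three-year totals
# for a 50-person company. Prices are the per-user-per-year figures from the
# comparison above; no IT labor, hardware or migration costs are modeled.

USERS = 50
YEARS = 3

annual_price_per_user = {
    "MDaemon + Office 2019": 114.68,
    "Office 365 Business Premium": 150.00,
    "Office 365 Enterprise E3": 240.00,
    "Office 365 Enterprise E5": 420.00,
}

for option, price in annual_price_per_user.items():
    total = price * USERS * YEARS
    print(f"{option:<30} ${total:,.2f} over {YEARS} years for {USERS} users")
```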

Of course, the primary advantage of any cloud-based solution is the reduction in IT labor realized from not having to manage on-premises infrastructure. But productivity applications don’t need significant levels of IT support, and most on-premises email solutions for small companies, as in our 50-user example, don’t either.

Please understand that this is not meant to disparage cloud-based solutions. Osterman Research is a strong proponent of the cloud for productivity solutions, CRM, security, archiving and a wide variety of other capabilities, and we are also a strong proponent of Office 365. But when making decisions, it’s important to understand where to rent and where to buy — buying is still not a bad business decision in some cases.