The Impact of the GDPR on Cloud Providers

We just published a new white paper on the European Union’s (EU’s) General Data Protection Regulation (GDPR) and will soon be publishing the results of the two surveys we conducted for that white paper.

In the second survey, we asked the following question: “Will your organization increase or decrease use of cloud technology as a result of the GDPR?” We found that 50 percent of respondents indicated their use of cloud technology would increase, 39 percent said there would be no change, 6 percent said they didn’t yet know, and only 5 percent said their use of the cloud would decrease. That tells us a few things:

  • Many decision makers are still unsure about how they’ll deal with the GDPR. A thorough reading of the regulation, as with most government rules, leaves room for interpretation. For example, if data on an EU resident is subject to a litigation hold in the United States and the EU resident exercises his or her right to be forgotten, should the data controller violate its obligations to retain the data or violate the GDPR? That uncertainty will lead many to seek the assistance of third parties, many of which will be cloud providers that have more expertise in dealing with these kinds of issues.
  • Many organizations will pass the buck to their cloud providers. Because many organizations — particularly smaller ones that can’t afford a team of GDPR-focused legal and compliance experts — are simply not sure how to deal with the GDPR, they will rely increasingly on cloud providers that they anticipate/expect/hope will navigate the intricacies of the GDPR on their behalf. We believe that will accelerate the replacement of on-premises solutions with those based in the cloud.
  • Consequently, the choice of cloud providers will become extremely important. Because a cloud provider that inadvertently violates key provisions of the GDPR while working on behalf of its clients will not shield those clients from prosecution, GDPR savvy will become a top priority when selecting new cloud providers or staying with existing ones.
  • The new ePrivacy Regulation that will supplement or replace key provisions of the GDPR will impose significant usability restrictions on even simple activities like web surfing. For example, it is very likely that web site visitors will need to grant permission for each and every cookie dropped into their browser, yet the web site operator will not simply be able to block content for users who do not grant permission. This will make the choice of a web host extremely important for complying with both the GDPR and the ePrivacy Regulation.

In short, while the GDPR increases privacy protections for individual users in the EU, it also increases the risk for those that wish to provide content to them. Many companies, particularly smaller ones, will seek to mitigate that risk by handing it off to cloud providers.

You can download our newest GDPR white paper here, and get more information on the ePrivacy Regulation here and here.

BYOD OK?

We have recently completed a survey of IT decision makers who are knowledgeable about security issues in their organizations, and we found something surprising: concern about “shadow IT” — employee use of unauthorized cloud apps or services — is significantly lower in this year’s survey than it was just over a year ago. While there can be variability between surveys because of sampling and other issues, the difference we found is not explained by sampling variability; instead, it represents a significant shift of concern away from the problem of shadow IT and BYOD/C/A (Bring Your Own Devices/Cloud/Applications).

Why?

Three theories:

  • First, we have not seen big, headline-grabbing data breaches result from the use of personally owned and managed smartphones, tablets, laptops, cloud applications and mobile applications. While these breaches occur and clearly are a problem, the horror stories that were anticipated from the use of these devices have been few and far between.
  • Second, senior management — both in IT and in lines of business — has seemingly acquiesced to the notion of employees using their own devices. These managers realize that stopping employees from using their own devices to access work-related resources is a bit like controlling ocean surf with a broom.
  • Third, there are some advantages that businesses can realize from employees using their own devices. Lower cost is an important one, because IT doesn’t have to purchase devices for some employees; another important benefit is that IT doesn’t have to manage those devices, either. For example, when an employee leaves a company and company-supplied devices need to be deactivated, some organizations aren’t exactly sure who’s responsible for doing so — IT, the employee’s manager, HR or someone else. A survey we conducted some time back asked, “When an employee who had a company-supplied mobile phone leaves your employment, how confident are you that you are not still paying for their mobile service?” We found that only 43 percent of respondents were “completely confident” that the mobile service was deactivated, and 11 percent either were “not really sure” or just didn’t know. Having employees use their own devices and plans gets around this problem nicely.

To be sure, unfettered and unmanaged use of employee devices in the workplace is not a good idea. It can lead to a number of problems: IT cannot know where all of a company’s data is stored, cannot properly archive that data, and cannot produce all of it during an eDiscovery effort or a regulatory audit; duplicate data proliferates; no authoritative record for corporate data is established; a lost device is more likely to lead to a data breach; and the organization may be unable to satisfy its regulatory obligations.

That last point is particularly important, especially in the context of the European Union’s General Data Protection Regulation (GDPR). A key element of the GDPR is a data subject’s “right to be forgotten”, which translates to a data holder’s obligation to find and expunge all data it has on a data subject. If an organization cannot first determine all of the data it holds on a data subject and then cannot find all of that data, it runs the risk of violating the GDPR and can pay an enormous penalty as a result.
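
The mechanics behind that obligation are worth spelling out. Here is a minimal sketch in Python — the registry, store names and methods below are hypothetical, purely for illustration — showing why an erasure request can only be honored for data stores the organization actually knows about; anything copied to an unmanaged BYOD device never appears in the loop:

```python
# Hypothetical sketch of honoring a "right to be forgotten" request.
# All names here are illustrative, not a real API.

class DataStore:
    """One repository (CRM, mail archive, HR system, etc.) holding personal data."""
    def __init__(self, name: str):
        self.name = name
        self.records = {}  # subject_id -> list of records

    def expunge(self, subject_id: str) -> int:
        """Delete everything held on a subject; return the number of records removed."""
        return len(self.records.pop(subject_id, []))

# The organization can only erase from stores it has inventoried.
REGISTERED_STORES = [DataStore("crm"), DataStore("mail_archive"), DataStore("hr_system")]

def handle_erasure_request(subject_id: str) -> dict:
    """Find and expunge a subject's data, returning an audit trail per store.
    Data sitting on an unregistered BYOD device is invisible to this loop."""
    return {store.name: store.expunge(subject_id) for store in REGISTERED_STORES}

print(handle_erasure_request("subject-123"))
```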

In short, BYOD/C/A offers a number of important advantages, but it carries with it some serious risks and should be addressed as a high priority issue in any organization.

Internal Combustion Engines, Critical Thinking and Making Good IT Decisions

Germany’s Spiegel magazine has reported that the German Bundesrat (Germany’s federal council, with representatives from all 16 German states) will ban the internal combustion engine beginning in 2030. If the ban takes effect, the only way to achieve the goal would be en masse adoption of electric cars to replace today’s cars, which are powered almost exclusively by internal combustion engines. This is a bigger issue in Germany than it would be in the United States, since there are significantly more cars per square mile in Germany than in the US.

Sounds like a good idea, but edicts passed down from senior managers are not always feasible, particularly when those managers might not have done the math to determine whether their ideas can actually be implemented by those in the trenches. For example, here’s the math on the Bundesrat’s edict (the core arithmetic is also reproduced in a short script after the list):

  • As of the beginning of 2015, there were 44.4 million cars in Germany. If we assume that the average German car is driven 8,900 miles per year and gets 30 miles to the gallon, each car consumes the equivalent of just under 10 megawatt-hours of electricity per year (based on one gallon of gasoline = 33.7 kWh).
  • Replacing all 44.4 million cars with electric vehicles would require generation of 443.9 terawatt-hours of electricity per year solely for consumption by automobiles (9.998 MWh per car x 44.4 million cars).
  • In 2015, Germany produced 559.2 terawatt-hours of electricity from all sources. That means that Germany would need to produce or import about 79% more electricity during the next 14 years than it does today. However, during the 13-year period from 2002 to 2015, German production of electricity increased by only 12%.
  • If the additional electricity needed for use by cars came from wind generators, it would require 64.5 million square miles of wind farms (based on an average of 93.0 acres per megawatt of electricity generated), an area that is 468 times larger than Germany’s footprint of 137,903 square miles.
  • If the additional energy came from solar, it would require 1.22 million square miles of solar panels (based on an optimistic assumption of 13 watts of electricity generated per square foot), an area about nine times larger than Germany.
  • If the additional energy came from nuclear power, Germany would need to build the equivalent of 13 high-capacity plants (assuming they have the capacity of the largest US nuclear plant, operating at Palo Verde, AZ).
  • Germany could divert all of the oil it currently imports for automobiles to the production of electricity, but that would defeat the purpose of switching to electric cars.
  • Consequently, the only logical options for achieving a complete ban on the internal combustion engine by 2030 are a) build lots of new nuclear power plants to generate the electricity needed for electric cars, or b) reduce driving in Germany by at least 85%. But even the latter option would require substantially greater production of electricity in order to power the additional rail-based and other transportation systems needed to transport Germans who are no longer driving cars. Even if we assume the German government would phase in the abolition of the internal combustion engine over, say, 10-15 years following the 2030 deadline, there’s still the problem of producing 79% more electricity between now and 2040-2045.
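
For readers who want to check the figures, here is the core of the arithmetic as a short Python script. It reproduces only the consumption and generation numbers from the list above; the wind, solar and nuclear estimates depend on additional siting and capacity assumptions that are not modeled here:

```python
# Reproducing the back-of-the-envelope arithmetic using the figures cited above.

cars = 44.4e6            # cars in Germany at the beginning of 2015
miles_per_year = 8_900   # assumed average annual mileage per car
mpg = 30                 # assumed fuel economy, miles per gallon
kwh_per_gallon = 33.7    # energy content of one gallon of gasoline

kwh_per_car = miles_per_year / mpg * kwh_per_gallon
print(f"Per car: {kwh_per_car / 1000:.3f} MWh/year")   # ~9.998 MWh

fleet_twh = kwh_per_car * cars / 1e9
print(f"Fleet total: {fleet_twh:.1f} TWh/year")        # ~443.9 TWh

generation_2015 = 559.2  # TWh produced by Germany in 2015, all sources
print(f"Additional generation needed: {fleet_twh / generation_2015:.0%}")  # ~79%
```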

So, while converting to electric cars is a good idea in theory, in practice it is highly unlikely to happen in the timeframe mandated by the Bundesrat. In short, edicts from senior managers often can’t happen because these managers never did the math or spoke to anyone in the trenches who would be responsible for trying to make it happen.

The point of this post is not to criticize the German government or the notion of reducing the consumption of fossil fuels, but instead to suggest that critical thinking is needed in all facets of life. When someone proposes a new idea, be skeptical until you’ve done the math and thought through the consequences and ramifications of the proposal. For example, when senior management suggests your company move its email system completely to the cloud, think through all of the potential ramifications of that decision. Are there regulatory obligations we will no longer be able to satisfy? How much will it cost to rewrite all of the legacy, email-generating applications on which we currently rely? What will happen to our bandwidth requirements? How will we deal with disaster recovery? How do we manage security? What is the complete cost of managing email in the cloud versus the way we do it now?

Senior managers and boards of directors will sometimes implement policies or make other important decisions without first consulting those who actually need to make them happen. This means that senior management teams, task forces, boards of directors and the like need to a) stop doing that, b) do the math for any decision they’re considering, and c) consult with the people who will be charged with implementing their decisions.


The Future of Computing is 40 Years Ago

The history of computing can be oversimplified as follows:

  • 1950s through the 1970s: Mainframes, in which massive computing and data storage resources were managed remotely in highly controlled data centers. Intelligence and data were highly centralized, accessed through dumb terminals.
  • 1980s through the 1990s: Client-server computing, in which intelligence and data moved to the endpoints of the network as CPU power and storage became dramatically less expensive.
  • 2000s to the present: Cloud computing, in which much of the intelligence and data storage is moving back to highly controlled data centers, but with lots of intelligence and data still at the endpoints.

I believe the fourth major shift in computing will be a reversion to something approaching the mainframe model, in which the vast majority of computing power and data will reside in data centers that are under the tight control of cloud operators using both public and private cloud models.

Smartphones now have more computing power than most PCs did just a few years ago, albeit with much less storage capacity. While the smartphone does not provide corporate users with the form factor necessary to do writing, spreadsheets, presentations, etc. with the same ease that a desktop or laptop computer does, a smartphone’s CPU horsepower coupled with a monitor and keyboard serving as a dumb terminal would provide the same experience as a desktop or laptop. As Robert X. Cringely proposed a couple of years ago, I believe that the corporate PC of the future will be a completely dumb terminal with no Internet connection or local storage of its own. Instead, it will have only a monitor and keyboard and will use the smartphone in the corporate user’s pocket for its CPU and connectivity.

Why? Three reasons:

  • It will be more secure. Data breaches are an unfortunate and increasingly common fact of life for virtually every organization. Many are the result of simple mistakes, such as laptops being stolen out of cars or left behind at TSA checkpoints, but many others are the result of hacking into insufficiently protected, on-premises corporate servers. A review of the most serious data breaches reveals that the vast majority have occurred from on-premises servers and other endpoints, not from cloud providers. Yahoo!’s recent and massive data breach is more exception than rule, since cloud data centers are typically more secure than those on-premises behind a corporate firewall.
  • It will be cheaper. Instead of providing a laptop and/or desktop computer to individual users, companies will be able to provide a much less expensive dumb terminal to their users that will use a smartphone’s intelligence and computing horsepower to provide the laptop or desktop computing experience transparently. Users will be able to sit down at any dumb terminal, authenticate themselves, and enjoy a laptop or desktop experience. Because storage will be in the cloud, there will be no local storage of data, reducing cost and enhancing security. And, if the dumb terminal is stolen, a company is out only a few hundred dollars, not the millions of dollars for which it might be liable if data is breached from a stolen or otherwise compromised device.
  • It will be more controllable. Instead of having access to two, three or more computing devices, users can be equipped with just one corporate device — a smartphone — that will enable all of their computing experiences. When an employee leaves the company or loses that device, disabling access to corporate data will be easier and more reliable.

In short, the future of computing will be conceptually similar to what our parents and grandparents experienced: computing intelligence and data storage in some remote, secure location accessed by dumb devices (other than our smartphone).


Dealing With Phishing and Next-Generation Malware (Part 2)

This is a continuation of my last post, which focused on ways that decision makers can address problems with phishing and next-generation malware:

Establish detailed and thorough policies: Most organizations have not yet established sufficiently detailed and thorough policies for the various types of email, Web and social media tools that their IT departments have deployed or that they allow to be used. Consequently, we recommend that an early step for any organization should be the development of detailed and thorough policies that are focused on all of the tools that are or probably will be used in the foreseeable future. These policies should focus on legal, regulatory and other obligations to:

  • Encrypt emails and other content if they contain sensitive or confidential data.
  • Monitor all communications sent to blogs, social media and other venues for malware.
  • Control the use of personally owned devices that access corporate resources.
Creating detailed and thorough policies will help decision makers not only to determine how and why each tool is being used and should be used, but also to determine which capabilities can or cannot be migrated to cloud-based security solutions and which should be retained in-house.

Implement best practices for user behavior: The next step is to implement a variety of best practices to address the security gaps that have been identified. For example:

  • Employees need to employ passwords whose strength matches the sensitivity of, and risk associated with, their corporate data assets. These passwords should be changed on an enforced schedule and should be managed by IT (a toy illustration of such tiered requirements follows this list).
  • Employees should be strongly encouraged and continually reminded to keep software and operating systems up-to-date to minimize a known exploit from infecting a system with malware.
  • Employees should receive thorough training about phishing and other security risks in order to understand how to detect phishing attempts and to become more skeptical about suspicious emails and content. It is important to invest sufficiently in employee training so that the “human firewall” can provide the best possible initial line of defense against increasingly sophisticated phishing and other social engineering attacks.
  • Employees should be tested periodically to determine if their anti-phishing training has been effective.
  • Employees should be given training about best practices when connecting remotely, including the dangers of connecting to public Wi-Fi hot spots or other unprotected access points.
  • Employees need to be trained on why they should not extract potentially suspicious content — which might turn out to be phishing emails — from spam quarantines.
  • Employees need to be given a list of acceptable and unacceptable tools to employ for file sync and share, social media and other capabilities as part of the overall acceptable use policies in place.
  • Employees should maintain robust anti-virus defenses on their personally managed platforms if they will access any corporate content from those platforms.
  • Employees should be reminded continually about the dangers of oversharing content on social media. The world will not be a better place if it knows that you had breakfast in Cancun this morning, but it could give cybercriminals a piece of information they need to craft a spearphishing email.
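
As a toy illustration of the first bullet above — the tiers, lengths and rotation windows here are hypothetical placeholders, not recommendations — a simple policy check might map data sensitivity to password requirements:

```python
# Hypothetical sketch: password rules scaled to data sensitivity.
from datetime import date, timedelta

POLICY = {  # sensitivity tier -> (minimum length, maximum password age in days)
    "public":       (8, 365),
    "confidential": (12, 90),
    "restricted":   (16, 30),
}

def password_compliant(tier: str, password: str, last_changed: date) -> bool:
    """True if the password meets the tier's length and rotation rules."""
    min_len, max_age = POLICY[tier]
    fresh_enough = date.today() - last_changed <= timedelta(days=max_age)
    return len(password) >= min_len and fresh_enough

# A 'restricted' account whose password is 45 days old fails the 30-day rule.
print(password_compliant("restricted", "correct-horse-battery",
                         date.today() - timedelta(days=45)))
```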

Deploy alternatives to solutions that employees use today: Decision makers should seriously consider implementing tools that will replace many of the employee-managed solutions in place today, but that will provide users with the same convenience and ease of use. For example, IT may want to deploy an enterprise-grade file sync and share alternative to the consumer version of Dropbox that is so widely used today. It may want to implement a business continuity solution that will enable corporate email to be used during outages instead of users falling back on their personal Webmail accounts. It may want to consider deploying an enterprise-grade file-sharing system that accommodates very large files if the corporate email system does not allow these files to be sent.

Implement robust and layered security solutions based on good threat intelligence: It almost goes without saying that it is essential to implement a layered security infrastructure that is based on good threat intelligence. Doing so will minimize the likelihood that malware, hacking attempts, phishing attempts and the like will be able to penetrate corporate defenses.

An essential element of good security is starting with the human component. As we discussed above, users are the initial line of defense in any security system because they can thwart some potential incursions like phishing attempts before technology-based solutions have detected them. Consequently, we cannot overemphasize the importance of good and frequent user training to bolster this initial line of defense, the goal of which is to heighten users’ sensitivity to phishing and related threats, and to help users to be less gullible. By no means are we suggesting that users can be the only line of defense, but they should be incorporated into the overall security mix.

Determine if and how the cloud should be used: A critical issue for decision makers to address is whether or not internal management of security, as well as other parts of the IT infrastructure, is a core competency that is central to the success of the organization. Key questions that decision makers must answer are these:

  • Will our security improve if solutions remain on-premises?
  • Will security managed on-premises by in-house IT staff contribute more to the bottom line than using a cloud-based provider?
  • Should a hybrid approach with both on-premises and cloud-based security solutions be used? If so, for which systems?

An important requirement for accurately evaluating the use of cloud-based security solutions is for decision makers to understand the complete total cost of ownership (TCO) of managing the current, on-premises infrastructure. Osterman Research has found consistently that many decision makers do not fully count all of these costs and are not confident in their estimates. If decision makers do not accurately understand what it costs their organization to provide a particular service to their users, the result is poorly informed decision making and an inability to determine the potential cost savings and return on investment from competing security solutions. A toy comparison along these lines is sketched below.
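
To illustrate the point — every category and dollar figure below is a hypothetical placeholder, not a benchmark — a complete comparison has to count the easily forgotten line items like staff time, power and upgrades:

```python
# Toy TCO comparison with hypothetical annual costs.

on_premises = {
    "hardware_amortized":   40_000,
    "software_licenses":    60_000,
    "it_staff_time":       120_000,  # frequently the undercounted item
    "power_cooling_space":  15_000,
    "upgrades_patching":    25_000,
}

cloud = {
    "subscription_fees":   150_000,
    "migration_amortized":  20_000,
    "it_staff_time":        30_000,  # residual administration doesn't vanish
}

def tco(costs: dict) -> int:
    """Sum all annual cost categories for one deployment model."""
    return sum(costs.values())

print(f"On-premises: ${tco(on_premises):,}/year")
print(f"Cloud:       ${tco(cloud):,}/year")
print(f"Difference:  ${tco(on_premises) - tco(cloud):,}/year")
```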

If you’d like to download our recently published white paper that explores these issues, you’re welcome to do so here.

Why Aren’t Cloud Vendors Pushing Encryption More?

Microsoft is currently embroiled in a major legal dispute with the US government. US prosecutors, seeking to gather evidence from a Microsoft cloud customer in a drug-related case, are asking Microsoft to turn over various customer records even though the data in question is held in an Irish data center. Microsoft has argued that the US government has gone too far with this request because the data is held in a foreign country and authorities in that country are not involved in gathering it. The government has argued that the case does not violate the sovereignty of a foreign state, since Microsoft can produce the requested data remotely without using its staff members in another country. The case, which started in 2013, has been escalating: Microsoft has thus far refused to turn over the data, and a number of companies (including AT&T and Apple) and others have filed friend-of-the-court briefs in support of Microsoft’s position.

Aside from a number of legal, ethical and political issues – as well as the big issue of how successful cloud computing can be in the future if any government can demand information from a data center in any other nation – this case underscores the importance of encrypting data in the cloud. For example, if Microsoft’s customers could encrypt data before it ever reached the company’s data centers, and if Microsoft did not hold the keys needed to decrypt this content, requests for data from governments or anyone else would be rendered largely moot. Of course, the US government in this case could have pushed the party whose data was being requested to provide the keys, but the important point for Microsoft is that it would have been only minimally involved, if at all, since it would not have had the ability to produce the data. This presupposes that the US government could not crack the encryption that was employed, but that’s another matter.
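
To make the model concrete, here is a minimal sketch of “encrypt before upload” using Python’s cryptography package (the upload function is a placeholder, not a real provider API). The key never leaves the customer, so the provider holds only ciphertext it cannot decrypt:

```python
from cryptography.fernet import Fernet

# The customer generates and keeps the key; it is never sent to the provider.
key = Fernet.generate_key()
cipher = Fernet(key)

document = b"Confidential customer records"

# Encrypt locally, before anything leaves the customer's premises.
ciphertext = cipher.encrypt(document)

def upload_to_provider(blob: bytes) -> None:
    """Placeholder for whatever storage API is in use; the provider
    receives only opaque ciphertext."""
    print(f"Stored {len(blob)} opaque bytes")

upload_to_provider(ciphertext)

# Only the key holder can recover the plaintext.
assert cipher.decrypt(ciphertext) == document
```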

Moreover, if the customers of cloud providers encrypted their data before it ever reached a provider’s data center, providers would gain the quite significant benefit of not being culpable if their customers’ data were hacked in a Sony-style incursion. Unlike the Sony situation, which has resulted in the publication of confidential emails, pre-release films and other confidential material, well-encrypted content could probably not be accessed by bad guys even if they had free run of the network. This would help cloud providers not only to avoid the substantial embarrassment of such a hacking incident (which, I believe, is inevitable for at least one or two major cloud providers during 2015), but also to avoid the consequences of violating the data breach laws that today exist in 92% of US states.

Cloud providers should be pushing hard for their customers to encrypt data, if for no other reason than it gets the providers off the hook for having to deal with subpoenas and the like for their customers’ content. In this case, for example, Microsoft could have avoided the brouhaha simply by being unable to turn over meaningful data to the government.

The bottom line: cloud providers should push hard for their customers to encrypt data where it’s possible to do so, and customers should be working to encrypt their content where they can.