Internal Combustion Engines, Critical Thinking and Making Good IT Decisions

Germany’s Spiegel magazine has reported that the German Bundesrat (Germany’s federal council, with representatives from all 16 German states) will ban the internal combustion engine beginning in 2030. Consequently, the only way to achieve this goal would be the mass adoption of electric cars to replace today’s cars, which are powered almost exclusively by internal combustion engines. This is a bigger issue in Germany than it would be in the United States, since there are significantly more cars per person in Germany than in the US.

Sounds like a good idea, but edicts passed down from senior managers are not always feasible, particularly when those managers might not have done the math to determine if their ideas can actually be implemented by those in the trenches. For example, here’s the math on the Bundesrat’s edict:

  • As of the beginning of 2015, there were 44.4 million cars in Germany. If we assume that the average German car is driven 8,900 miles per year and gets 30 miles to the gallon, each car consumes the equivalent of just under 10 megawatt-hours of electricity per year (based on one gallon of gasoline = 33.7 kWh).
  • Replacing all 44.4 million cars with electric vehicles would require generation of 443.9 terawatt-hours of electricity per year solely for consumption by automobiles (9.998 MWh per car x 44.4 million cars).
  • In 2015, Germany produced 559.2 terawatt-hours of electricity from all sources. That means that Germany would need to produce or import about 79% more electricity during the next 14 years than it does today. However, during the 13-year period from 2002 to 2015, German production of electricity increased by only 12%.
  • If the additional electricity needed for use by cars came from wind generators, it would require 64.5 million square miles of wind farms (based on an average of 93.0 acres per megawatt of electricity generated), an area that is 468 times larger than Germany’s footprint of 137,903 square miles.
  • If the additional energy came from solar, it would require 1.22 million square miles of solar panels (based on an optimistic assumption of 13 watts of electricity generated per square foot), an area about nine times larger than Germany.
  • If the additional energy came from nuclear power, Germany would need to build the equivalent of 13 high-capacity plants (assuming they have the capacity of the largest US nuclear plant, operating at Palo Verde, AZ).
  • Germany could use all of the oil it currently imports for automobiles for the production of electricity, but that would defeat the purpose of switching to electric cars.
  • Consequently, the only logical options to achieve a complete ban on the internal combustion engine by 2030 are a) build lots of new nuclear power plants to generate the electricity needed for electric cars, or b) reduce driving in Germany by at least 85%. But even the latter option would require substantially greater production of electricity in order to power the additional rail-based and other transportation systems needed to transport Germans who are no longer driving cars. Even if we assume the German government would phase in the abolition of the internal combustion engine over, say, 10-15 years following the 2030 deadline, there’s still the problem of producing 79% more electricity between now and 2040-2045.
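The per-car and aggregate figures above are easy to verify. Here is a minimal back-of-envelope check in Python. The mileage, fuel economy, gasoline energy content and 2015 generation numbers are the assumptions stated above; the Palo Verde output is my own approximation, derived from its roughly 3,937 MW nameplate capacity running continuously.

```python
import math

# Back-of-envelope check of the figures above. Mileage, fuel economy,
# gasoline energy content and 2015 generation are the stated assumptions;
# Palo Verde's annual output is approximated from its ~3,937 MW nameplate
# capacity running continuously (an assumption made here).
GALLON_KWH = 33.7                # kWh equivalent of one gallon of gasoline
CARS = 44.4e6                    # cars in Germany at the beginning of 2015
MILES_PER_YEAR = 8_900
MPG = 30
GERMANY_TWH_2015 = 559.2         # German electricity production in 2015, TWh
PALO_VERDE_TWH = 3.937 * 8.760   # GW x thousands of hours/year = ~34.5 TWh

kwh_per_car = MILES_PER_YEAR / MPG * GALLON_KWH    # just under 10 MWh per car
total_twh = kwh_per_car * CARS / 1e9               # ~443.9 TWh for all cars
extra_pct = total_twh / GERMANY_TWH_2015 * 100     # ~79% more generation needed
plants = math.ceil(total_twh / PALO_VERDE_TWH)     # ~13 Palo Verde-sized plants

print(f"{kwh_per_car:,.0f} kWh per car; {total_twh:.1f} TWh in total; "
      f"{extra_pct:.0f}% more electricity; {plants} large nuclear plants")
```

Running this reproduces the roughly 10 MWh per car, 443.9 TWh, 79% and 13-plant figures cited above.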

So, while converting to electric cars is a good idea in theory, in practice it is highly unlikely to happen in the timeframe mandated by the Bundesrat. In short, edicts from senior managers often fail because those managers never did the math or spoke to anyone in the trenches who would be responsible for making them happen.

The point of this post is not to criticize the German government or the notion of reducing the consumption of fossil fuels, but to suggest that critical thinking is needed in all facets of life. When someone proposes a new idea, be skeptical until you’ve done the math, thought about the consequences and considered the various ramifications of the proposal. For example, when senior management suggests your company move its email system completely to the cloud, think through all of the potential ramifications of that decision: Are there regulatory obligations you will no longer be able to satisfy? How much will it cost to rewrite all of the legacy, email-generating applications on which you currently rely? What will happen to your bandwidth requirements? How will you deal with disaster recovery? How will you manage security? What is the complete cost of managing email in the cloud versus the way you do it now?

Senior managers or boards of directors will sometimes implement policy or make other important decisions without first consulting those who actually need to make it happen. This means that senior management teams, task forces, boards of directors, etc. need to a) stop doing that, b) do the math for any decision they’re considering and c) consult with the people who will be charged with implementing their decisions.

The Future of Computing is 40 Years Ago

The history of computing can be oversimplified as follows:

  • 1950s through the 1970s: Mainframes, in which massive computing and data storage resources were managed remotely in highly controlled data centers. Intelligence and data were highly centralized, accessed through dumb terminals.
  • 1980s through the 1990s: Client-server computing, in which intelligence and data moved to the endpoints of the network as CPU power and storage became dramatically less expensive.
  • 2000s: Cloud computing, in which much of the intelligence and data storage is moving back to highly controlled data centers, but with lots of intelligence and data still at the endpoints.

I believe the fourth major shift in computing will be a reversion to something approaching the mainframe model, in which the vast majority of computing power and data will reside in data centers under the tight control of cloud operators using both public and private cloud models.

Smartphones now have more computing power than most PCs did just a few years ago, albeit with much less storage capacity. While the smartphone does not provide corporate users with the form factor necessary to do writing, spreadsheets, presentations and the like with the same ease that a desktop or laptop computer does, the combination of a smartphone’s CPU horsepower with a monitor and keyboard that serve as a dumb terminal would provide the same experience as a desktop or laptop. As proposed by Robert X. Cringely a couple of years ago, I believe that the corporate PC of the future will be a completely dumb terminal with no Internet connection or local storage. Instead, it will consist of only a monitor and keyboard and will use the smartphone in the corporate user’s pocket for its CPU and connectivity.

Why? Three reasons:

  • It will be more secure. Data breaches are an unfortunate and increasingly common fact of life for virtually every organization. Many are the result of simple mistakes, such as laptops being stolen out of cars or left behind at TSA checkpoints, but many others are the result of hacking into insufficiently protected, on-premises corporate servers. A review of the most serious data breaches reveals that the vast majority have occurred from on-premises servers and other endpoints, not cloud providers. Yahoo!’s recent and massive data breach is the exception rather than the rule, since cloud data centers are typically more secure than those on-premises behind a corporate firewall.
  • It will be cheaper. Instead of providing a laptop and/or desktop computer to individual users, companies will be able to provide a much less expensive dumb terminal to their users that will use a smartphone’s intelligence and computing horsepower to provide the laptop or desktop computing experience transparently. Users will be able to sit down at any dumb terminal, authenticate themselves, and enjoy a laptop or desktop experience. Because storage will be in the cloud, there will be no local storage of data, reducing cost and enhancing security. And, if the dumb terminal is stolen, a company is out only a few hundred dollars, not the millions of dollars for which it might be liable if data is breached from a stolen or otherwise compromised device.
  • It will be more controllable. Instead of users having access to two, three or more computing devices, users can be equipped with just one corporate device, a smartphone, that will enable all of their computing experiences. When the employee leaves the company or loses their device, disabling access to corporate data will be easier and more reliable.

In short, the future of computing will be conceptually similar to what our parents and grandparents experienced: computing intelligence and data storage in some remote, secure location accessed by dumb devices (other than our smartphone).

Dealing With Phishing and Next-Generation Malware (Part 2)

This is a continuation of my last post, which focused on ways that decision makers can address problems with phishing and next-generation malware:

Establish detailed and thorough policies: Most organizations have not yet established sufficiently detailed and thorough policies for the various types of email, Web and social media tools that their IT departments have deployed or that they allow to be used. Consequently, we recommend that an early step for any organization be to develop detailed policies that address all of the tools that are in use or are likely to be used in the foreseeable future. These policies should focus on legal, regulatory and other obligations to:

  • Encrypt emails and other content if they contain sensitive or confidential data.
  • Monitor all communications sent to blogs, social media and other venues for malware.
  • Control the use of personally owned devices that access corporate resources.

Creating these detailed and thorough policies will help decision makers not only determine how and why each tool is being used and should be used, but also which capabilities can or cannot be migrated to cloud-based security solutions and which should be retained in-house.

Implement best practices for user behavior: The next step is to implement a variety of best practices to address the security gaps that have been identified. For example:

  • Employees should use passwords whose strength matches the sensitivity and risk associated with their corporate data assets. These passwords should be changed on an enforced schedule and should be managed by IT.
  • Employees should be strongly encouraged and continually reminded to keep software and operating systems up to date in order to minimize the risk that a known exploit will infect a system with malware.
  • Employees should receive thorough training about phishing and other security risks in order to understand how to detect phishing attempts and to become more skeptical about suspicious emails and content. It is important to invest sufficiently in employee training so that the “human firewall” can provide the best possible initial line of defense against increasingly sophisticated phishing and other social engineering attacks.
  • Employees should be tested periodically to determine if their anti-phishing training has been effective.
  • Employees should be given training about best practices when connecting remotely, including the dangers of connecting to public Wi-Fi hot spots or other unprotected access points.
  • Employees need to be trained not to extract potentially suspicious content from spam quarantines, since these messages might turn out to be phishing emails.
  • Employees need to be given a list of acceptable and unacceptable tools to employ for file sync and share, social media and other capabilities as part of the overall acceptable use policies in place.
  • Employees should maintain robust anti-virus defenses on any personally managed platforms from which they will access corporate content.
  • Employees should be reminded continually about the dangers of oversharing content on social media. The world will not be a better place if it knows that you had breakfast in Cancun this morning, but it could give cybercriminals a piece of information they need to craft a spearphishing email.
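Some of the red flags that anti-phishing training teaches users to spot in a link can even be encoded as simple heuristics. The sketch below is purely illustrative: the suspicious-TLD list is an invented placeholder, and a real secure email gateway would rely on threat intelligence, reputation data and machine learning rather than rules like these.

```python
import ipaddress
import re

# Toy illustration of the red flags anti-phishing training teaches users to
# look for in a link. Not a production filter.
SUSPICIOUS_TLDS = {"zip", "top", "xyz"}   # example placeholder list

def phishing_red_flags(url: str) -> list[str]:
    """Return a list of human-readable warnings for a URL."""
    flags = []
    m = re.match(r"(?i)(https?)://([^/:]+)", url)
    if not m:
        return ["not an http(s) URL"]
    scheme, host = m.group(1).lower(), m.group(2).lower()
    if scheme != "https":
        flags.append("no TLS")
    try:
        ipaddress.ip_address(host)          # raises ValueError for hostnames
        flags.append("raw IP address instead of a domain name")
    except ValueError:
        pass
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode (possible lookalike characters)")
    if host.count(".") >= 4:
        flags.append("deeply nested subdomains")
    if host.rsplit(".", 1)[-1] in SUSPICIOUS_TLDS:
        flags.append("TLD frequently abused in phishing")
    return flags

print(phishing_red_flags("http://192.168.4.7/login"))   # two warnings
print(phishing_red_flags("https://www.example.com/account"))   # no warnings
```

In a training context, flagged links would be surfaced to the user as a teaching moment rather than silently blocked.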

Deploy alternatives to solutions that employees use today: Decision makers should seriously consider implementing tools that will replace many of the employee-managed solutions in place today, but that will provide users with the same convenience and ease of use. For example, IT may want to deploy an enterprise-grade file sync and share alternative to the consumer version of Dropbox that is so widely used today. They may want to implement a business continuity solution that will enable corporate email to be used during outages instead of users falling back on their personal Webmail accounts. They may want to consider deploying an enterprise-grade file-sharing system that accommodates very large files if the corporate email system does not allow these files to be sent.

Implement robust and layered security solutions based on good threat intelligence: It almost goes without saying that it is essential to implement a layered security infrastructure that is based on good threat intelligence. Doing so will minimize the likelihood that malware, hacking attempts, phishing attempts and the like will be able to penetrate corporate defenses.

An essential element of good security is starting with the human component. As we discussed above, users are the initial line of defense in any security system because they can thwart some potential incursions like phishing attempts before technology-based solutions have detected them. Consequently, we cannot overemphasize the importance of good and frequent user training to bolster this initial line of defense, the goal of which is to heighten users’ sensitivity to phishing and related threats, and to help users to be less gullible. By no means are we suggesting that users can be the only line of defense, but they should be incorporated into the overall security mix.

Determine if and how the cloud should be used: A critical issue for decision makers to address is whether or not internal management of security, as well as other parts of the IT infrastructure, is a core competency that is central to the success of the organization. Key questions that decision makers must answer are these:

  • Will our security improve if solutions remain on-premises?
  • Will security managed on-premises by in-house IT staff contribute more to the bottom line than using a cloud-based provider?
  • Should a hybrid approach, with both on-premises and cloud-based solutions, be used? If so, for which systems?

An important requirement in accurately evaluating cloud-based security solutions is for decision makers to understand the complete total cost of ownership of managing the current, on-premises infrastructure. Osterman Research has found consistently that many decision makers do not count all of these costs and are not confident in their estimates. If decision makers do not accurately understand what it costs their organization to provide a particular service to their users, the result is poorly informed decision-making, as well as an inability to determine the potential cost savings and return on investment from competing security solutions.
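As an illustration of how easily an incomplete TCO analysis can skew such a decision, here is a toy comparison. Every cost figure below is an invented placeholder, not a benchmark or an Osterman Research number.

```python
# Hypothetical total-cost-of-ownership comparison of the kind decision makers
# should run before choosing between on-premises and cloud-based security.
# All figures are invented placeholders for illustration only.

def annual_tco(items: dict[str, float]) -> float:
    """Sum of all annual cost line items, in dollars."""
    return sum(items.values())

on_prem = {
    "hardware amortized over 4 years": 20_000,
    "software licenses and maintenance": 35_000,
    "IT staff time (patching, upgrades, monitoring)": 60_000,
    "power, cooling, rack space": 8_000,
}

cloud = {
    "per-user subscription (500 users x $180/yr)": 90_000,
    "migration cost amortized over 4 years": 10_000,
    "residual IT staff time": 15_000,
}

print(f"on-premises: ${annual_tco(on_prem):,.0f}/yr  "
      f"cloud: ${annual_tco(cloud):,.0f}/yr")
```

Note that dropping the staff-time line from the on-premises side, a cost that is frequently undercounted, would flip the apparent winner, which is exactly the kind of distortion an incomplete TCO estimate produces.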

If you’d like to download our recently published white paper that explores these issues, you’re welcome to do so here.

Why Aren’t Cloud Vendors Pushing Encryption More?

Microsoft is currently embroiled in a major legal dispute with the US government. US prosecutors, seeking to gather evidence from a Microsoft cloud customer in a drug-related case, are asking Microsoft to turn over various customer records even though the data in question is held in an Irish data center. Microsoft has argued that the US government has gone too far with this request because the data is held in a foreign country and authorities in that country are not involved in gathering it. The government has argued that the case does not violate the sovereignty of a foreign state, since Microsoft can produce the requested data remotely without using its staff members in another country. The case, which started in 2013, has been escalating: Microsoft has refused, thus far, to turn over the data, and a number of companies (including AT&T and Apple) have filed friend-of-the-court briefs in support of Microsoft’s position.

Aside from a number of legal, ethical and political issues – as well as the big issue of how successful cloud computing can be in the future if any government can demand information from a data center in any other nation – this case underscores the importance of encrypting data in the cloud. For example, if Microsoft’s customers could encrypt data before it ever got to the company’s data centers, and if Microsoft did not have access to the keys needed to decrypt this content, requests for data from government or anyone else would be rendered moot. Of course, the US government in this case could have pushed the party whose data is being requested to provide the keys, but the important point for Microsoft is that it would have been only minimally involved in the case, if at all, since it would not have had the ability to produce the data. This presupposes that the US government could not crack the encryption that was employed, but that’s another matter.
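The hold-your-own-key model described above can be sketched in a few lines. The cipher here is a toy (a SHA-256 counter-mode keystream plus an HMAC tag, kept to Python's standard library purely for illustration); a real deployment would use a vetted implementation such as libsodium or AES-GCM. The division of responsibility is the point: the customer keeps the key, and the provider stores only ciphertext it cannot read or meaningfully produce in response to a subpoena.

```python
import hmac
import os
from hashlib import sha256

# Toy client-side encryption: the key never leaves the customer, so the
# provider stores only opaque ciphertext. Illustrative only -- use a vetted
# cryptographic library in practice.

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from key + nonce (counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(key, nonce, len(plaintext))))
    body = nonce + ct
    return body + hmac.new(key, body, sha256).digest()   # append integrity tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    body, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, body, sha256).digest()):
        raise ValueError("tampered ciphertext or wrong key")
    nonce, ct = body[:16], body[16:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

key = os.urandom(32)    # stays with the customer, never uploaded
blob = encrypt(key, b"privileged correspondence")
# The provider (and anyone subpoenaing it) sees only `blob`.
assert decrypt(key, blob) == b"privileged correspondence"
```

With this arrangement, a lawful request served on the provider can yield nothing more useful than the encrypted blob.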

Moreover, if the customers of cloud providers encrypted their data before it ever reached a provider’s data center, providers would gain the quite significant benefit of not being culpable if their customers’ data were hacked in a Sony-style incursion. Unlike the Sony situation, which has resulted in the publication of confidential emails, pre-release films and other confidential material, well-encrypted content could probably not be accessed by bad guys even if they had free run of the network. This would help cloud providers not only to avoid the substantial embarrassment of such a hacking incident (which, I believe, is inevitable for at least one or two major cloud providers during 2015), but also to avoid the consequences of violating the data breach laws that today exist in 92% of US states.

Cloud providers should be pushing hard for their customers to encrypt data, if for no other reason than it gets the providers off the hook for having to deal with subpoenas and the like for their customers’ content. In this case, for example, Microsoft could have avoided the brouhaha simply by being unable to turn over meaningful data to the government.

The bottom line: cloud providers should push hard for their customers to encrypt data where it’s possible to do so, and customers should be working to encrypt their content where they can.