The Future of Computing is 40 Years Ago

The history of computing can be oversimplified as follows:

  • 1950s through the 1970s: Mainframes, in which massive computing and data storage resources were managed remotely in highly controlled data centers. Intelligence and data were highly centralized, accessed through dumb terminals.
  • 1980s through the 1990s: Client-server computing, in which intelligence and data moved to the endpoints of the network as CPU power and storage became dramatically less expensive.
  • 2000s: Cloud computing, in which much of the intelligence and data storage is moving back to highly controlled data centers, but with lots of intelligence and data still at the endpoints.

I believe the fourth major shift in computing will be a reversion to something approaching the mainframe model, in which the vast majority of computing power and data will reside in data centers that are under the tight control of cloud operators using both public and private cloud models.

Smartphones now have more computing power than most PCs did just a few years ago, albeit with much less storage capacity. While the smartphone does not provide corporate users with the form factor necessary to do writing, spreadsheets, presentations, etc. with the same ease that a desktop or laptop computer does, coupling a smartphone’s CPU horsepower with a monitor and keyboard that serve as a dumb terminal would provide the same experience as a desktop or laptop. As Robert X. Cringely proposed a couple of years ago, I believe that the corporate PC of the future will be a completely dumb terminal with no Internet connection or local storage. Instead, it will have only a monitor and keyboard and will use the smartphone in the corporate user’s pocket as its CPU and connectivity.

Why? Three reasons:

  • It will be more secure. Data breaches are an unfortunate and increasingly common fact of life for virtually every organization. Many are the result of simple mistakes, such as laptops being stolen out of cars or left behind at TSA checkpoints, but many others are the result of hacking into on-premises corporate servers that are insufficiently protected. A review of the most serious data breaches reveals that the vast majority have occurred from on-premises servers and other endpoints, not cloud providers. Yahoo!’s recent and massive data breach is the exception rather than the rule, since cloud data centers are typically more secure than those on-premises behind a corporate firewall.
  • It will be cheaper. Instead of providing a laptop and/or desktop computer to individual users, companies will be able to provide a much less expensive dumb terminal to their users that will use a smartphone’s intelligence and computing horsepower to provide the laptop or desktop computing experience transparently. Users will be able to sit down at any dumb terminal, authenticate themselves, and enjoy a laptop or desktop experience. Because storage will be in the cloud, there will be no local storage of data, reducing cost and enhancing security. And, if the dumb terminal is stolen, a company is out only a few hundred dollars, not the millions of dollars for which it might be liable if data is breached from a stolen or otherwise compromised device.
  • It will be more controllable. Instead of users having access to two, three or more computing devices, users can be equipped with just one corporate device, a smartphone, that will enable all of their computing experiences. When the employee leaves the company or loses their device, disabling access to corporate data will be easier and more reliable.

In short, the future of computing will be conceptually similar to what our parents and grandparents experienced: computing intelligence and data storage in some remote, secure location accessed by dumb devices (other than our smartphone).

What if North Korea had…

The recent cyberattack on Sony Pictures has been definitively linked to the government of North Korea, presumably in response to Sony’s upcoming release of the comedy The Interview. The US government said that North Korea was “centrally involved” in the attack, which has resulted in the leakage of several pre-release films, lots of embarrassing emails, and a variety of other content that Sony Pictures would rather not have had released – in total, up to 100 terabytes of data. North Korea upped the stakes following this cyberattack, threatening to create what amounted to another 9/11 if theatres showed the film. Clearly, Kim Jong-un does not have a sense of humor (or a good hair stylist).

The most recent result of this cyberattack, other than lots of apologies and hand-wringing from Sony executives, was the announcement by several major US theatre chains that they would not show The Interview, followed shortly thereafter by Sony’s cancellation of the $42 million film.

An attack on any major company is bad enough, even if the primary result is the cancellation of something as innocuous as a film. But what if North Korea had decided its target was the IT infrastructure of a major US utility, including its nuclear facilities? Black & Veatch published a report this year indicating that fewer than one-third of the electric utilities it surveyed have appropriate security systems with the “proper segmentation, monitoring and redundancies” necessary to deal with cybersecurity threats. How about if North Korea had decided to attack a major hospital network? One of the largest US hospital groups, Community Health Systems, was the victim of a Chinese cyberattack earlier this year, resulting in “only” the loss of data on 4.5 million patients. What about a North Korean cyberattack on the military? An investigation by the US Senate revealed that there were 50 successful hacking attempts against the US Transportation Command between May 2012 and May 2013. Serious and debilitating cyberattacks on utilities, healthcare providers and the military could make us long for “the good old days” when the result of a cyberattack was just the cancellation of a film.

What if it were your company? Have you taken precautions to prevent ransomware from infecting your users? The 500,000 victims of CryptoLocker weren’t so lucky. Are your users trained to detect phishing attempts and take appropriate action when they encounter them? Is your security infrastructure sufficient to detect and weed out malware, phishing attempts and other threats that could make you a Sony-like victim? Is your vendor’s threat intelligence protecting your organization sufficiently?

We have done a lot of research on security issues and will be launching another major survey just after the first of the year to find out just how prepared organizations really are.

The Importance of Good Authentication and Data Asset Management

Stories about the use of easy-to-guess passwords based on common words, consecutive numerical strings, or simply the word “password” are fairly common. Millions of users, in an effort to make their passwords easy to remember, fall prey to this problem; others write their passwords on sticky notes, never change them, or use the same password for multiple applications.

I wanted to see how just the strength of a password would affect its ability to be guessed by brute force using a PC, so I tried an online password strength checker. I am not affiliated with the host of the site or its sponsor, and so cannot vouch for the security of any content it manages. So, as a precaution, don’t use any site like this to test your actual passwords.

For the test, I chose five passwords: rabbit, rabbit9, rabbit99, rabbit99K and rabbit99K) – the trailing parenthesis is part of that last password. I ran each one through the checker and found the following estimates of the time required to guess it:

  • rabbit: a desktop PC could guess this password more or less instantly
  • rabbit9: 19 seconds
  • rabbit99: 11 minutes
  • rabbit99K: 39 days
  • rabbit99K): 58 years

Obviously, the longer and more complex the password, the longer it will take to guess it through brute force. Yhn-P9q9Km4-9UtQw)7*, for example, would require 425 quintillion years, according to the same checker.
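The arithmetic behind these estimates is easy to sketch: the search space is the character-set size raised to the password length, divided by an assumed guess rate. The following Python sketch uses an illustrative rate of two billion guesses per second and my own assumptions about which character set each password draws from; the checker's actual model is not known to me, so the absolute numbers will differ from those above.

```python
# Rough brute-force estimate: keyspace = charset_size ** length,
# divided by an assumed guess rate. The 2e9 guesses/second figure
# is an illustrative assumption, not the checker's actual model.
def crack_time_seconds(length, charset_size, guesses_per_sec=2e9):
    return charset_size ** length / guesses_per_sec

# Assumed character sets: 26 = lowercase only, 36 = lowercase + digits,
# 62 = mixed case + digits, 94 = full printable ASCII with symbols.
for pw, charset in [("rabbit", 26), ("rabbit9", 36), ("rabbit99", 36),
                    ("rabbit99K", 62), ("rabbit99K)", 94)]:
    print(f"{pw}: ~{crack_time_seconds(len(pw), charset):.3g} seconds")
```

Even with different rate assumptions, the ordering is the same: each added character multiplies the keyspace by the charset size, which is why the jump from rabbit99K to rabbit99K) is so dramatic.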

But strong passwords are just part of the security story. Organizations should undertake other steps, as well:

  • Use multi-factor authentication that will require, for example, the entry of a password and a code that a user receives on his or her smartphone.
  • Impose password expiration requirements at regular intervals that will require users to create a new password every so often. The more sensitive or critical the data asset or application that is being accessed, the more frequently that IT might want passwords to change.
  • Lock out inactive users after a certain number of days.
  • Implement strict lockout limits for sensitive data assets or applications, allowing only a small number of failed authentication attempts.
  • Don’t allow passwords to be reused.
  • Implement self-service password functionality, but only if two-factor authentication or similar controls are in place.
  • Employ risk-based authentication that imposes stricter requirements based on the sensitivity of the data assets being accessed, the location of those accessing them, the time of day they are being accessed, etc.
  • Finally, establish policies for the data assets that really need to be accessible online and what can/should be disconnected from the Internet.

These are all fairly simple steps that would go a long way toward improving corporate security.
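The strike-limit and inactivity-lockout rules from the list above can be sketched as a small policy object. The thresholds here (three failed attempts, 90 days of inactivity) are illustrative assumptions for the example, not recommendations:

```python
from datetime import datetime, timedelta

class AccountGuard:
    """Sketch of strike-limit and inactivity-lockout rules.
    Thresholds are illustrative assumptions, not recommendations."""
    MAX_FAILED_ATTEMPTS = 3
    INACTIVITY_LIMIT = timedelta(days=90)

    def __init__(self, last_login):
        self.failed_attempts = 0
        self.locked = False
        self.last_login = last_login

    def record_failed_login(self):
        # Strike limit: lock after too many consecutive failures.
        self.failed_attempts += 1
        if self.failed_attempts >= self.MAX_FAILED_ATTEMPTS:
            self.locked = True

    def record_successful_login(self, now):
        # Inactivity lockout: disable accounts dormant past the window.
        if now - self.last_login > self.INACTIVITY_LIMIT:
            self.locked = True
        if self.locked:
            return False
        self.failed_attempts = 0
        self.last_login = now
        return True
```

In practice these rules would live in a directory service or identity provider rather than application code, but the logic is the same: count consecutive failures, and treat long-dormant accounts as suspect until re-verified.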