Years ago I worked for Dr. John Ryan, a very bright man who is now a senior manager at Google. He would periodically mention in talks that fighter jets are getting lighter and faster over time, so much so that if you extrapolated their weight and speed far enough into the future, they would eventually weigh nothing and fly infinitely fast. He would then ask what that described…the answer was software.
I attended EMC World this week and came away reminded of that story. One of the key themes at this annual conference was “the third platform” – the growing movement toward lightweight applications and rapid application development focused on the needs of an increasingly mobile workforce and society. The first platform (mainframes) and second platform (client/server and Web) are still quite relevant, but the third platform – characterized by ever-faster development and lighter applications as we migrate toward the Internet of Things – represents the direction computing is moving, and rather quickly at that.
What are the implications?
- It means very rapid application development that integrates data, analytics and applications in a continuously evolving loop, producing new applications and updates in a matter of hours instead of the months or years that traditional software development requires.
- It means zero tolerance for downtime, since applications are updated on the fly rather than following the traditional model of bringing down a server, installing the update and then bringing it back up – or worse, having the server or the application break. (This point was driven home in one session that showed the healthcare.gov Web site and its downtime message, seen and enjoyed by millions.) That doesn’t mean servers will never go down in the third platform, only that the third platform is designed to operate with no scheduled downtime.
- It means that every company becomes a software company (sort of) in the model of Google or Facebook, designing applications for customers to use as an interface to services instead of the traditional customer service model.
- It means that data volumes increase exponentially as large volumes of rich data replace the text-based systems of the first and second platforms.
- It means a continued shift toward massive amounts of CPU power and very cheap storage, all of which is allocated dynamically based on the workloads that need to be addressed at the moment.
I was impressed by EMC’s approach at the conference in a couple of ways. First, the company today derives at least 95% of its business from the second platform. Some companies might wait until they were bleeding profusely before entertaining a shift to a new business model, but EMC seems to be reasonably proactive about moving its business away from its bread and butter. There’s something to be said for management that can not only read the handwriting on the wall but also heed its advice before it’s too late. Second, EMC was quite frank about where it has not done a good job. That may have been because it was talking to an analyst community that would have seen through fluffy platitudes anyway, but I got the impression that there is a new level of frankness on the part of the company’s management – quite refreshing for such a large company.
I was also impressed by EMC’s acquisition of DSSD, a seemingly well-funded, very stealthy, four-year-old startup focused on developing very high-speed flash memory arrays. I don’t know much about the company, and EMC was not overly forthcoming on the specs for its technology, but this certainly bears watching. GigaOM had a good article on DSSD last year that you can view here.
EMC, like all hardware companies, is making a somewhat painful set of transitions: most notably to the third platform and to a cloud-delivery model that often just means customers want to pay less for what they already have. On balance, EMC seems to be making the transition fairly well.