Is the Cloud Always Cheaper?

Office 365 and Exchange Online are good offerings – they provide useful functionality, a growing feature set, pretty decent uptime, and they’re relatively inexpensive. Microsoft, in this third major iteration of cloud services, has done a good job at offering a comprehensive set of applications and services. (We use Exchange Online internally and are quite pleased with it.)

From Microsoft’s perspective, the primary reason to move its customers to the cloud is to make more money. In 2015, Microsoft told Wall Street financial analysts that moving its customers from a “buy” model to a “rent” model would generate anywhere from 20 to 80 percent more revenue for the company. As evidence of how right Microsoft was, the company’s Office 365 revenue for the fourth quarter of 2017 exceeded its revenue from traditional licensing models.

From a customer perspective, one of the key reasons for migrating to Office 365 is to reduce the cost of ownership for email, applications and other functionality. Our cost modeling has demonstrated that this actually is the case.

So, Microsoft makes more money from the cloud, yet its customers spend less when migrating to it. On the surface, that doesn’t seem to make much sense until you realize that the cost savings come primarily from the labor customers no longer have to pay to manage an on-premises system, and from the hardware and software they no longer have to buy to maintain it, especially across refresh cycles.

But what if you’re a small organization that wasn’t spending much on labor because you have an easy-to-manage email server, for example, and your hardware requirements to run it are not significant? Let’s go through an example comparing Exchange Online Plan 1 with Alt-N Technologies’ MDaemon Messaging Server for a three-year period for a 50-user organization:

Exchange Online Plan 1

  • $4.00 per user per month
  • $7,200 for 50 users for three years

MDaemon Messaging Server (with priority support)

  • $2,433.04 initial cost, or $1.35 per user per month for three years

MDaemon Messaging Server (with priority support, Outlook Connector and ActiveSync)

  • $4,678.43 initial cost, or $2.60 per user per month for three years

So, the on-premises platform will save a 50-seat organization anywhere from $2,522 to $4,767 over a three-year period. If we assume that an on-premises email system like MDaemon could be managed by an IT tech making $35,839 per year (the national average for that position according to Glassdoor), the tech could work anywhere from 4.1 to 7.7 hours per month on the MDaemon infrastructure before its cost rose to that of Exchange Online Plan 1, although it’s unlikely that much of a time investment would be required. Of course, I have not factored in the cost of the hardware necessary to implement an on-premises email system, but most organizations already have that hardware on hand.
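The break-even arithmetic above can be sketched in a few lines of Python (the 2,080-hour work year used to convert the Glassdoor salary into an hourly rate is my assumption):

```python
# Three-year cost comparison for 50 users, using the figures above.
USERS, MONTHS = 50, 36

exchange_online = 4.00 * USERS * MONTHS         # $7,200 over three years
mdaemon_basic = 2433.04                         # priority support only
mdaemon_full = 4678.43                          # plus Outlook Connector and ActiveSync

savings_low = exchange_online - mdaemon_full    # about $2,522
savings_high = exchange_online - mdaemon_basic  # about $4,767

# Break-even admin time: how many hours per month an IT tech could spend
# on MDaemon before the savings are consumed by labor. Assumes a
# 2,080-hour work year to derive an hourly rate from the annual salary.
hourly_rate = 35839 / 2080                        # about $17.23/hour
hours_low = savings_low / MONTHS / hourly_rate    # about 4.1 hours/month
hours_high = savings_high / MONTHS / hourly_rate  # about 7.7 hours/month
```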

The point here is not to abandon consideration of Exchange Online or other cloud platforms, since they offer a number of important benefits and there are good reasons to go that route. But organizations that need to get the most bang for their buck will be well served to consider on-premises solutions, especially if their hardware and software refresh cycles are longer than three to four years. That’s especially true for desktop productivity applications like Word, Excel and PowerPoint, where the average refresh cycle is quite long (one survey found that Office 2010 remained the most popular version of Office in use five-and-a-half years after its release).


Automatic Monitoring of Key Systems

One of the problems that IT often has with business systems — especially those on which users or customers depend for real-time or near real-time interactions or transactions, such as email or eCommerce systems — is that users are often the “canary in the coal mine” for determining when a problem has occurred. For example, IT will often learn about an email outage only when there’s a spike in traffic to the corporate help desk, or calls to a help line will be the trigger that notifies IT that a customer-facing system has gone down or is delivering unacceptable performance.

dinCloud has introduced an interesting offering called “James”, which it touts as a virtual robot designed to monitor systems on a 24×7 basis. James is designed to monitor a wide variety of systems, such as eCommerce platforms, corporate email, databases and other systems that support business processes and workflows. The basic goal of James is to monitor systems continually for events like outages, system errors or performance that drops below a predetermined threshold, and then alert IT about the problem so that the issue can be rectified as quickly as possible. The example below, from dinCloud’s web site, shows the basics of how James works.

[Image: example of James monitoring a login, from dinCloud’s web site]

Although James can be used in any environment, it seems especially well-suited to smaller organizations that may not have the technical expertise or other resources needed to monitor key systems on a continual basis. dinCloud offers a turnkey approach for customers, helping them determine what to test and providing services around configuration and deployment of the system. James also supports a real-time dashboard that enables decision makers to keep an eye on system performance and receive alerts when problems are discovered.
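As a rough sketch of this style of continuous health checking (not dinCloud’s implementation; the endpoints, thresholds and function names here are hypothetical), a monitor can probe each system, time the response, and raise an alert on an error, a timeout or a response slower than a set threshold:

```python
import time
import urllib.request

# Illustrative threshold: alert when a page takes longer than this to load.
MAX_LATENCY_SECONDS = 3.0


def evaluate(name, status, elapsed):
    """Turn one probe result into an alert message, or None if healthy."""
    if status is None:
        return f"ALERT: {name} is unreachable"
    if status != 200:
        return f"ALERT: {name} returned HTTP {status}"
    if elapsed > MAX_LATENCY_SECONDS:
        return f"ALERT: {name} took {elapsed:.1f}s (limit {MAX_LATENCY_SECONDS:.0f}s)"
    return None


def probe(name, url):
    """Fetch a health endpoint, timing the request; unreachable maps to status None."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return evaluate(name, resp.status, time.monotonic() - start)
    except OSError:
        return evaluate(name, None, time.monotonic() - start)


# Hypothetical systems to watch; in practice this would run 24x7 on a
# schedule and push alerts to the help desk or an on-call rotation.
CHECKS = {
    "corporate email": "https://mail.example.com/health",
    "eCommerce storefront": "https://shop.example.com/health",
}


def run_once():
    """Probe every configured system and collect any alerts."""
    return [alert for name, url in CHECKS.items()
            if (alert := probe(name, url)) is not None]
```

A real deployment would add retries, alert deduplication and notification delivery, but the core loop is just this: probe, classify against thresholds, alert.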

While I’m not crazy about the name “James” as it applies to this offering (perhaps something like “Virtual System Monitoring Robot” would be more descriptive), I really do like what dinCloud is doing here. Downtime and poor system performance are the bane of online systems because even small glitches can create major problems. For example, an older study found that about 40 percent of US consumers will give up on a mobile shopping site that won’t load within three seconds, and a 2016 study found that unplanned downtime costs a large organization an average of nearly $8,900 per minute. Our own research finds that email outages of even just 10 minutes can create problems.

In an era of ransomware, DDoS attacks, hacking and other threats that can create significant levels of downtime in addition to the more traditional causes like server crashes or application faults, system monitoring should be high on every IT manager’s priority list.

Open Questions About the GDPR

The European Union’s General Data Protection Regulation (GDPR) will take effect on May 25, 2018. In short, the GDPR provides data subjects (i.e., anyone who resides in the EU) with new and enhanced rights over the way their personal data is collected, processed and transferred by data controllers and processors (i.e., anyone who possesses or manages data on EU residents). The GDPR requires organizations to implement significant data protection safeguards, regardless of their size or geographic location. You can read the full text of the GDPR here, as well as our recently published white paper and survey report on the subject here and here.

The goal of the GDPR is quite clear: to protect the privacy rights of EU residents and to ensure that they have a right to be forgotten by any organization that possesses data about them. However, there are some situations in which it is not yet clear which legal jurisdiction, and whose rights, should prevail. For example:

  • US organizations have an obligation to apply a legal hold on relevant data if they have a reasonable expectation that a legal action may be forthcoming. But what happens if some of the data that a company is obligated to hold includes data on an EU resident who has asked for that data to be expunged?
  • Broker-dealers and others under the jurisdiction of FINRA must retain various types of communications, such as communications between registered representatives and their clients. What if a client of that representative ends the relationship, but immediately wants his or her data to be deleted?
  • Manufacturers routinely keep customer information in support of warranties that they offer on their products. If a customer in the EU asks that all of their data be forgotten, does that relieve the manufacturer from their obligations to honor the warranty?
  • Will governments be permitted to retain data on visitors from the EU, such as the data provided on the embarkation forms that visitors are obligated to complete upon entry to a country, if those visitors ask that the data be deleted?

As with any new regulation there are always unanswered questions, unique situations that had not been contemplated when the regulation was written, and various unintended consequences — the GDPR is no different in that respect. What is different are the consequences of getting things wrong, which can include fines as high as €20 million ($23.7 million), or four percent of an organization’s annual revenue, whichever is higher. For a company with $1 billion in annual revenue, that would be a $40 million fine!

Will the EU impose such large fines shortly after the May 25, 2018 implementation of the GDPR? That’s an open question, but given the EU’s aggressive stance toward companies like Google and Facebook, my guess is that they will seek a test case to let everyone know that they mean business.