Is BlackBerry Dead in the Water?

A blog post from yesterday asks the question, “Would you say that BlackBerry is pretty much dead in the water at this point or is there hope left for the struggling Canadian company?”

The question is a good one. In the first quarter of 2009, BlackBerry had 55.3 percent of the US smartphone market and 20.1 percent of the global smartphone OS market; by the last quarter of 2016, BlackBerry’s share of global smartphone sales had fallen to 0.048 percent. The company’s revenues fell from a peak of $19.91 billion in FY2011 to $2.16 billion in FY2016. Its operating income and net income have been in negative territory since FY2013. Its stock price went from $138.87 on April 30, 2008 to $7.45 as of today. In September of last year, BlackBerry stopped making its own phones.

So, yes, a case can be made that BlackBerry is “dead in the water” or very nearly so.

However, I believe that 2017 and 2018 will see a modest resurgence of the company, albeit not to levels that we saw before the iPhone and Android devices began eating BlackBerrys for lunch. Here’s why:

  • BlackBerry isn’t really a smartphone company anymore; it is transforming itself into a software and cyber security company. If it succeeds, margins in the 30 percent range will give way to margins in the 70 percent range, and the company’s financial results are at least hinting that margins are moving in the right direction.
  • BlackBerry still has a very good security architecture for mobile devices, one that many decision makers should (and, I believe, will) seriously consider as mobile devices increasingly access sensitive corporate applications and data repositories. BlackBerry’s DTEK technology offers robust user control over privacy, and that will be important to many enterprise decision makers.
  • While BlackBerry’s market share in the US and many other markets is really, really poor, the company is still doing reasonably well in places like Indonesia and in some key verticals, such as financial services. For example, a major US bank has standardized on BlackBerry mobile technology, as has HSBC, among others.
  • BlackBerry is increasingly focused on markets that are quite far afield from its traditional phone business. For example, BlackBerry Radar, the company’s first IoT application, is designed for asset tracking and is currently in use by a major Canadian trucking firm. BlackBerry QNX, a real-time operating system focused on the embedded systems market, is currently used in 60 million cars worldwide (and replaced Microsoft’s platform as the basis for Ford SYNC). BlackBerry also has some interesting and innovative solutions focused on addressing enterprise BYOD/C/A concerns.

The bottom line is that BlackBerry is nowhere near out of the woods, but is definitely showing signs of life. John Chen has done a good job at starting to turn the company around, there is promise in several of BlackBerry’s key markets, and the company has a decent base of working capital. I have some confidence that in a couple of years BlackBerry will see something of a resurgence.

The (Sometimes Dangerous) Power of Perception

I had a conversation this morning with someone who suggested I join a customer advisory board. He recommended it, in part, over a board of directors because, as he put it, the latter takes more in-person time and “it’s difficult to get to other places from Seattle. For example, it would be difficult to get to a place like Omaha.”

This individual’s perception about getting to and from Seattle was right — perhaps 15 to 20 years ago — but that’s no longer the case. For example, I fly Alaska Airlines for most of my business travel and to about 98 percent of the places I travel in the US, Alaska has a direct flight. Plus, in the 26+ years I have been flying Alaska, I have had only three connecting flights — twice to Orlando and once coming back from Las Vegas. That’s three flights out of my too-numerous-to-count flights on Alaska in more than 26 years!

The perception of Seattle as a distant outpost is shared by many, particularly NFL commentators who will periodically tell viewers about the difficulty encountered by teams coming “all the way out” to Seattle. But looking at actual data reveals that for the Jets or Giants to visit the Seahawks they would fly 146 fewer miles than if they were visiting the 49ers. If the Patriots visited the Seahawks, they’d fly 115 fewer miles than if they visited the Rams.
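
For anyone inclined to check claims like these, the arithmetic is easy to reproduce. Below is a minimal Python sketch using the haversine great-circle formula. The airport pairings are my assumption (teams may well fly from other airports), so the exact differences will vary slightly from the figures above, but the conclusion that Seattle is the closer destination holds either way.

    import math

    def haversine_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in statute miles."""
        r = 3958.8  # mean Earth radius in miles
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Approximate airport coordinates (latitude, longitude). Which airports
    # the teams actually fly from is my assumption, so exact mileages will
    # differ slightly from the figures cited above.
    AIRPORTS = {
        "EWR": (40.6895, -74.1745),   # Newark (Jets/Giants)
        "BOS": (42.3656, -71.0096),   # Boston (Patriots)
        "SEA": (47.4502, -122.3088),  # Seattle (Seahawks)
        "SFO": (37.6213, -122.3790),  # San Francisco (49ers)
        "LAX": (33.9416, -118.4085),  # Los Angeles (Rams)
    }

    for origin, seattle, california in [("EWR", "SEA", "SFO"), ("BOS", "SEA", "LAX")]:
        d_sea = haversine_miles(*AIRPORTS[origin], *AIRPORTS[seattle])
        d_cal = haversine_miles(*AIRPORTS[origin], *AIRPORTS[california])
        print(f"{origin}: {d_sea:,.0f} mi to {seattle}, {d_cal:,.0f} mi to "
              f"{california}; Seattle is {d_cal - d_sea:,.0f} mi closer")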

So, perception is often wrong, and it has consequences. Much more serious than the misperception of Seattle as somewhere out past Siberia is the perception by many that the cloud is less secure than on-premises solutions. For example, you can read about the “insecurity” of the cloud, or decision makers’ perception of its insecurity, here, here, here, here and here. However, an examination of the biggest and most damaging breaches of highly sensitive or confidential data over the past several years reveals that the vast majority were exfiltrations of data from on-premises systems, not systems in the cloud. Even as far back as 2012, the Alert Logic Fall 2012 State of Cloud Security Report noted that users of service provider solutions experienced fewer than half as many security incidents as users of on-premises systems. More recently, Infor concluded that, “Cloud vendors typically offer a much higher level of data center and virtual system security than most organizations can or will build out on their own.”

While on-premises solutions can be highly secure, data stored in the cloud is generally more so. Cloud providers enjoy economies of scale in rolling out security capabilities that most organizations with on-premises systems cannot achieve. The cost of security for cloud providers is generally much lower on a per-customer basis than it is for those that manage security in-house, allowing cloud providers to do more on a dollar-for-dollar basis. Cloud providers suffer from insider threats much less often than their on-premises counterparts. And the very existence of cloud providers depends on maintaining the security of their customers’ data to a degree that it does not for companies that maintain their own systems on-premises, giving cloud providers the stronger incentive to get security right.

Within the next few weeks we will be publishing a white paper focused on cloud security in which we will be exploring the key issues that decision makers should understand as they consider security in the cloud vs. on-premises.

And, Alaska offers a daily non-stop to and from Omaha.

The Impact of the GDPR on Your Business

We have just published a white paper on the General Data Protection Regulation (GDPR), the new data protection regulation from the European Union (EU), published in May 2016 with an enforcement date of May 25, 2018. Every organization that collects or processes personal data on EU residents must comply with the new regulation, regardless of where it is located, or face significant financial penalties (up to four percent of annual global revenue or €20 million, whichever is greater) and reputational damage.

Complying with the GDPR requires any organization holding personal data on EU residents to implement both organizational and technological measures. Organizational measures include appointing a Data Protection Officer, developing policies and training for handling personal and sensitive personal data, and establishing an approach for executing a Data Protection Impact Assessment (DPIA). Technological measures for protecting data include capabilities like data classification, data loss prevention, encryption, more explicit consent management, data transfer limitations, and technologies that enable data subjects to exercise their rights to access, rectify, and erase personal data held by data controllers.

It is important to note that the GDPR is focused on the protection of personal data, not just its privacy. Complying with the protection mandate requires a higher degree of proactive and far-reaching effort on the part of organizations that control or process personal data.

The survey we conducted for this white paper among mid-sized and large organizations that will be subject to the GDPR found that the majority (58 percent) are not sufficiently familiar with the wide scope of the regulation and the penalties it includes. Only 10 percent believe their organizations are “completely ready” to comply with the requirements of the GDPR. That’s a serious problem, since the penalty for failure to comply with the GDPR could cost a large organization many millions or tens of millions of dollars.
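
To make the penalty arithmetic concrete, here is a minimal sketch of the upper tier of GDPR administrative fines under Article 83(5), which is the greater of €20 million or four percent of total worldwide annual turnover for the preceding financial year. The turnover figures below are hypothetical.

    def max_gdpr_fine_eur(annual_global_turnover_eur: float) -> float:
        """Upper tier of GDPR administrative fines (Article 83(5)): the
        greater of EUR 20 million or 4% of total worldwide annual turnover
        for the preceding financial year."""
        return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

    # Two hypothetical organizations: a mid-sized firm and a large enterprise.
    for turnover_eur in (250_000_000, 5_000_000_000):
        fine = max_gdpr_fine_eur(turnover_eur)
        print(f"Turnover EUR {turnover_eur:,}: maximum fine EUR {fine:,.0f}")
    # At EUR 250M the EUR 20M floor is the larger figure; at EUR 5B,
    # 4% of turnover works out to EUR 200M.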

You can download our just-published white paper here.

What Happens to Your Data When Employees Leave Your Company?

When employees leave a company, whether voluntarily or involuntarily, it is quite common for them to take sensitive and confidential data with them. A white paper we recently published examines this problem in detail and provides solutions employers can use to mitigate the risk. For example:

  • A survey published by Biscom in late 2015 found that 87 percent of employees who leave a job take with them data that they created in that job, and 28 percent take data that others had created. Among the majority who took company data with them, 88 percent took corporate presentations and/or strategy documents, 31 percent took customer lists, and 25 percent took intellectual property.
  • A survey of 1,000 employees in the United States and Europe found that one in five had uploaded sensitive and confidential corporate data to an external cloud service specifically for the purpose of sharing it with others.

As just one example of data theft by departing employees, in September 2016 the US Office of the Comptroller of the Currency (OCC) detected the November 2015 theft, by a retiring employee, of more than 10,000 records, a breach that may have exposed personal information about OCC employees.

Here are some of the important takeaways from the white paper:

  • Employee turnover is a fact of life: the typical organization in the United States, for example, can expect that 24 percent of its employees will leave each year, although some companies in the Fortune 500 experience much higher turnover.
  • Employees who leave their employers, regardless of the reason for their departure, often take with them sensitive and confidential information, such as intellectual property or trade secrets, that belongs solely to their employer.
  • The theft of this information can damage a company in a variety of ways, including putting them at risk of a regulatory violation, forcing them to take legal action against former employees, harming their competitive position, and negatively impacting their revenue.
  • To reduce the risk of employees taking information with them when they leave, employers should establish detailed and thorough policies and procedures focused on ensuring visibility into employee practices, limiting employee access to data, requiring encryption of sensitive data, managing devices properly, ensuring that data is backed up and archived properly, requiring the use of enterprise apps (since these apps and any associated offline content can be remotely wiped, even on personally managed devices), and ensuring that IT has access to all corporate data to which it should have access (some confidential data, such as HR data, should not be available to IT in all cases).

To support these policies and procedures, organizations should evaluate and deploy various technology solutions. Technologies that should be considered, though not all of which need to be deployed, include content archiving, backup and recovery, file sharing and collaboration, encryption, mobile device management, employee activity monitoring, data loss prevention, logging and reporting, virtual desktops, and other solutions that will minimize the possibility of employees misappropriating corporate data upon their departure.
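
As an illustration of how these pieces fit together operationally, here is a minimal, hypothetical sketch of an offboarding sequence in Python. None of the helper functions correspond to a real product’s API; each is a stub standing in for whatever directory, identity, MDM, archiving and DLP tools an organization actually runs. The point is the ordering: close off access first, preserve data next, then review recent activity.

    # Each helper below is a hypothetical placeholder, not a real product's API.

    def disable_account(employee_id: str) -> None:
        print(f"[directory] disabled SSO and VPN access for {employee_id}")

    def revoke_sessions_and_tokens(employee_id: str) -> None:
        print(f"[identity] revoked active sessions and OAuth grants for {employee_id}")

    def wipe_enterprise_apps(employee_id: str) -> None:
        print(f"[MDM] wiped managed apps and offline content for {employee_id}")

    def archive_mailbox_and_files(employee_id: str) -> None:
        print(f"[archive] preserved mailbox and file shares for {employee_id}")

    def review_recent_activity(employee_id: str, days: int) -> None:
        print(f"[DLP] reviewing bulk downloads/uploads by {employee_id}, last {days} days")

    def offboard(employee_id: str) -> None:
        """Close off access first, preserve data next, then review activity."""
        disable_account(employee_id)
        revoke_sessions_and_tokens(employee_id)
        wipe_enterprise_apps(employee_id)
        archive_mailbox_and_files(employee_id)
        review_recent_activity(employee_id, days=30)

    offboard("jdoe")  # hypothetical employee ID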

You can download the white paper here.

Internal Combustion Engines, Critical Thinking and Making Good IT Decisions

Germany’s Spiegel magazine has reported that the German Bundesrat (Germany’s federal council, with representatives from all 16 German states) has passed a resolution calling for a ban on new internal combustion engine cars beginning in 2030. Taken to its conclusion, the only way to achieve this goal would be mass adoption of electric cars to replace today’s fleet, which is powered almost exclusively by internal combustion engines. This is a bigger issue in Germany than it would be in the United States, since there are significantly more cars per person in Germany than in the US.

Sounds like a good idea, but edicts passed down from senior managers are not always feasible, particularly when those managers might not have done the math to determine whether their ideas can actually be implemented by those in the trenches. For example, here’s the math on the Bundesrat’s edict (reproduced as a runnable sketch after this list):

  • As of the beginning of 2015, there were 44.4 million cars in Germany. If we assume that the average German car is driven 8,900 miles per year and gets 30 miles to the gallon, each car consumes the equivalent of just under 10 megawatt-hours of electricity per year (based on one gallon of gasoline = 33.7 kWh).
  • Replacing all 44.4 million cars with electric vehicles would require generation of 443.9 terawatt-hours of electricity per year solely for consumption by automobiles (9.998 MWh per car × 44.4 million cars).
  • In 2015, Germany produced 559.2 terawatt-hours of electricity from all sources. That means that Germany would need to produce or import about 79% more electricity during the next 14 years than it does today. However, during the 13-year period from 2002 to 2015, German production of electricity increased by only 12%.
  • If the additional electricity needed for use by cars came from wind generators, it would require roughly 7,400 square miles of wind farms: 443.9 TWh per year is an average output of about 50,600 megawatts, which at 93.0 acres per megawatt works out to about 4.7 million acres, or roughly five percent of Germany’s footprint of 137,903 square miles (and about three times that if the 93.0 acres applies to nameplate capacity rather than average output, given a typical one-third capacity factor).
  • If the additional energy came from solar, and assuming an optimistic 13 watts of electricity generated per square foot, it would require about 140 square miles of panels producing at that rate around the clock; at a more realistic capacity factor of roughly 11 percent for German solar, the figure is closer to 1,300 square miles, or about one percent of Germany’s area.
  • If the additional energy came from nuclear power, Germany would need to build the equivalent of about 14 high-capacity plants (assuming each has the annual output, roughly 32 TWh, of the largest US nuclear plant, Palo Verde in Arizona).
  • Germany could instead burn the oil it currently imports for automobiles to generate electricity, but that would defeat the purpose of switching to electric cars.
  • Consequently, achieving a complete ban on the internal combustion engine by 2030 would require some combination of a massive build-out of new generating capacity (nuclear, wind, solar, or some mix) and a sharp reduction in driving. But even a large reduction in driving would require substantially greater production of electricity to power the additional rail-based and other transportation systems needed to move Germans who are no longer driving cars. And even if we assume the German government would phase in the abolition of the internal combustion engine over, say, 10 to 15 years following the 2030 deadline, there’s still the problem of producing 79% more electricity between now and 2040-2045.

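For readers who want to check or adapt these numbers, here is the arithmetic above as a short Python sketch. The inputs are the figures cited in the bullets; the Palo Verde output and the electric-vehicle consumption figure are my own approximations, the latter included because the gasoline-equivalence approach overstates electricity demand (electric drivetrains convert far more of their input energy into motion than internal combustion engines do).

    # The post's own inputs
    CARS = 44.4e6             # cars in Germany at the beginning of 2015
    MILES_PER_YEAR = 8_900    # assumed annual mileage per car
    MPG = 30                  # assumed fuel economy
    KWH_PER_GALLON = 33.7     # energy content of one gallon of gasoline
    GRID_TWH_2015 = 559.2     # German electricity production in 2015, TWh

    # My assumptions, not from the bullets above
    PALO_VERDE_TWH = 32.0     # approximate annual output of Palo Verde, TWh
    EV_KWH_PER_MILE = 0.30    # typical electric-car consumption per mile

    # Gasoline-equivalent demand, as calculated in the bullets
    kwh_per_car = MILES_PER_YEAR / MPG * KWH_PER_GALLON   # ~9,998 kWh/year
    total_twh = kwh_per_car * CARS / 1e9                  # ~443.9 TWh/year
    print(f"Per car: {kwh_per_car:,.0f} kWh/yr; fleet: {total_twh:.1f} TWh/yr")
    print(f"Increase over 2015 production: {total_twh / GRID_TWH_2015:.0%}")
    print(f"Palo Verde-sized plants needed: {total_twh / PALO_VERDE_TWH:.1f}")

    # Sensitivity check: electric drivetrains waste far less energy than
    # internal combustion engines, so actual demand would be much lower
    # than the gasoline-equivalent figure.
    ev_twh = MILES_PER_YEAR * EV_KWH_PER_MILE * CARS / 1e9   # ~118.5 TWh/year
    print(f"At {EV_KWH_PER_MILE} kWh/mile: {ev_twh:.1f} TWh/yr "
          f"({ev_twh / GRID_TWH_2015:.0%} more than 2015 production)")
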
So, while converting to electric cars is a good idea in theory, in practice it is highly unlikely to happen in the timeframe mandated by the Bundesrat. In short, edicts from senior managers often can’t happen because these managers never did the math or spoke to anyone in the trenches who would be responsible for trying to make it happen.

The point of this post is not to criticize the German government or the notion of reducing the consumption of fossil fuels, but instead to suggest that critical thinking is needed in all facets of life. When someone proposes a new idea, be skeptical until you’ve done the math, thought through the consequences, and considered the various ramifications of the proposal. For example, when senior management suggests your company move the email system completely to the cloud, think through all of the potential ramifications of that decision. Are there regulatory obligations we will no longer be able to satisfy? How much will it cost to rewrite all of the legacy, email-generating applications on which we currently rely? What will happen to our bandwidth requirements? How will we deal with disaster recovery? How do we manage security? What is the complete cost of managing email in the cloud versus the way we do it now?

Senior managers or boards of directors will sometimes implement policy or make other important decisions without first consulting those who actually need to make it happen. This means that senior management teams, task forces, boards of directors, etc. need to a) stop doing that, b) do the math for any decision they’re considering and c) consult with the people who will be charged with implementing their decisions.

The Future of Computing is 40 Years Ago

The history of computing can be oversimplified as follows:

  • 1950s through the 1970s: Mainframes, in which massive computing and data storage resources were managed remotely in highly controlled data centers. Intelligence and data were highly centralized, accessed through dumb terminals.
  • 1980s through the 1990s: Client-server computing, in which intelligence and data moved to the endpoints of the network as CPU power and storage became dramatically less expensive.
  • 2000s to the present: Cloud computing, in which much of the intelligence and data storage is moving back to highly controlled data centers, but with lots of intelligence and data still at the endpoints.

I believe the fourth major shift in computing will be a reversion to something approaching the mainframe model, in which the vast majority of computing power and data will reside in data centers under the tight control of cloud operators using both public and private cloud models.

Smartphones now have more computing power than most PCs did just a few years ago, albeit with much less storage capacity. While the smartphone does not provide corporate users with the form factor necessary to do writing, spreadsheets, presentations, etc. with the same ease that a desktop or laptop computer does, a smartphone’s CPU horsepower coupled with a monitor and keyboard serving as a dumb terminal could provide the same experience as a desktop or laptop. Echoing a proposal Robert X. Cringely made a couple of years ago, I believe the corporate PC of the future will be a completely dumb terminal with no Internet connection or local storage of its own: just a monitor and keyboard that use the smartphone in the corporate user’s pocket for CPU and connectivity.

Why? Three reasons:

  • It will be more secure. Data breaches are an unfortunate and increasingly common fact of life for virtually every organization. Many are the result of simple mistakes, such as laptops being stolen out of cars or left behind at TSA checkpoints, but many others result from hacking into insufficiently protected on-premises corporate servers. A review of the most serious breaches reveals that the vast majority have involved on-premises servers and other endpoints, not cloud providers. Yahoo!’s recent and massive data breach is more exception than rule, since cloud data centers are typically more secure than those on-premises behind a corporate firewall.
  • It will be cheaper. Instead of providing a laptop and/or desktop computer to individual users, companies will be able to provide a much less expensive dumb terminal to their users that will use a smartphone’s intelligence and computing horsepower to provide the laptop or desktop computing experience transparently. Users will be able to sit down at any dumb terminal, authenticate themselves, and enjoy a laptop or desktop experience. Because storage will be in the cloud, there will be no local storage of data, reducing cost and enhancing security. And, if the dumb terminal is stolen, a company is out only a few hundred dollars, not the millions of dollars for which it might be liable if data is breached from a stolen or otherwise compromised device.
  • It will be more controllable. Instead of users having access to two, three or more computing devices, users can be equipped with just one corporate device, a smartphone, that will enable all of their computing experiences. When the employee leaves the company or loses their device, disabling access to corporate data will be easier and more reliable.

In short, the future of computing will be conceptually similar to what our parents and grandparents experienced: computing intelligence and data storage in some remote, secure location accessed by dumb devices (other than our smartphone).

Best Practices for Dealing With Phishing and Ransomware

We have just published a white paper on phishing and ransomware that we welcome you to download and review. Here are some of the key takeaways from the paper:

  • Both phishing and crypto ransomware are increasing at the rate of several hundred percent per quarter, a trend that Osterman Research believes will continue for at least the next 18 to 24 months.
  • The vast majority of organizations have been victimized by phishing, ransomware and a variety of other security-related attacks during the past 12 months. In fact, phishing and ransomware are among the four leading concerns expressed by security-focused decision makers in the survey Osterman Research conducted for this white paper.
  • Security spending will increase significantly in 2017 as organizations realize they need to protect against phishing, ransomware and the growing variety of other threats they face.
  • Most organizations are not seeing improvements in the security solutions they have deployed or in the security practices they follow. While many of these solutions are effective, most are not improving over time, in many cases because internal staff may not have the expertise to improve their performance. On balance, only two in five of these solutions and practices are considered “excellent”.
  • Security awareness training is a key area for improvement in protecting organizations against phishing and ransomware, since our research found that organizations with well-trained employees are less likely to be infected.
  • There are a variety of best practices that organizations should follow in order to minimize their potential for becoming victims of phishing and ransomware. Among these best practices are implementing security awareness training, deploying systems that can detect and eliminate phishing and ransomware attempts, searching for and remediating security vulnerabilities in corporate systems, maintaining good backups, and using good threat intelligence.

You can download the paper here.

As an aside, I will be attending the Virus Bulletin International Conference next week in Denver and encourage you to do likewise if you’re at all focused on security. I have been to this event before and can vouch for its tremendous value as a place to learn about trends in cyber security and to advance your education about all things security.

Phishing and Ransomware are the Logical Evolution of Cybercrime

Phishing, which can be considered the delivery mechanism for various types of malware and cybercrime attempts, and ransomware, a specialized form of malware designed for the sole purpose of extorting money from victims, are critical problems that every organization must address through a variety of means: user education, security solutions, vulnerability analysis, threat intelligence, good backup processes, and even common sense. The good news is that there is much that organizations can do to protect themselves, their data, their employees and their customers.

Phishing, particularly highly targeted forms like spearphishing and CEO Fraud/Business Email Compromise (BEC), and ransomware are the logical evolution of cybercrime. The many data breaches of the past few years have resulted in the theft of hundreds of millions of records, creating a glut of this information on the market. The result, as in any other business driven by the economics of supply and demand, is that prices for stolen records are dropping precipitously: a leading security firm estimates that the price of a stolen payment-card record has decreased from $25 in 2011 to just $6 in 2016.

Consequently, cybercriminals are turning increasingly to more direct means of theft. For example, ransomware will extort money directly from victims without requiring stolen data to be sold on the open market where it is subject to economic forces that can reduce its value. CEO Fraud/BEC can net hundreds of thousands or millions of dollars in a short period of time by getting victims to wire funds directly.

We are in the process of writing a white paper on phishing and ransomware, and will be publishing the results of an in-depth survey on these problems. Let us know if you have any questions or would like a copy of the white paper when it is published next week.

The Department of Labor’s New Overtime Rule

I originally posted this in September 2015, but have updated it now that the new Labor Department overtime rules are going to become effective in the near future.

**********

In March 2014, the president directed the US Department of Labor to update key regulations for white-collar workers who are covered by the overtime and minimum wage standards of the Fair Labor Standards Act (FLSA). In July 2015, a Notice of Proposed Rulemaking was published in the Federal Register for the purpose of soliciting public comments on the rule. The 98-page (!) document is available for review here.

The result of the proposed rule change will be to require employers to pay workers for after-hours activities that they are required to perform, such as checking email, being available to deal with company emergencies, or responding to a manager’s inquiries. Under the current rule, last updated in 2004, employees who earn more than $23,660 per year (about $11.38 per hour) are exempt from these rules and can be required to work after-hours for no additional overtime pay. The new rule raises that exemption threshold to $47,476 (about $22.83 per hour), adding more than four million employees to those already covered.

Here is what I believe will be some of the implications of this new rule:

  • There will be a need to block employee access to a variety of corporate systems for employees whose salaries are below the Labor Department-imposed threshold. These systems include email, SharePoint, CRM systems, corporate social media, corporate instant messaging, VoIP, and any other communication or collaboration system that could possibly be used to respond to a manager’s inquiry, a customer request, a server alert, or that can be used for any type of work activity. One email server vendor, Alt-N, has already implemented a “Do Not Disturb” feature that will allow companies to turn off email during non-working hours so that they can be compliant with the new overtime rules.
  • The alternative, of course, is to simply pay employees for the additional time they work beyond 40 hours each week, but that creates problems that many organizations may not want to address, and it could add dramatically to labor costs. For example, if an employee checks email after work hours, will they be required to log their time spent doing so? Would this include informally checking email if they wake up in the middle of the night?
  • Access control will have to be appropriately linked between HR and IT so that employees below the Labor Department-mandated threshold are prevented from accessing corporate systems during non-work hours; once an employee’s salary reaches the government-mandated level, access can be turned on. (A simplified version of this logic is sketched in the code after this list.)
  • There will be instances in which an employee whose salary is below the threshold will temporarily be required to work after-hours (such as an administrative assistant covering for his or her manager when he or she is out sick) and so access management capabilities will have to be in place to turn these capabilities on and off quickly to ensure that the employee can fulfill his or her job requirements. This will necessitate a tie-in to HR systems to guarantee that the employee is compensated appropriately for after-hours work.
  • Larger companies will have to maintain even tighter controls to prevent violations of the law for the same employee roles if compensation for these roles differs. For example, according to Indeed.com a customer service representative in New York City makes $60,000 per year and so will have the Labor Department’s permission to access email and other corporate systems after-hours without the need to be paid extra. However, the same job title in Wichita, Kansas makes $40,000 per year and so will not be allowed to do so without receiving overtime pay. What this means is that employees in more expensive labor markets will have freedoms that their counterparts in less expensive labor markets will not have. It also means that employees with more experience and who are paid a higher salary could have access to corporate systems while their less experienced and lower paid counterparts could not.
  • While some employers abuse their employees’ time and expect them to work after-hours for no additional pay or other compensation, there are employees who actually want to work after-hours. For example, some enterprising employees looking to impress their boss or their clients might want to catch up on email before going to bed simply to get a jump on the next day. Some might want to respond to a European or Asian customer’s inquiry in the middle of the night to satisfy that customer as quickly as possible. However, only employees whose salary is above the Labor Department’s threshold will be permitted to do these things on their own time.
  • IT will need to make special accommodations for traveling employees. For example, an employee based in California who travels to Virginia might want to check his or her email at 7:00am local time. However, because his or her email access is restricted until working hours begin in California, accessing email could be impossible until 11:00am local time (8:00am California time) unless the employee has pre-arranged with IT to implement a temporary rule change to accommodate his or her presence on the east coast.

In my opinion, employees should have the right to access corporate systems whenever they want to do so. And employees in Wichita should have the same options available to them as their counterparts in New York, as should less experienced/lower paid employees who work alongside their more experienced/better paid co-workers.

All of that said, it will be essential for employers to be able to turn email and other corporate systems on and off based on this ruling. Failing to do so could end up being very expensive.

The Danger of Juxtaposition and Social Media

Henry David Thoreau: “The question is not what you look at, but what you see.”

The law in the United States treats your reputation as a component of your personal property. Just as you would protect your personal property from damage, so too should you protect your reputation from harm, even harm on social media.

One of the ways that your reputation can be damaged is through juxtaposition, which Dictionary.com defines as “the state of being close together or side by side.” In the context of social media, for example, juxtaposition can occur when someone sees offensive content in close proximity to your name, such as a comment on one of your posts on Facebook. If you’re a company and your employees post offensive content AND indicate that you’re their employer, that content can harm your corporate reputation simply through association with the offender.

This was highlighted for me recently when a friend on Facebook posted some photos of someone burning an American flag and one of her friends responded, “Yes, beat the **** out of him.” Unfortunately for her employer, the commenter noted in her profile that she’s an assistant vice president for a bank located here in the Northwest. In another example, the Facebook friend of a Facebook friend has posted a long string of very offensive and personally demeaning comments on his Facebook page, and he too took the time to prominently display his employer’s name on his Facebook profile.

Clearly, the employers in these examples did not authorize the juxtaposition of their corporate identity with the offensive content published by their employees, and they very likely don’t hold the views their employees expressed. Moreover, the vast majority of people will never consciously blame an employer for the offensive views of its employees. But an employer’s identity has been posted alongside content that would clearly offend a large proportion of its current or prospective customers, for all the world to see. Like it or not, some people will inadvertently associate that company with that content; most will not do so by choice, but when they see the company name again, they will remember the offense they took at what they saw.

As an employer, you really can’t control what employees post on their personal social media accounts. However, you can remind employees about the importance of appropriate decorum when using social media, even their own. You can ask employees not to display your company’s identity on their personal social media profiles. You can have a policy that prohibits the use of personal social media on company-owned equipment. And you can hire people who restrain themselves just a bit before posting to their personal social media accounts, because if they choose to be racially, sexually or politically offensive on their own time, it’s probably going to spill over into their behavior as an employee at some point.