The Department of Labor’s New Overtime Rule

I originally posted this in September 2015, but have updated it now that the new Labor Department overtime rules are going to become effective in the near future.

**********

In March 2014, the president directed the US Department of Labor to update key regulations for white-collar workers who are covered by the overtime and minimum wage standards under the Fair Labor Standards Act (FLSA). In July 2015, a Notice of Proposed Rulemaking was published in the Federal Register to solicit public comments on the rule. The 98-page (!) document is available for review here.

The result of the proposed rule change will be to require employers to pay workers for after-hours activities that they are required to perform, such as checking email, being available to deal with company emergencies, or responding to a manager’s inquiries. Currently, employees who earn more than $23,660 per year (about $11.38 per hour) are exempt from these rules and can be required to work after-hours for no additional overtime pay. The new rule would raise that exemption level, last updated in 2004, to $47,476 (about $22.83 per hour), and will add more than four million additional employees to those already covered.

Here are some of the implications I believe this new rule will have:

  • There will be a need to block employee access to a variety of corporate systems for employees whose salaries are below the Labor Department-imposed threshold. These systems include email, SharePoint, CRM systems, corporate social media, corporate instant messaging, VoIP, and any other communication or collaboration system that could possibly be used to respond to a manager’s inquiry, a customer request, a server alert, or that can be used for any type of work activity. One email server vendor, Alt-N, has already implemented a “Do Not Disturb” feature that will allow companies to turn off email during non-working hours so that they can be compliant with the new overtime rules.
  • The alternative, of course, is to simply pay employees for the additional time they work beyond 40 hours each week, but that creates problems that many organizations may not want to address, and it could add dramatically to labor costs. For example, if an employee checks email after work hours, will they be required to log their time spent doing so? Would this include informally checking email if they wake up in the middle of the night?
  • Access control will have to be appropriately linked between HR and IT so that employees who are below the Labor Department-mandated threshold will be prevented from accessing corporate systems during non-work hours. When an employee’s salary reaches the government-mandated level, however, then access can be turned on for these individuals.
  • There will be instances in which an employee whose salary is below the threshold will temporarily be required to work after-hours (such as an administrative assistant covering for his or her manager when he or she is out sick) and so access management capabilities will have to be in place to turn these capabilities on and off quickly to ensure that the employee can fulfill his or her job requirements. This will necessitate a tie-in to HR systems to guarantee that the employee is compensated appropriately for after-hours work.
  • Larger companies will have to maintain even tighter controls to prevent violations of the law for the same employee roles if compensation for these roles differs. For example, according to Indeed.com a customer service representative in New York City makes $60,000 per year and so will have the Labor Department’s permission to access email and other corporate systems after-hours without the need to be paid extra. However, the same job title in Wichita, Kansas makes $40,000 per year and so will not be allowed to do so without receiving overtime pay. What this means is that employees in more expensive labor markets will have freedoms that their counterparts in less expensive labor markets will not have. It also means that employees with more experience and who are paid a higher salary could have access to corporate systems while their less experienced and lower paid counterparts could not.
  • While some employers abuse their employees’ time and expect them to work after-hours for no additional pay or other compensation, there are employees who actually want to work after-hours. For example, some enterprising employees looking to impress their boss or their clients might want to catch up on email before going to bed simply to get a jump on the next day. Some might want to respond to a European or Asian customer’s inquiry in the middle of the night to satisfy that customer as quickly as possible. However, only employees whose salary is above the Labor Department’s threshold will be permitted to do these things on their own time.
  • IT will need to make special accommodations for traveling employees. For example, an employee based in California who travels to Virginia might want to check his or her email at 7:00am local time. However, because his or her email access is restricted until working hours begin in California, accessing email could be impossible until 11:00am local time (8:00am California time) unless the employee has pre-arranged with IT to implement a temporary rule change to accommodate his or her presence on the east coast.
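To make the bullets above concrete, here is a minimal sketch of what a salary-threshold, home-time-zone access check might look like. Everything here is invented for illustration: the threshold constant, the fixed 8am–5pm working window, and the function name do not come from any real HR or access-management product.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

# Hypothetical values for illustration only.
EXEMPTION_THRESHOLD = 47_476   # proposed annual salary threshold (USD)
WORK_START = time(8, 0)
WORK_END = time(17, 0)

def may_access(salary: int, now_utc: datetime, home_tz: str) -> bool:
    """Allow after-hours system access only for exempt (above-threshold) staff.

    Below-threshold employees are limited to working hours in their home
    time zone, which is exactly what creates the traveling-employee
    problem described above.
    """
    if salary >= EXEMPTION_THRESHOLD:
        return True  # exempt: no after-hours restriction
    local = now_utc.astimezone(ZoneInfo(home_tz))
    return WORK_START <= local.time() < WORK_END
```

Under these assumptions, a California-based employee earning $40,000 who checks email at 12:00 UTC in early March (7:00am Eastern, but only 4:00am Pacific) would be denied, while a $60,000 colleague would not.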

In my opinion, employees should have the right to access corporate systems whenever they want to do so. And employees in Wichita should have the same options available to them as their counterparts in New York, as should less experienced/lower paid employees who work alongside their more experienced/better paid co-workers.

All of that said, it will be essential for employers to be able to turn email and other corporate systems on and off based on this ruling. Not to do so could end up being very expensive.

The Danger of Juxtaposition and Social Media

Henry David Thoreau: “The question is not what you look at, but what you see.”

The law in the United States treats your reputation as a component of your personal property. Just as you’d protect your personal property from damage, so too should you protect your reputation from harm, even harm on social media.

One of the ways that your reputation can be damaged is through juxtaposition, which Dictionary.com defines as “the state of being close together or side by side.” For example, in the context of social media, juxtaposition can occur when someone sees offensive content in close proximity to your name, such as a comment to one of your posts on Facebook. If you’re a company and your employees post offensive content AND indicate that you’re their employer, that can harm your corporate reputation simply by being associated with the offender.

This was highlighted for me recently when a friend on Facebook posted some photos of someone burning an American flag and one of her friends responded, “Yes, beat the **** out of him.” Unfortunately for her employer, she noted in her profile that she’s an assistant vice president for a bank located here in the Northwest. In another example, the Facebook friend of a Facebook friend has posted a long string of very offensive and personally demeaning comments on his Facebook page — and he too took the time to prominently display his employer’s name on his Facebook profile.

Clearly, the employers in these examples did not authorize the juxtaposition of their corporate identity and the offensive content published by their employees. They very likely don’t hold the views that their employees express. Moreover, the vast majority of people will never consciously blame the employer for the offensive views of its employees. But the juxtaposition of an employer’s identity with content that would clearly offend a large proportion of its current or prospective customers is now there for all the world to see. Like it or not, some people will inadvertently associate that company with that content. Most will not choose to do so, but when some see the company name again, they will remember the offense they took at what they saw.

As an employer, you really can’t control what employees post on their personal social media accounts. However, you can remind employees about the importance of appropriate decorum when using social media, even on their own accounts. You can ask employees not to post your company’s identity on their personal social media profiles. You can have a policy that prevents the use of personal social media using company-owned facilities. And, you can hire people who restrain themselves just a bit before posting to their personal social media accounts, because if they choose to be racially, sexually or politically offensive on their own time, you can bet that it’s probably going to spill over into their behavior as an employee at some point.

The Need to Manage Social Media Properly

Social media is pervasive in the workplace, not only by employees for their personal use, but also for business purposes. For example, 73% of the organizations surveyed for a white paper that we recently published employ Facebook for business reasons, 64% use LinkedIn, and 56% use Twitter, in addition to a variety of other social media platforms. Moreover, a large and growing proportion of organizations use enterprise social media platforms, such as Microsoft SharePoint, various Cisco social media tools, Microsoft Yammer, Salesforce Chatter and IBM Connections, among many others.

The use of social media provides a number of important benefits that help organizations to become more efficient, that help users speed the decision-making process, and that allow information sharing in a way that is not possible or practical otherwise. However, the use of social media – whether consumer-focused or enterprise-grade – comes with several risks and costs:

  • The increased likelihood that malicious content can enter an organization through a social media channel. Our research found that 18% of organizations have experienced malware infiltration through social media, although a substantially larger proportion simply don’t know how malware entered.
  • The greater likelihood of breaching sensitive or confidential data, either through inadvertent actions on the part of employees, such as unmanaged sharing of geolocation data, or malicious employee activities.
  • The inability to retain the relevant business records and other information that organizations are obligated to preserve. Our research found that 43% of organizations that have deployed an enterprise social platform do not archive information from it, yet 26% have had to produce content for eDiscovery from the platform.

To address these issues and mitigate the risks associated with the use of social media, every organization that permits social media use (as well as every organization that doesn’t officially sanction it but doesn’t block it) should implement a variety of best practices:

  • Conduct an internal audit of social media use to determine which tools are being used, why they are in use, and the business value that organizations are deriving or potentially can derive from them. The analysis that flows from this audit should also consider the consequences of forbidding certain social media tools, if that is deemed warranted, including the impact doing so would have on customer relationships and employee morale.
  • Implement appropriate policies that will address employees’ acceptable use of social media tools, identify which roles in the organization should have rights to specific social media features and functions, and clearly spell out the rights of the organization to monitor, manage and archive social media use and content.
  • Ensure that employees are trained on corporate social media policies and that they are kept up-to-date on policy changes.
  • Deploy the appropriate technologies that will mitigate risks from malware and other threats delivered through social media and corporate social networks.
  • Deploy solutions that will archive business records and other content contained in social media and corporate social networks.
  • Implement an enterprise social media solution that will not only mitigate the risks associated with use of consumer-focused social media tools, but that will also provide enhanced communication, collaboration and information-sharing capabilities.

You can download our most recent white paper on enterprise social media here.

Some Thoughts on IBM Connect

This was my tenth IBM Lotusphere/ConnectED/Connect and, arguably, one of the best. A somewhat new focus, a new venue and a substantial number of people (2,400?) made for a very good event. The expo floor continues to shrink each year, but was still fairly busy most of the times I was there or passed by. Plus, holding the event in a new venue helps to minimize comparisons with past events that had 10,000 or more attendees.

IBM is pushing hard on its social message, integrating social collaboration across every aspect of its offerings: Notes, Domino, Verse, Connections, et al. Even more pronounced was the “cognitive” message – namely applying Watson technology to just about every aspect of the user experience, from identifying those emails that users need to address first to simplifying the calendar experience.

What was interesting is that the keynotes stressed capabilities – communicating more effectively, setting up meetings, and having better access to files – not product names. For example, while I would have expected Verse to take center stage as the hub of the user experience, the name “Verse” was surprisingly underemphasized (at least in the keynotes, although not so much in the breakout sessions). Apparently, according to the IBMers with whom I spoke about this, it was by design. IBM wants to emphasize what people can do, not the tools they use to do it. For example, the company emphasized its dashboard that is automatically populated for each user with content from Verse, Connections and other tools depending on how people work, but minimizes the identity of the specific platforms that host this information.

While I understand the capabilities-not-products approach, I’m not sure the market will agree. Microsoft’s success in the business communication space is attributable, in part, to the fact that it pushes hard on product identity: Exchange, Outlook, Office 365, Yammer and, more recently, Skype for Business. For example, there are many non-IT decision makers that tell IT they want “Outlook” as their corporate email system (when they really mean Exchange), not “the ability to manage email, calendars and tasks from a single thick or thin client interface”. I could be wrong and IBM’s research may indicate that people think in terms of capabilities and not products, but I don’t think so.

Moreover, when comparing Verse to Exchange Online or Gmail, Verse wins hands down in my opinion. The interface in Verse is cleaner, and the integration with Watson to apply analytics to email makes it the superior offering. Yet, many – even in the analyst community – have never heard of Verse. I don’t believe a strategy that deemphasizes the identity of this very good email platform is the right choice.

With regard to Verse, IBM is making headway, although the company’s policy is not to reveal numbers from its customer base. All of IBM’s several hundred thousand users have been migrated to Verse and there are some useful new features and functions coming down the road. For example, an offline capability will be available at the end of March that will allow access to five days of email and 30 days of calendar (a future version will permit users to adjust the amount of content available offline). Two hundred IBMers are already using offline Verse. Given that the offline version using HTML5 will suffice for the non-connected experience, there will not be a Verse client anytime soon, if ever. An on-premises version of Verse will be coming later this year. There are other developments to be made available soon, such as the ability to use Gmail and Verse simultaneously in trial accounts, that I will write about when they’re ready.

With regard to other vendors at Connect, I was quite impressed with Trustsphere’s LinksWithin offering that enables analysis of relationships within email, as well as Riva International’s server-side CRM integration capabilities that allow CRM data from a variety of leading platforms to be accessed within Notes, Exchange and other email clients and Webmail.

What You Can Do With “Records”

Some questions about your taxes:

  • Do you file a tax return?
  • Do you make a copy of that tax return?
  • Do you put that copy into a filing cabinet or some other place where you’ll be able to find it quickly?
  • Do you pull that copy out of the filing cabinet and shred it after 30 days instead of keeping it for the next several years?

Hopefully, your answers are Yes, Yes, Yes and No. If that’s the case, you already get the concept of Archiving 1.0, because you’ve a) determined what constitutes an important record, b) understood the importance of making a copy of it, c) recognized that you need to have it readily available in the future, and d) realized that you have to keep important records for a long time.

That’s where Archiving 1.0 pretty much ends: making copies of important stuff, putting it away for long periods, and being able to find it when needed. But what’s next — what many are calling Archiving 2.0? Consider those multiple years of tax returns for a moment. They include records of your earnings, deductions and other important information that you need to defend yourself in case you’re audited by your tax authority, apply for a loan, or otherwise need to prove how much you earn and deduct each year. But they also contain lots of other information — data on those you support, where you spend your money, how much you invest, your financial gains and losses, how your income changes each year, charities to which you donate, the amount you pay in property taxes, who you employ to do your taxes, changes in your family structure, and a great deal of other information that would allow someone to understand your decision-making, your success in business, the nature of your key relationships, etc.

Now, think about Archiving 2.0 in the context of your business. Let’s say that you archive just your corporate email. Doing so would preserve all of the business records sent and received through email that you might need to defend yourself, thereby satisfying your Archiving 1.0 obligations. However, here’s what else it would contain:

  • Every customer inquiry delivered through email, who responded to it, the amount of time that it took to respond, the customer’s response in return, and whether or not the inquiry was resolved to the customer’s satisfaction.
  • Every prospect inquiry delivered through email and how it was satisfied (or not).
  • What managers tell employees in email.
  • What employees tell each other in email.
  • How employees deal with sensitive information.
  • Information about rumors that might be spreading in the company.
  • How employees are using corporate email after hours.
  • The recipients of every email and attachment sent through email, including information that was sent to competitors.
  • Information about employees who might be considering or committing fraud.
  • How people in your company interact with one another.
  • The actual management hierarchy in your company that may or may not coincide with your org chart.

This is just the tip of the iceberg in terms of what you might be able to do with this information given the right archiving platform, the right analytics tools, and the ability to sell management on the idea that your information archives contain a wealth of untapped information about your company that won’t be available anywhere else. Add in other data types, such as social media posts, instant messages, voicemails, collaborative session discussions, files, etc., and dramatically more information becomes available for investigations, analysis of customer interactions and employee behavior, helping employees find the expertise they need, and building better connections between your employees, business partners, customers, prospects and others.
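As a toy illustration of the Archiving 2.0 idea, the sketch below computes how quickly inbound customer inquiries were answered, using an invented message schema; it does not reflect any real archiving product’s API or data format.

```python
from datetime import datetime
from statistics import mean

# A tiny, invented archive: each message records who sent it, when,
# and (if it is a reply) the id of the message it answers.
archive = [
    {"id": 1, "from": "customer@example.com",  "sent": datetime(2016, 2, 1, 9, 0),  "in_reply_to": None},
    {"id": 2, "from": "rep@ourco.example",     "sent": datetime(2016, 2, 1, 9, 40), "in_reply_to": 1},
    {"id": 3, "from": "customer2@example.com", "sent": datetime(2016, 2, 1, 14, 0), "in_reply_to": None},
    {"id": 4, "from": "rep@ourco.example",     "sent": datetime(2016, 2, 2, 8, 0),  "in_reply_to": 3},
]

def mean_response_minutes(messages, our_domain="@ourco.example"):
    """Average minutes from an inbound inquiry to the first internal reply."""
    by_id = {m["id"]: m for m in messages}
    delays = []
    for m in messages:
        parent = by_id.get(m["in_reply_to"])
        if parent and m["from"].endswith(our_domain):
            delays.append((m["sent"] - parent["sent"]).total_seconds() / 60)
    return mean(delays)
```

With this sample data, the two inquiries were answered in 40 minutes and 18 hours respectively, a mean of 560 minutes — exactly the kind of customer-service metric hiding in an email archive.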

We’re about to produce a white paper on this topic, including an in-depth survey of where organizations are going with Archiving 2.0. We’ll report back on the key findings when they’re available.

Check out StealthChat

Most of the communications we send or receive can be accessed by unauthorized parties: email is typically sent in clear text, voice communications can be intercepted, and instant messages or Facebook Messenger posts are typically not secure. Plus, our communications can live forever on a server or on the recipients’ devices, increasing the potential for data leaks or other forms of unauthorized access.

Enter StealthChat, a free service provided by Rockliffe Systems. StealthChat provides a number of important capabilities, including instant/chat messaging, the ability to place VoIP calls, and the ability to share images, all with robust encryption to ensure that unauthorized parties cannot gain access to your content. All content is encrypted both on the device and in transit. Plus, senders can establish a “burn” time for each message, making it disappear a set amount of time after it has been read. Moreover, content sent via StealthChat never gets written to a server, but instead resides only on the senders’ and recipients’ devices.
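The “burn” idea can be modeled in a few lines. The toy sketch below (invented class and method names, deliberately omitting the encryption a real product like StealthChat provides) just tracks when a message was first read and destroys it once the burn window elapses:

```python
import time

class BurnAfterReading:
    """Toy model of a 'burn' timer: once read, a message self-destructs
    after a set number of seconds. Illustrative only."""

    def __init__(self):
        self._messages = {}  # id -> [text, burn_seconds, first_read_at or None]

    def send(self, msg_id, text, burn_seconds):
        self._messages[msg_id] = [text, burn_seconds, None]

    def read(self, msg_id, now=None):
        now = time.time() if now is None else now
        entry = self._messages.get(msg_id)
        if entry is None:
            return None                 # never sent, or already burned
        text, burn, read_at = entry
        if read_at is None:
            entry[2] = now              # burn timer starts at first read
        elif now - read_at >= burn:
            del self._messages[msg_id]  # burned
            return None
        return text
```

A real implementation would also have to wipe the content from both endpoints’ storage, which is why encryption at rest matters as much as the timer itself.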

StealthChat competes with a number of offerings, including WhatsApp, Skype, Facebook Messenger, SnapChat and others. In fact, Rockliffe unofficially calls StealthChat “SnapChat for Professionals”. What sets StealthChat apart is that, unlike some of its competitors, it provides encryption at rest, provides VoIP capability that is as secure as chat messages, and stores nothing on Rockliffe’s or anyone else’s servers.

While much has been made of these types of encrypted, ephemeral communications for illicit activities, including terrorist operations, they have tremendous value for legitimate purposes. For example, traders sending information to one another, healthcare professionals sharing patient information, or business people sending confidential information, all can make use of StealthChat. Of course, any information that should be archived for long periods needs to be sent via more traditional communications channels and should not be sent via StealthChat for obvious reasons, but much of our communications doesn’t fall into this category and can make good use of the security that StealthChat provides.

The Importance of Good Collection During eDiscovery

In the case of Green v. Blitz USA, Inc. – a wrongful death case in which the plaintiff’s husband was killed by an exploding gas can produced by the defendant – the jury ruled unanimously in favor of the defendant. Because of a high-low agreement into which the parties had entered during jury deliberations, the plaintiff received a relatively small payment from the defendant. However, a year after this case was settled, the plaintiff determined that poor data collection practices by the defendant led to non-production of key documents that should have been presented during eDiscovery. Although the statute of limitations under the Federal Rules of Civil Procedure (FRCP) prevented a new trial in this case, the court ordered that:

  • The defendant must pay $250,000 in civil contempt sanctions to the plaintiff.
  • The defendant had 30 days to provide a copy of the court’s ruling about its poor collection practices to every plaintiff that had a case against the company during the past two years.
  • The defendant was ordered to pay an additional $500,000 sanction; if Blitz complied with the court’s orders in this case, this particular sanction would be terminated.
  • For the next five years, the defendant was required to provide a copy of the court’s order as part of its initial pleading or filing to every party in every lawsuit in every court in which it might be involved.

Clearly, improper data collection can result in potentially severe sanctions.

Should You Be Paid Overtime for Checking Email?

In March 2014, the president directed the US Department of Labor to update key regulations for white-collar workers who are covered by the overtime and minimum wage standards under the Fair Labor Standards Act (FLSA). In July 2015, a Notice of Proposed Rulemaking was published in the Federal Register to solicit public comments on the rule. The 98-page (!) document is available for review here.

The result of the proposed rule change will be to require employers to pay workers for after-hours activities that they are required to perform, such as checking email, being available to deal with company emergencies, or responding to a manager’s inquiries. Currently, employees who earn more than $23,660 per year (about $11.38 per hour) are exempt from these rules and can be required to work after-hours for no additional overtime pay. The proposed rule would raise that exemption level, last updated in 2004, to $47,892, or the 40th percentile of earnings for a full-time, salaried employee in 2013, and will add approximately five million additional employees to those already covered.

Here is my two cents on the proposed rule:

  • From a technology perspective, there will be a need to block employee access to a variety of corporate systems for employees whose salaries are below the FLSA threshold. These systems include most notably email, but also SharePoint, CRM systems, corporate social media, corporate instant messaging, VoIP, and any other communication or collaboration system that can possibly be used to respond to a manager’s inquiry, a customer request, a server alert, or that can be used for any type of work activity.
  • The alternative, of course, is to simply pay employees for an additional 10-15 or more hours per week, but that creates problems that many organizations may not want to address, and it could add dramatically to labor costs.
  • Today, virtually no email systems have the ability to block access for specific users or roles at specific times (that’s going to change very soon), but this will be an essential capability once the rule goes into effect, as it will be for all corporate systems.
  • Access control will have to be appropriately linked between HR and IT so that employees who are below the FLSA-mandated threshold will be denied access to corporate systems during certain hours. When an employee’s salary reaches the government-mandated level, however, then access can be turned on for these individuals.
  • Moreover, there will be instances in which an employee whose salary is below the threshold will temporarily be required to work after-hours (such as an administrative assistant covering for his or her manager when he or she is out sick) and so access management capabilities will have to be in place to turn these capabilities on and off quickly to ensure that the employee can fulfill their job requirements. This will necessitate a tie-in to HR systems to guarantee that the employee is compensated appropriately for his or her after-hours work.
  • Larger companies will have to maintain even tighter controls to prevent violations of the law for the same employee roles if compensation for these roles differs. For example, according to Indeed.com a customer service representative in New York City makes $60,000 per year and so will have permission to access email and other corporate systems after-hours without the need to be paid extra, while the same job title in Wichita, Kansas makes $40,000 and so will not be allowed to do so without receiving overtime. While this would apply based on geography, this could also mean that a more experienced individual whose salary is above the government-mandated level would be entitled to after-hours access to email and other corporate systems, while his or her less experienced and lower paid counterpart would not.
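A minimal sketch of the HR-to-IT linkage described in these bullets, using invented record layouts and the proposed $47,892 threshold: each user gets an after-hours access flag derived from salary, and a temporary-override list (e.g., an assistant covering for a sick manager) both grants access and flags the user so HR can pay the resulting overtime.

```python
# Hypothetical HR records and override list; all names invented for the example.
EXEMPTION_THRESHOLD = 47_892  # proposed annual salary threshold (USD)

hr_records = [
    {"user": "nyc_csr",     "salary": 60_000},
    {"user": "wichita_csr", "salary": 40_000},
    {"user": "admin_asst",  "salary": 35_000},
]

temporary_overrides = {"admin_asst"}  # e.g., covering for an out-sick manager

def build_access_flags(records, overrides):
    """Derive per-user after-hours access flags from HR salary data.

    Non-exempt users with a temporary override get access, but are also
    flagged for overtime logging so they can be compensated correctly.
    """
    flags = {}
    for rec in records:
        exempt = rec["salary"] >= EXEMPTION_THRESHOLD
        flags[rec["user"]] = {
            "after_hours": exempt or rec["user"] in overrides,
            "log_overtime": (not exempt) and rec["user"] in overrides,
        }
    return flags
```

The point of deriving the flags from HR data, rather than maintaining them by hand in IT, is that salary changes and temporary assignments then propagate automatically, which is exactly the linkage the bullets above call for.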

Philosophically, I am opposed to this type of rule. While I fully realize that some employers abuse their employees’ time and expect them to work after-hours for no additional pay or other compensation, there are employees who actually want to work after-hours: some might want to catch up on email before bed simply to get a jump on the next day, some might want to respond as quickly as possible to a customer’s inquiry to gain some sort of a competitive advantage for their employer, or some might just want to impress their boss. Employees should have the right to do all of these things – and employees in Wichita should have the same options as their counterparts in New York, as should less experienced/lower paid employees who work alongside their more experienced/better paid co-workers.

All of that said, it will be essential for employers to be able to turn email and other corporate systems on and off based on this ruling. Not to do so could end up being very expensive.

A Response to “New report slams Office 365 compliance features unfairly”

We recently published a multi-sponsor white paper entitled, The Role of Third-Party Tools for Office 365 Compliance that you can download from our Web site here. Tony Redmond provided a critique of that report on the Windows IT Pro blog that you can read here. While we appreciate Mr. Redmond’s thoughtful response to the white paper, particularly given his stature in the industry and his many years of experience with Microsoft and Microsoft-related solutions, we wanted to offer our two cents on his critique:

The title, “New report slams Office 365 compliance features unfairly,” doesn’t really reflect the tone of several of the statements we made in the white paper, including:

  • “Microsoft has invested and continues to invest a significant amount of financial resources and effort to build compliance capabilities into Office 365.”
  • “Microsoft offers a range of current [compliance] capabilities in these areas, and is evolving its capabilities to increase coverage.”
  • “With a platform aimed at hundreds of millions of users, Microsoft recognizes that its compliance capabilities will not meet every need, nor address the requirements of every organization. The aim is to have sufficient systemic capabilities to address broad and general-purpose compliance requirements, in line with certain assumptions about the organization and its IT environment.”
  • “Office 365 is a robust platform that offers a number of useful capabilities.”

Quite honestly, we don’t think that we slammed Microsoft or Office 365, both of which we hold in high regard as reflected in these quotes from the paper.

Moreover, we have publicly stated within the past couple of weeks: “Should organizations consider deploying Office 365? Absolutely, since Microsoft offers a robust feature set and continues to enhance Office 365 with new features and capabilities.”

In another multi-sponsor white paper we published earlier this year we stated, “There is no denying that Microsoft Office 365 is a robust offering that offers a wide range of capabilities. Microsoft has taken pains to ensure that Office 365 operates with reasonable reliability and that its features and functions meet the needs of a wide range of potential customers. However, as with any mass-market, technology-based offering there will be deficiencies in specific aspects of the features and functions that many customers require. Because no cloud-based offering can be all things to all customers, many – if not most – Office 365 customers will require third party products and services to supplement the native capabilities of the platform.”

In short, we like Office 365, we think it provides robust functionality, and we think it’s a good value for the money. What we’re not saying is that it can be all things to all users all the time.

Also, Mr. Redmond’s blog originally and inaccurately stated that the survey data cited in the report was used for three reports, but after our conversation he graciously and quickly corrected the statement: a survey was conducted specifically for this report, although we have conducted several Office 365-focused surveys this year. However, the blog did not mention that this white paper was also sponsored by five other companies in addition to Knowledge Vault (Good Technology, GWAVA, KeepIT, Mimecast and Smarsh). As an aside, the link to “Knowledge Vault” in the blog goes here, an incorrect site, and not here, the correct one.

While we definitely do NOT think that Mr. Redmond’s review of our white paper was in any way tainted by the fact that he is on the Advisory Board of a Knowledge Vault competitor, a footnote stating that would have been a useful addition.

Mr. Redmond wrote that, “Another criticism leveled is that Microsoft delivers “good enough” compliance features. The report acknowledges that Office 365 has to service hundreds of millions of users, amounting to some 1.2 million tenants. A specific compliance requirement for one company might therefore not be found inside Office 365, especially if that requirement is specific to a certain industry or country. In any case, the success of Exchange and SharePoint in the on-premises arena is underpinned by an ecosystem of third party software that fill the gaps left by Microsoft.”

Yes, that was exactly our point, as stated in the paper: “As with any cloud-based offering, these [Office 365 compliance] limitations will necessitate the use of third-party compliance capabilities in order for organizations to fully satisfy their regulatory and legal compliance obligations.”

On balance, we appreciated Mr. Redmond’s blog, but take issue with a few of its points.

Blocking Access to Bad Stuff on the Internet

There are more than 150 million domains in use: as of late March 2015 there were 157.7 million active domains, up from 139.3 million in late March 2012, an increase of 18.4 million domains, or 13%, in an already saturated market. That works out to a net increase of nearly 17,000 domains per day, although roughly an order of magnitude more domains are created and taken down each day as cybercriminals exploit the security problems in the domain registrar industry.

One of the best defenses against the problem of malicious domains is user education: getting users never to click on malicious links in phishing emails, in poisoned Web searches, and so forth. However, education alone is not a realistic solution: users are gullible, or will simply make mistakes and click on links that lead to malicious content, thereby infecting their computer or an entire corporate network with malware.

While ongoing training of end users can go a long way toward reducing these consequences, the primary line of defense against malicious domains should be a system that prevents a user who clicks on a link from ever connecting to the source of the malicious content. In this scenario, the domains in the links presented to the user are analyzed and handled appropriately: users who click on, or directly enter, valid URLs are presented with the content they seek, while clicking on a malicious link results in redirection to an informational page explaining that the URL is malicious and is therefore not accessible from the organization’s network, as a precautionary measure against malware and other threats.
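The decision logic described above can be sketched in a few lines. The blocklist contents and the warning-page URL below are hypothetical; in practice the set of malicious domains would be populated from a continuously updated feed:

```python
from urllib.parse import urlparse

# Hypothetical snapshot of known-malicious domains (in practice,
# populated from a continuously updated threat-intelligence feed).
MALICIOUS_DOMAINS = {"trilane-consulting.com"}

# Hypothetical informational page users are redirected to.
WARNING_PAGE = "https://intranet.example/blocked"

def resolve_link(url: str) -> str:
    """Return the original URL if its domain is clean; otherwise
    return the URL of the informational warning page."""
    domain = urlparse(url).hostname or ""
    # Match the listed domain itself or any of its subdomains.
    if any(domain == d or domain.endswith("." + d) for d in MALICIOUS_DOMAINS):
        return WARNING_PAGE
    return url
```

A user requesting `https://acutech-consulting.com/about` would be passed through unchanged, while a click on anything under `trilane-consulting.com` would land on the warning page instead.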

The fundamental problem with the Internet, in the context of bogus domains being registered so easily and then used for criminal purposes, is that there has been no practical way to block traffic to these malicious domains. While it is, of course, technically possible to block access to a domain, the lack of timely information about domains has made doing so practically impossible, except in the most obvious of cases.

How does a provider know which domains are safe to resolve and which are not? While there are millions of widely used and long-standing domains that are obviously valid, there are millions more that may or may not be. For example, how would an Internet service provider know that it is safe to resolve the valid domain “acutech-consulting.com”, but that it should block the bogus domain “trilane-consulting.com”? Moreover, how will a provider know when a formerly valid domain has been compromised and is now serving up malicious content?

What providers need, therefore, is a reliable and timely source of information about domains. A DNS Response Policy Zone (RPZ) data feed is such a service, one that provides information about domains so that providers can make informed decisions about if and how they should resolve domains that are known to be bogus or serving up malicious content.

A DNS RPZ is conceptually similar to the real-time block lists that have been used for email delivery for more than a decade. Using these block lists, email service providers can obtain real-time information about email servers and then decide whether or not to accept email from servers that have been used to send spam or infected content. In the same way, a DNS RPZ publishes information about domains so that providers can decide whether to resolve a domain based on its likelihood of being unsafe for users or applications to access. In short, DNS RPZ provides the same type of capability for DNS resolvers that Real Time Block Lists (RBLs) provide for email servers.

An RPZ is designed to rewrite queries or response sets when listed domains are accessed. Because RPZ is simply a delivery mechanism for policy data, the quality and timeliness of the underlying data feed make or break its usefulness. The time required to detect a potentially malicious domain and publish updated information about it can range from 90 seconds to 24 hours; the slower the update cycle, the less useful the RPZ data feed becomes.
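To make the rewriting concrete: an RPZ is distributed as an ordinary DNS zone whose records encode the policy actions. The zone and domain names below are illustrative only. In the standard RPZ encoding, a `CNAME .` record causes the resolver to answer NXDOMAIN for the listed domain, while a CNAME pointing at a real host redirects the query, for example to an internal “walled garden” page:

```
$TTL 60
@   SOA  rpz.example.net. hostmaster.example.net. (1 3600 600 86400 60)
    NS   ns.example.net.

; Answer NXDOMAIN for a known-malicious domain and all of its subdomains
trilane-consulting.com    CNAME .
*.trilane-consulting.com  CNAME .

; Redirect another listed domain to an internal informational page instead
phish.example.org         CNAME blocked.intranet.example.
```

Because the policy is just DNS data, providers can receive updates from a feed operator via standard zone transfers, with no changes to the resolver software beyond enabling RPZ.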

The Spamhaus Domain Block List (DBL), launched in 2010, currently contains information on close to 300,000 suspicious or outright malicious domains and is updated every two minutes. It is important to note that the Spamhaus DBL is an extraordinarily dynamic block list: tens of thousands of domains are added to and removed from the DBL roughly every 24 hours as cybercriminals create and take down domains used in their activities. Because cybercriminals generally do not pay for domains – registering and disabling them within six hours or so by using less than highly reputable domain registrars – they are able to maintain a continual supply of new domains at low cost. Keeping up with this cybercriminal technique is what the Spamhaus DBL has been designed to do.
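Mechanically, the DBL is consumed like other DNS-based block lists: the domain in question is prepended to the DBL zone and resolved, and an answer inside the 127.0.1.0/24 range indicates a listing, while NXDOMAIN means the domain is not listed. A minimal sketch, with helper names of our own choosing (no live query is made unless `dbl_lookup` is actually called):

```python
import socket
from typing import Optional

DBL_ZONE = "dbl.spamhaus.org"

def dbl_lookup(domain: str) -> Optional[str]:
    """Query the DBL by resolving <domain>.dbl.spamhaus.org.
    Returns the answer address if the domain is listed, or
    None (NXDOMAIN) if it is not."""
    try:
        return socket.gethostbyname(f"{domain}.{DBL_ZONE}")
    except socket.gaierror:
        return None  # NXDOMAIN: domain is not listed

def is_listed(answer: Optional[str]) -> bool:
    # Listed domains resolve to an address inside 127.0.1.0/24.
    return answer is not None and answer.startswith("127.0.1.")
```

A mail server or DNS resolver would typically perform this check inline and consult the specific return code to distinguish categories of listing (spam, phishing, malware, and so on) per Spamhaus’ published documentation.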

Somewhat related to what Spamhaus does is ThreatSTOP. The company maintains a regularly updated database of suspicious IP addresses that is used to populate firewalls with threat intelligence. When malware that may be present on corporate endpoints attempts to “phone home”, access to these IP addresses is blocked, effectively making the network behind the firewall invisible to cybercriminals. Today, ThreatSTOP supports a number of firewalls, including those from Palo Alto Networks, Check Point, Fortinet, Cisco and Juniper, among other vendors.
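The general pattern here, turning a feed of suspicious IP addresses into firewall rules that block outbound “phone home” traffic, can be sketched as follows. The feed contents are hypothetical, and we use Linux iptables syntax for illustration rather than any of the commercial firewalls named above:

```python
# Hypothetical snapshot of a threat-intelligence feed of suspicious
# IP addresses (documentation-range addresses used as placeholders).
SUSPICIOUS_IPS = ["203.0.113.7", "198.51.100.22"]

def build_rules(ips):
    """Render one outbound DROP rule per suspicious address, so that
    malware on an internal host cannot reach its command server."""
    return [f"iptables -A OUTPUT -d {ip} -j DROP" for ip in ips]

for rule in build_rules(SUSPICIOUS_IPS):
    print(rule)
```

In a production deployment, the feed would be refreshed on a schedule and the rule set replaced atomically, so that newly observed command-and-control addresses are blocked within minutes of being published.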

We have written a white paper on Spamhaus’ DNS RPZ technology that you can download here.