Blocking Access to Bad Stuff on the Internet

There are now more than 150 million domains in use: as of late March 2015 there were 157.7 million active domains, up from 139.3 million in late March 2012 – an increase of 18.4 million domains, or 13%, in a very saturated market. This represents a net increase of nearly 17,000 domains per day, although roughly an order of magnitude more domains are created and taken down each day as cybercriminals exploit the security problems in the domain registrar industry.

One of the best defenses against malicious domains is user education: training users never to click on malicious links in phishing emails, in poisoned Web searches, and so forth. However, education alone is not a realistic solution to the problem, because users are fallible and will sometimes click on links that lead to malicious content, thereby infecting their computer – or an entire corporate network – with malware.

While ongoing training of end users can go a long way toward reducing these consequences, the primary line of defense against malicious domain use should be a system that prevents a user who clicks on a link from ever being connected to the source of the malicious content. In this scenario, the domains in the links presented to the user are analyzed and managed appropriately: users who click on, or directly enter, valid URLs are presented with the content they seek, while clicking on a malicious link results in redirection to an informational page explaining that the URL is malicious and has been blocked from the organization’s network as a precaution against malware and other threats.

The fundamental problem is that bogus domains can be registered so easily, and then used for criminal purposes, yet there has been no practical way to block traffic to them. While it is, of course, technically possible to block access to a domain, the lack of reliable information about domains has made doing so practically impossible, except in the most obvious cases.

How does a provider know which domains are safe to resolve and which are not? While millions of widely used, long-standing domains are obviously valid, millions more may or may not be. For example, how would an Internet service provider know that it is safe to resolve the valid domain “acutech-consulting.com”, but that it should block the bogus domain “trilane-consulting.com”? Moreover, how will a provider know when a formerly valid domain has been compromised and is now serving up malicious content?

What providers need, therefore, is a reliable and timely source of information about domains. A DNS Response Policy Zone (RPZ) data feed is such a service: it provides information about domains so that providers can make informed decisions about whether, and how, to resolve domains that are known to be bogus or to be serving up malicious content.

A DNS RPZ is conceptually similar to the real-time block lists that have been used to filter email for more than a decade. Using these block lists, email service providers obtain real-time information about email servers and can then decide whether or not to accept email from servers that have been used to send spam or infected content. In the same way, a DNS RPZ publishes information about domains so that providers can decide whether to resolve them, based on the likelihood that they are unsafe for users or applications to access. In short, DNS RPZ provides the same type of capability for DNS resolvers that Real-time Block Lists (RBLs) provide for email servers.

An RPZ works by rewriting queries or response sets when listed domains are accessed. RPZ itself is just the mechanism for applying a data feed, so the quality and timeliness of that feed make or break its usefulness. The time required to detect a potentially malicious domain and publish updated information about it can range from 90 seconds to 24 hours; the slower the update cycle, the less useful an RPZ data feed becomes.
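To make this concrete, here is a sketch of how an RPZ is commonly deployed in a BIND resolver. The zone name, feed address, and walled-garden hostname are illustrative, not a real feed; consult your feed provider's documentation for actual values.

```
// named.conf: apply a response policy zone (names and addresses illustrative)
options {
    response-policy { zone "rpz.example.org"; };
};
zone "rpz.example.org" {
    type slave;
    masters { 203.0.113.1; };   // the RPZ feed provider
    file "rpz.example.org.db";
};
```

Within the zone file, the target of a CNAME record encodes the policy action: a CNAME to a walled-garden host redirects the user, while a CNAME to the root (".") forces an NXDOMAIN answer.

```
; rpz.example.org.db: policy records (illustrative)
trilane-consulting.com    CNAME  walledgarden.example.net.  ; redirect
*.trilane-consulting.com  CNAME  walledgarden.example.net.
malware-dropper.example   CNAME  .                          ; force NXDOMAIN
```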

The Spamhaus Domain Block List (DBL), launched in 2010, currently contains information on close to 300,000 suspicious or outright malicious domains and is updated every two minutes. It is important to note that the DBL is an extraordinarily dynamic block list: tens of thousands of domains are added to and removed from it every 24 hours or so as cybercriminals create and take down the domains used in their activities. Because cybercriminals generally do not pay for domains – registering them through less-than-reputable registrars and disabling them within six hours or so – they are able to maintain a continual supply of new domains at low cost. Keeping up with this cybercriminal technique is what the Spamhaus DBL has been designed to do.

Somewhat related to Spamhaus’ service is ThreatSTOP. The company maintains a regularly updated database of suspicious IP addresses that is used to populate firewalls with threat intelligence. When malware that may be present on corporate endpoints attempts to “phone home”, access to these IP addresses is blocked, effectively making the network behind the firewall invisible to cybercriminals. Today, ThreatSTOP supports firewalls from Palo Alto Networks, Check Point, Fortinet, Cisco, Juniper and a number of other vendors.

We have written a white paper on Spamhaus’ DNS RPZ technology that you can download here.

The Impact of Good Tools on Employee Retention

Knowledge workers – those individuals in companies of all sizes and across all industries whose primary role is the creation or management of information – are an essential and growing component of the workforce. Moreover, document collaboration is a common, and increasingly frequent, part of knowledge workers’ daily tasks. For example, an Osterman Research survey found that during a typical month, knowledge workers create an average of 36 documents on which they need to collaborate with others, and are asked by others to collaborate on an additional 34 documents. That total of 70 documents per typical month equates to an average of more than three document collaborations per working day.

However, despite the fact that document collaboration is such a frequent task for knowledge workers, the tools that many such workers employ are not sufficient to satisfy their needs. For example:

  • Nearly one-half of knowledge workers consider document collaboration to be problematic.
  • A significant proportion of knowledge workers find that poor document collaboration contributes to compliance and regulatory problems, missed deadlines, poor document quality, and difficulty in maintaining corporate standards in their organizations.
  • Problems with document collaboration are experienced across all industries, but are more pronounced in organizations that operate in heavily regulated environments.

In an effort to remedy the current state of problematic document collaboration, decision makers should understand that:

  • Knowledge workers are difficult to attract and retain. A robust economy exacerbates the problem by shifting the economic balance from businesses to knowledge workers: opportunities for other employment become more common, and wage pressure increases on employers, who must pay more to attract and retain talented workers.
  • IT’s role in retaining knowledge workers and motivating them is essential. For example, we found that in 51% of the organizations surveyed, IT plays an important or critical role in knowledge worker retention and motivation, while in another 30% of organizations IT has some influence.
  • Importantly, the survey found that knowledge workers also play a key role in helping to drive change within IT departments in the context of purchasing and deploying document collaboration tools. Our research found that in 51% of organizations, knowledge workers play an important or critical role in helping to influence IT decisions, while in just 12% they play no role or only a minimal role.

We will be publishing a white paper on this topic shortly. Let us know if you’d like an advance copy of the paper prior to its publication.