Security tools' effectiveness hampered by false positives

False positives are a problem not only because they take up manpower and time to address, but also because they can distract companies from dealing with legitimate security alerts.

Thanks to technologies such as intrusion detection systems, services such as threat intelligence, and other emerging sources of information, security programs today are gathering unprecedented amounts of data about threats and attacks.

This can help strengthen the security posture of organizations in a big way, by giving them a heads-up on the latest threats. But unfortunately it can also add to the nagging and costly problem of false positives — normal or expected behaviors that are identified as anomalous or malicious.

According to a 2015 report by research firm Enterprise Management Associates (EMA), entitled “Data-Driven Security Reloaded,” half of the more than 200 IT administrators and security professionals surveyed said too many false positives keep them from being confident in their breach detection capabilities.

When asked about the key value drivers for advanced analytics software, roughly 30 percent of the organizations surveyed cited reduced false positives.

“False positives have always been a problem with security tools, but as we add more layers to our security defenses, the cumulative impact of these false positives is growing,” says Paul Cotter, security infrastructure architect at business and technology consulting firm West Monroe Partners.

The most common false positives exist in products such as network intrusion detection/prevention, endpoint protection platforms and endpoint detection and response tools, says Lawrence Pingree, research director for security technologies at Gartner.

“Each of these solutions [uses] a variety of techniques to detect attacks, such as signature patterns, behavioral detections, etc.,” Pingree says. “False positives are a problem because the nature of trying to detect bad behaviors sometimes overlaps with [indications] of good behavior.”
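To make that overlap concrete, here is a minimal Python sketch. The signature below is a deliberately crude hypothetical, not a real IDS rule: it flags any request containing a path-traversal marker, and a harmless relative path trips it just as readily as an attack.

```python
import re

# Hypothetical IDS-style signature: flag any URL containing "../",
# a common marker of path-traversal attacks.
TRAVERSAL_SIGNATURE = re.compile(r"\.\./")

requests = [
    "GET /download?file=../../../etc/passwd",  # genuine traversal probe
    "GET /docs/../index.html",                 # harmless relative path
]

for req in requests:
    verdict = "ALERT" if TRAVERSAL_SIGNATURE.search(req) else "ok"
    print(f"{verdict}: {req}")
# Both requests trip the signature; only the first is malicious.
```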

A good example of how false positives can have an impact is the Target data breach, “where the technology used to monitor intrusions provided multiple alerts on different occasions regarding suspicious activities,” says Pritesh Parekh, CISO at Zuora, a billing platform for subscription services such as Netflix.

“The alerts were buried in hundreds of false positives and became deprioritized on the list of security items, resulting in a major data breach,” Parekh says.

There is a fine balance that security professionals need to strike to address the issue, Cotter says. On the one hand, they need to ensure that a tool does not interfere with daily operations and does not generate additional work for the organization. But on the other hand, they have to recognize that a single false negative (for example, an undetected intrusion) can have a far greater impact on the organization as a whole than many false positives.

“The greatest risk with false positives is that the tool generates so many alerts that [it] becomes seen as a noise generator, and any true issues are ignored due to fatigue on the part of those responsible for managing the tools,” Cotter says. “We frequently see this issue in tools that are not properly operationalized, such as when tools are installed and deployed using default settings and profiles.”

A common example is file integrity monitoring software, which alerts administrators when files on a monitored system are altered for any reason; such changes can be an indicator of malware or intruder activity. “Using default settings, a simple patch will generate a very large number of file changes; when aggregated across a mid-sized enterprise, this could easily generate many tens of thousands of alerts,” Cotter says.

Any meaningful alerts could easily get lost in that flood of information, Cotter says, and dismissed by administrators as related to the updates. “In order to address that issue, a thorough process needs to be in place to test updates and ‘fingerprint’ their changes, so that those specific alerts can be filtered and/or dismissed, leaving a clear set of actionable alerts for administrators to follow up on,” he says.
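A rough sketch of that fingerprinting idea, in Python: file contents are hashed, and changes whose new digest matches one recorded from a tested patch are suppressed rather than alerted. The paths, digests and function names here are hypothetical placeholders, not taken from any particular FIM product.

```python
import hashlib
from pathlib import Path

def file_digest(path: str) -> str:
    """SHA-256 of a file's contents, used as its 'fingerprint'."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Digests of files as they should look *after* an approved patch,
# collected in advance on a test system (placeholder path and value).
APPROVED_PATCH_FINGERPRINTS = {
    "/usr/bin/example-daemon":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def triage_file_change(path: str, observed_digest: str) -> str:
    """Suppress alerts whose new contents match a tested patch."""
    if APPROVED_PATCH_FINGERPRINTS.get(path) == observed_digest:
        return "suppressed: change matches an approved patch"
    return "ALERT: unexpected modification, investigate"

print(triage_file_change(
    "/usr/bin/example-daemon",
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"))
# -> suppressed: change matches an approved patch
```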

Defining, refining, implementing and executing that process adds to the overall effort needed to support the operation of the tool, but can drastically reduce the longer-term cost of ownership as well as increase the signal-to-noise efficiency and usability of the system, Cotter says.

“Many other security tools can have a similar problem with excessive alerting, and are frequently ignored due to the low signal-to-noise ratio,” Cotter says. “Examples include intrusion detection systems, Web application firewalls and other systems that are monitoring Internet-accessible endpoints.”

Addressing the issue of false positives should start with a thorough understanding of what a given tool is intended to address, as well as how it functions.

“When implementing the tool, ensure that the implementers fully understand the intent of the tool deployment, rather than making assumptions about ‘normal’ use cases, or simply installing a tool with default settings,” Cotter says.

From a process and education standpoint, any security tool implementation will impact existing policies and procedures, including incident response and any operational procedures for systems that the tool impacts, Cotter says. “This impact should be reviewed and validated, and policy and procedure documentation should be updated in tandem with the tool deployment in order to ensure that operational activities are minimally impacted by the change,” he says.

The most important thing security practitioners should do is understand that not every detection is malicious in nature, Pingree says. “There are a variety of ways to categorize incidents in order to identify a false positive,” he says.

For example, an investigator will examine a detected malicious event and then determine the likelihood that an activity is malicious. “Investigators must go through a variety of steps to determine maliciousness, for example examining whether or not data exfiltration occurred or whether the behavior looks like acceptable behavior when more closely examined,” Pingree says.
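As a simplified illustration of that triage flow, the Python sketch below walks an alert through two of those questions. The field names are invented for the example; real tools expose far richer context.

```python
def assess_detection(alert: dict) -> str:
    """A simplified pass through the investigator's questions
    (hypothetical alert fields)."""
    if alert.get("bytes_sent_externally", 0) > 0:
        return "likely malicious: possible data exfiltration"
    if alert.get("matches_known_baseline", False):
        return "likely false positive: consistent with normal behavior"
    return "inconclusive: escalate for manual review"

print(assess_detection({"bytes_sent_externally": 0,
                        "matches_known_baseline": True}))
# -> likely false positive: consistent with normal behavior
```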

Most products provide greater detail to help determine whether something looks like a false positive, Pingree says. An investigator might compare the detected event against known-good samples of files, such as those on a whitelist.
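In code, that whitelist comparison can be as simple as a digest lookup. A minimal sketch, where the one known-good entry is the SHA-256 of an empty file (real lists come from vendors or golden images):

```python
import hashlib

# Digests of known-good files (a whitelist); this entry is the
# SHA-256 of an empty file.
KNOWN_GOOD_DIGESTS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_good(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_GOOD_DIGESTS

print(is_known_good(b""))  # True: matches the known-good sample
```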

If the investigation involves a network-based alert, investigators might examine other data sources about the IP address involved, such as the associated domain name, or use other maliciousness-rating capabilities such as IP reputation scores and malware scanning of the URL itself.

“Sometimes these scores are derived by examining past behavior or the inclusion of a particular URL or IP address in past attacks,” Pingree says. “There is some guesswork involved in this; however, most of the time it is possible to determine whether something is more than likely a false positive versus a real threat by examining logs, packet captures or other user activities involved in the incident more closely.”
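A toy Python version of that scoring logic might combine a reputation value with prior-attack history. The data and weights below are invented for illustration (IPs are from documentation ranges); real scores come from threat-intelligence feeds and scanning services.

```python
# Local stand-ins for reputation data; a real investigation would
# query threat-intel feeds, passive DNS and URL scanners.
IP_REPUTATION = {"203.0.113.7": 20}        # 0 = clean, 100 = known bad
SEEN_IN_PAST_ATTACKS = {"198.51.100.9"}

def network_alert_score(ip: str) -> int:
    score = IP_REPUTATION.get(ip, 50)      # unknown IPs start neutral
    if ip in SEEN_IN_PAST_ATTACKS:
        score += 40                        # prior involvement weighs heavily
    return min(score, 100)

print(network_alert_score("203.0.113.7"))   # 20 -> leans false positive
print(network_alert_score("198.51.100.9"))  # 90 -> leans real threat
```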

When configuring and tuning new security tools to reduce the number of false positives and ensure adequate coverage, organizations need to take an incremental and phased approach and have a thorough understanding of the environment they are protecting to make intelligent tuning decisions, Parekh says. “Tuning is an ongoing process that needs to account for changes in the environment,” he says.
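One way to keep that ongoing tuning reviewable is to express suppression rules as data rather than burying them in tool settings, so rules can be re-validated as the environment changes. A minimal sketch, with entirely hypothetical field names:

```python
import fnmatch

# Tuning decisions captured as reviewable data (hypothetical fields).
SUPPRESSION_RULES = [
    {"rule": "FIM-FILE-CHANGE", "host": "build-*",
     "reason": "CI servers rewrite artifacts on every nightly build"},
]

def is_suppressed(alert: dict) -> bool:
    return any(alert["rule"] == r["rule"] and
               fnmatch.fnmatch(alert["host"], r["host"])
               for r in SUPPRESSION_RULES)

print(is_suppressed({"rule": "FIM-FILE-CHANGE", "host": "build-07"}))  # True
print(is_suppressed({"rule": "FIM-FILE-CHANGE", "host": "db-01"}))     # False
```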

Once tuning has limited the number of false positives, an organization should determine a process to take action on the remaining alerts based on risk. “This involves determining indicators of compromise that can be used to identify alerts that pose the most risk and addressing [them] in a timely fashion,” Parekh says.
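A bare-bones sketch of that risk-based triage: alerts that match an indicator of compromise or touch a critical asset rise to the top of the queue. The IOC and asset lists, and the weights, are hypothetical.

```python
IOCS = {"evil-domain.example"}      # hypothetical indicators of compromise
CRITICAL_ASSETS = {"payments-db"}   # hypothetical high-value hosts

def risk_score(alert: dict) -> int:
    score = 10                      # every alert carries some baseline risk
    if alert.get("indicator") in IOCS:
        score += 50
    if alert.get("host") in CRITICAL_ASSETS:
        score += 40
    return score

alerts = [
    {"host": "payments-db", "indicator": "evil-domain.example"},
    {"host": "intranet-wiki", "indicator": "cdn.example"},
]
for a in sorted(alerts, key=risk_score, reverse=True):
    print(risk_score(a), a["host"])   # highest-risk alert handled first
```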

Occasional false positive investigations are not entirely sunk costs, Cotter adds. “These incidents can be seen as an opportunity to exercise the incident response plan, and identify areas of process improvement for future incorporation into the organization’s policies and procedures,” he says. “Also, it should be recognized that an occasional false positive is a good thing to keep people aware of how incident response must be handled, as well as help validate the operation of tools and continually fine-tune their configuration.”
