Your next digital security guard should be more like RoboCop

Machine intelligence can be used to police networks and fill gaps where the available resources and capabilities of human intelligence are clearly falling short

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

Humans are clearly incapable of monitoring and identifying every threat on today's vast and complex networks using traditional security tools. We need to enhance human capabilities by augmenting them with machine intelligence. Mixing man and machine, in ways similar to what OmniCorp did with RoboCop, can heighten our ability to identify and stop a threat before it's too late.

The "dumb" tools that organizations rely on today are simply ineffective. Two consistent, yet still surprising, things make this ineptitude fairly apparent. The first is the amount of time hackers have free rein within a system before being detected: eight months at Premera and P.F. Chang's, six months at Neiman Marcus, five months at Home Depot, and the list goes on.

The second surprise is the response. Everyone usually looks backwards, trying to figure out how the external actors got in. Finding the proverbial leak and plugging it is obviously important, but this approach only treats a symptom instead of curing the disease.

The disease, in this case, is a growing faction of hackers who have become so skilled that they can infiltrate a network and roam it freely, accessing more files and data than most internal employees can. If it took months for Premera, Sony, Target and others to detect these bad actors in their networks and begin to patch the holes that let them in, how can they be sure that another group didn't find another hole? How do they know other groups aren't pilfering data right now? Today, they can't know for sure.

The typical response

Until recently, companies have had only one real response to rising threats, and it's one most organizations still employ: they re-harden systems, ratchet up firewall and IDS/IPS rules and thresholds, and put stricter web proxy and VPN policies in place. But in doing so, they drown their incident response teams in alerts.

Tightening policies and adding to the number of scenarios that raise a red flag just makes the job harder for security teams that are already stretched thin. It generates thousands of false positives every day, making it physically impossible to investigate every one. As recent high-profile attacks have proven, the deluge of alerts lets malicious activity slip through the cracks because, even when it is "caught," nothing is done about it.

In addition, clamping down on security rules and procedures just wastes everyone's time. By design, tighter policies will restrict access to data, and in many cases, that data is what employees need to do their jobs well. Employees and departments will start asking for the tools and information they need, wasting precious time for them and the IT/security teams that have to vet every request.

Putting RoboCop on the case

Machine intelligence can be used to police massive networks and help fill gaps where the available resources and capabilities of human intelligence are clearly falling short. It's a bit like letting RoboCop police the streets, but in this case the main armament is statistical algorithms. More specifically, statistics can be used to identify abnormal and potentially malicious activity as it occurs.
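To make "statistics can identify abnormal activity" concrete, here is a minimal sketch of the idea in Python. It is not any vendor's actual algorithm, just the simplest possible baseline-and-deviation check: flag a measurement that falls more than a few standard deviations from historical norms. The metric (bytes transferred per hour) and the threshold of 3.0 are illustrative assumptions.

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag a value that deviates more than `threshold` standard
    deviations from the historical baseline (a simple z-score test)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return False  # no variation observed; nothing to compare against
    return abs(current - mu) / sigma > threshold

# Illustrative metric: bytes (KB) transferred per hour by one host
baseline = [1200, 1350, 1100, 1280, 1400, 1250, 1320]
print(is_anomalous(baseline, 9800))  # a large exfiltration-like spike: True
print(is_anomalous(baseline, 1300))  # ordinary hourly volume: False
```

Real systems track many such metrics per host and per user, but the principle is the same: learn what normal looks like, then score each new observation against it rather than against a hand-written rule.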

According to Dave Shackleford, an analyst at SANS Institute and author of its 2014 Analytics and Intelligence Survey, "one of the biggest challenges security organizations face is lack of visibility into what's happening in the environment." The survey of 350 IT professionals asked why they have difficulty identifying threats and a top response was their inability to understand and baseline "normal behavior." It's something that humans just can't do in complex environments, and since we're not able to distinguish normal behavior, we can't see abnormal behavior.

Instead of relying on humans looking at graphs on big-screen monitors, or on human-defined rules and thresholds to raise flags, machines can learn what normal behavior looks like, adjusting in real time and becoming smarter as they process more information. What's more, machines possess the speed required to process the massive amount of information that networks create, and they can do it in near-real time. Some networks generate terabytes of data every second; humans, by contrast, can process no more than about 60 bits per second.
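The "adjusting in real time" part can be sketched with a streaming baseline that never stores raw history. This is a generic illustration using Welford's online mean/variance algorithm, not any particular product's implementation; the event values and the 3-sigma threshold are assumptions for the example.

```python
class StreamingBaseline:
    """Learn 'normal' incrementally via Welford's online algorithm,
    so the baseline adapts as events stream in, with O(1) memory."""

    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.threshold = threshold

    def observe(self, x):
        """Score x against the current baseline, then fold it in."""
        anomalous = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                anomalous = True
        # Update the running statistics with the new observation
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

# Six unremarkable events, then a spike: only the spike is flagged
model = StreamingBaseline()
flags = [model.observe(v) for v in [100, 105, 98, 102, 97, 103, 5000]]
print(flags[-1])  # True
```

Because the model updates on every observation, "normal" drifts along with legitimate changes in behavior instead of being frozen into a static rule.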

Putting aside the need for speed and capacity, a larger issue with the traditional way of monitoring for security issues is that rules are dumb. That's not just name-calling, either; they're literally dumb. Humans set rules that tell the machine how to act and what to do, so the machine's speed and processing capacity are irrelevant. While rule-based monitoring systems can be very complex, they're still built on a basic "if this, then do that" formula. Enabling machines to think for themselves and feed better data and insight to the humans that rely on them is what will really improve security.

It's almost absurd not to have a layer of security that thinks for itself. Imagine, in the physical world, if someone were crossing the border every day with a wheelbarrow full of dirt, and the customs agents, being diligent at their jobs and following the rules, sifted through that dirt day after day, never finding what they thought they were looking for. No one ever thinks to look at the wheelbarrow itself. If they had, they would have quickly learned he'd been stealing wheelbarrows the whole time!

Just because no one told the customs agents to look for stolen wheelbarrows doesn't make it OK, but as they say, hindsight is 20/20. In the digital world, we don't have to rely on hindsight anymore, especially now that we have the power to put machine intelligence to work and recognize anomalies that could be occurring right under our noses. For cybersecurity to be effective today, it needs at least a basic level of intelligence. Machines that learn on their own and detect anomalous activity can find the "wheelbarrow thief" who might be slowly siphoning data, even if you don't specifically know you're looking for him.

Anomaly detection is among the first technology categories where machine learning is being put to use to enhance network and application security. It's a form of advanced security analytics, a term that's used quite frequently. However, this type of technology must meet a few requirements to truly be considered "advanced": it must deploy easily, operate continuously against a broad array of data types and sources at huge data scales, and produce high-fidelity insights so as not to add to the alert blindness already confronting security teams.

Leading analysts agree that machine learning will soon be a "need to have" in order to protect a network. In a Nov. 2014 Gartner report titled, "Add New Performance Metrics to Manage Machine-Learning-Enabled Systems," analyst Will Cappelli directly states, "machine learning functionality will, over the next five years, gradually become pervasive and, in the process, fundamentally modify system performance and cost characteristics."

While machine learning is certainly not a silver bullet that will solve all security challenges, there's no doubt it will provide better information to help humans make better decisions. Let's stop asking people to do the impossible and let machine intelligence step in to help get the job done.

Prelert provides Advanced Analytics for Threat Activity Detection. Prelert helps organizations quickly detect, investigate, and respond to post-breach threat activities with automated, machine learning anomaly detection.
