Soon, your most important security expert won’t be a person

Hollywood images of artificial intelligence (AI) – iconic familiars like HAL 9000, the Terminator and Her’s Samantha – have shaped the public perception of the technology as a vessel for human-like interaction. Yet with AI’s resurgence seeing the technology applied to all manner of business problems, security specialists are rapidly warming to its potential as a fastidious assistant that works tirelessly to pick eyedroppers of insight from raging rivers of information.

This learning process has evolved from the refinement of big-data techniques feeding a surfeit of rich data sets to ever more sophisticated machine-learning solutions. Automated security systems now apply AI techniques to massive databases of security logs, building baseline behavioural models for different days and times of the week; if particular activity strays too far from this norm, it can be instantly flagged, investigated, and actioned in real time.
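
The mechanics behind such baselining can be surprisingly simple in principle. The following Python sketch, using purely hypothetical event counts, illustrates one way a per-day-and-hour baseline might be built and large deviations flagged; production systems are far more sophisticated, but the underlying idea is the same.

```python
# A minimal sketch of baselining log activity by day and hour, then flagging
# deviations. The event counts below are hypothetical; a real system would
# read them from a log store or SIEM.
from collections import defaultdict
from statistics import mean, stdev

# (day_of_week, hour, event_count) samples collected over several weeks
history = [
    (0, 9, 120), (0, 9, 132), (0, 9, 117), (0, 9, 125),
    (5, 3, 4),   (5, 3, 2),   (5, 3, 5),   (5, 3, 3),
]

# Build a per-(day, hour) baseline of typical event volume
baseline = defaultdict(list)
for day, hour, count in history:
    baseline[(day, hour)].append(count)

def is_anomalous(day, hour, count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from the norm."""
    samples = baseline.get((day, hour))
    if not samples or len(samples) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

# 900 events at 3am on a Saturday stand out against a baseline of roughly 3
print(is_anomalous(5, 3, 900))   # True
print(is_anomalous(0, 9, 128))   # False
```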

As security practitioners are well aware, the flood of security alerts has become a logistical nightmare. Figures in Cisco’s recent 2017 Annual Cybersecurity Report (ACR) suggest that 44 percent of security operations managers see more than 5000 security alerts per day. The average organisation can only investigate 56 percent of daily security alerts – 28 percent of which are ultimately held to be legitimate.
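
To put those percentages in concrete terms, consider a hypothetical organisation seeing exactly 5000 alerts a day; the rough arithmetic below simply applies the quoted rates and is not drawn from the report itself.

```python
# Back-of-the-envelope arithmetic for a hypothetical 5000-alert day, using
# the investigation and legitimacy rates cited in Cisco's 2017 ACR.
daily_alerts = 5000
investigated = daily_alerts * 0.56        # 56% of alerts get investigated
legitimate = investigated * 0.28          # 28% of those prove legitimate
uninvestigated = daily_alerts - investigated

print(f"Investigated: {investigated:.0f}")             # 2800
print(f"Legitimate threats found: {legitimate:.0f}")   # 784
print(f"Never looked at: {uninvestigated:.0f}")        # 2200
```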

Little wonder that AI and machine-learning systems are becoming beacons of hope for CSOs drowning in security alerts. And the shift towards cloud computing – which substantially increases the number of logged Internet requests – has exacerbated the need. Just 1 in 5000 user activities associated with connected third-party cloud applications, the Cisco analysis found, is suspicious.

“The challenge for security teams,” the report’s authors note, “is pinpointing that one instance… Only with automation can security teams cut through the ‘noise’ of security alerts and focus their resources on investigating true threats. The multistage process of identifying normal and potentially suspicious user activities… hinges on the use of automation, with algorithms applied at every stage.”

The scope of the problem becomes clear when considering the volumes of attacks currently traversing the Internet. Security vendor Trend Micro, for one, reports blocking 81.9 billion threats through its Smart Protection Network in 2016 alone – a 56 percent increase compared with the previous year – and that’s just from one of dozens of vendors that are actively dealing with customers’ security risks using their cloud-based detection services.

Despite its promise, artificial intelligence technology has traditionally been unwieldy for end users to procure and implement. This has led firms like IBM, Amazon Web Services, Microsoft Azure and Unisys, as well as startups like BigML, Ersatz and DataRobot, to offer machine learning as a service (MLaaS), providing API-based access to the core libraries necessary to apply machine-learning techniques to large data sets.

Security firms are also incorporating such capabilities into their offerings, with companies like Nuix and Huntsman Security using machine learning to deliver security monitoring tools that are tuned to finding security anomalies and designed to scale along with business demand. And startups are using security as the base use case for deep-learning research that has, in early testing of technology from machine-learning startup Deep Instinct, improved malware detection rates by 20 to 30 percent compared with existing solutions.

Yet even as AI capabilities become readily available within the security space, organisations need to pivot not only to take advantage of them, but also to field the skilled experts who know what to do with their output. Here, many businesses are lacking: fully 37 percent of Cisco ACR respondents said their infrastructure was upgraded regularly but they still weren’t equipped with the latest and greatest tools.

This had created a security ‘effectiveness gap’, Dave Justice, vice president of Cisco Systems’ Global Security Sales Organisation, said at the company’s recent Cisco Live! conference, and better machine-learning tools were going to be the key to closing it.

“What’s going to solve this problem is not going to be people, and it’s not going to be how much money you can throw at it,” he said. “It’s going to be automation and machines making sense of this data, and responding to it in an automated fashion.”

Easier access to AI libraries and techniques will also benefit attackers, who are already reportedly looking into ways to use MLaaS offerings to improve the effectiveness of their own attacks. This trend has led some security pundits to predict that increasing use of AI would lead to machine-versus-machine deathmatches based on automated attacks and defensive mechanisms.

In the short term, however, AI is still on a short leash within many security environments: a recent Carbon Black survey of 410 cybersecurity researchers found that 74 percent still see AI-driven cybersecurity solutions as flawed, and 70 percent said attackers can bypass them.

Fully 87 percent still don’t trust artificial intelligence to replace human decision-making in security, estimating it will take three years, on average, before the technology is advanced enough to take over from humans.

That’s a reality check that means for now, at least, AI and machine-learning tools will primarily serve as proxy analysts – solving problems of scale but still deferring to human masters when it comes to taking action.

“Based on how cybersecurity researchers perceive current AI-driven security solutions, cybersecurity is still very much a ‘human vs. human’ battle, even with the increased levels of automation seen on both the offensive and defensive sides of the battlefield,” Carbon Black co-founder and chief technology officer Michael Viscuso said in a statement.

“The fault with machine learning exists in how much emphasis organisations may be placing on it and how they are using it. Static analysis-based approaches relying exclusively on files have historically been popular, but they have not proven sufficient for reliably detecting new attacks. Rather, the most resilient ML approaches involve dynamic analysis – evaluating programs based on the actions they take.”
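
The distinction Viscuso draws can be sketched in a few lines of Python. This is an illustrative toy only – the hashes, event names, weights and threshold are invented, and it does not represent any vendor’s actual product – but it shows why a behaviour-based check can catch what a file-signature check misses.

```python
# A toy contrast between static, file-based detection and dynamic,
# behaviour-based detection. All values here are invented for illustration.
import hashlib

KNOWN_BAD_HASHES = {"5d41402abc4b2a76b9719d911017c592"}  # hypothetical signature list

def static_verdict(file_bytes: bytes) -> bool:
    """Static check: does the file match a known-bad signature?
    Misses anything it has never seen before."""
    return hashlib.md5(file_bytes).hexdigest() in KNOWN_BAD_HASHES

def dynamic_verdict(observed_events: list[str]) -> bool:
    """Dynamic check: score what the program actually does when it runs."""
    suspicious = {
        "disables_backups": 3,
        "encrypts_many_files": 3,
        "contacts_unknown_host": 1,
        "reads_config": 0,
    }
    score = sum(suspicious.get(event, 0) for event in observed_events)
    return score >= 4  # threshold chosen arbitrarily for the example

# A brand-new binary evades the signature check but not the behavioural one
print(static_verdict(b"never-seen-before payload"))            # False
print(dynamic_verdict(["reads_config", "disables_backups",
                       "encrypts_many_files"]))                # True
```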

Yet evaluating programs is only one part of the innovation soon to come around AI. With usability a core goal, many security companies will focus on better ways for alerts to be rapidly triaged and escalated to the appropriate staff for action.

Over time, tools will become more sophisticated as ever-larger security data sets help learning algorithms add ever more nuance to their detection mechanisms. They will also become valuable for purposes like detecting business email compromise (BEC) attacks, a task that relies on automated scanning of emails for key trigger words suggesting a message might be illegitimate.
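
A trigger-word scanner of that kind can be sketched very simply. The phrases, weights and threshold below are assumptions for illustration only; real BEC detection combines many more signals, such as sender reputation and lookalike domains.

```python
# A minimal sketch of trigger-word scanning for BEC-style emails. The word
# list, weights and threshold are illustrative assumptions, not a real rule set.
import re

TRIGGER_PHRASES = {
    "wire transfer": 2,
    "urgent": 1,
    "confidential": 1,
    "change of bank details": 3,
    "gift cards": 2,
}

def bec_risk_score(subject: str, body: str) -> int:
    """Count weighted trigger phrases in a message's subject and body."""
    text = f"{subject} {body}".lower()
    return sum(weight for phrase, weight in TRIGGER_PHRASES.items()
               if re.search(r"\b" + re.escape(phrase) + r"\b", text))

def flag_for_review(subject: str, body: str, threshold: int = 3) -> bool:
    """Escalate messages whose score crosses an (arbitrary) threshold."""
    return bec_risk_score(subject, body) >= threshold

print(flag_for_review(
    "Urgent wire transfer",
    "Please process this confidential payment before end of day."))   # True
print(flag_for_review("Lunch on Friday?", "Pizza or sushi?"))          # False
```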

They may not have the human personalities of movie AIs, but evolving tools for data analysis will become unremarkable parts of the security environment in no time. CSOs will soon be granting AI-driven systems a measure of autonomy to not only detect but to resolve security issues – and in so doing, they will help deliver the self-learning, self-healing systems that science-fiction writers have been imagining for more than half a century.
