As security AI explodes, lack of efficacy comparisons leaves CSOs flying blind

The incorporation of artificial intelligence into information-security tools has reached fever pitch, but one cybersecurity expert has warned that a lack of standardisation, combined with the immaturity of machine-learning security technologies, means potential customers are often flying blind when choosing a solution.

Enthusiasm over AI and machine learning (ML) technologies has grown significantly, fuelled by companies’ increasing desire for tools that can detect and classify security threats based on behaviour, including behaviour that has never been seen before.

Bitdefender, for one, advertises the incorporation of “9th generation” machine learning and AI-based technologies in its recently launched Total Security 2018, while analytics vendor ExtraHop touts the application of “real-time analytics and machine learning to all digital interactions on the network” in its newly launched Threat ID bundle.

Big-data innovator Splunk recently partnered with Palo Alto Networks to deliver an ML-driven ‘Security Nerve Center’ that taps Palo Alto’s cloud-based Application Framework, while machine learning is also key to the value proposition of security firms like CrowdStrike – which debuted in Australia last year and recently attracted a $US100m Series D financing round that included an investment by Telstra.

Despite heavy investment in AI and ML solutions, their promise remains vague for businesses that are still in the early days of adoption and may be loath to hand over key business functions to algorithms that are both untested and fluid.

Consistent performance measures would, for example, help customers evaluate whether the machine learning-powered identity management solution that OneLogin recently launched offers real benefits compared with other solutions – or whether Forcepoint can deliver standout protection with a data loss prevention (DLP) solution that the company claims “applies machine learning to intelligently rank and classify security incidents across the cyber continuum of intent, including accidental leaks, broken business process or data theft. Security teams can proactively address issues and quickly prioritise responses for incidents linked to insider threats versus inadvertent user error.”

Marketing of such solutions inevitably focuses on benefits but doesn’t offer any insight into the strengths and weaknesses of the algorithms used by each vendor. The industry has yet to provide a consistent methodology for businesses to evaluate the claims being made around intelligent security tools.

“There isn’t really an adequate framework to be able to base these things off against each other,” Justin Dolly, chief security officer with security firm Malwarebytes, recently told CSO Australia. “It’s not easy to compare them.”

Malwarebytes, for its part, recently complemented its endpoint-protection tools with the Anomaly Detection Engine – a ML layer that the company says “leverages machine learning techniques to catch malware based on behaviours of ‘good’ (non-malware) files [and]… keep up with the rapidly evolving threat landscape.”
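
Malwarebytes hasn’t published the engine’s internals, but the general technique it describes – training only on known-good samples, then flagging anything that deviates – can be sketched in a few lines. The example below uses scikit-learn’s IsolationForest; the feature names and all values are invented for illustration and should not be read as Malwarebytes’ implementation.

```python
# One-class anomaly detection: learn what "good" files look like, then
# flag outliers. A generic sketch only -- the features (entropy, import
# count, section count) and every value here are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in feature vectors for 1,000 known-good binaries.
good_files = rng.normal(loc=[5.0, 120.0, 6.0],
                        scale=[0.5, 30.0, 1.0],
                        size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(good_files)  # train on benign behaviour only

# Score unseen samples: -1 means anomalous (possible malware), 1 benign.
suspects = np.array([
    [7.9, 4.0, 11.0],   # high entropy, almost no imports: suspicious
    [5.1, 115.0, 6.0],  # close to the learned benign profile
])
print(model.predict(suspects))  # e.g. [-1  1]
```

The appeal of this approach is that it needs no malware samples at all, which is why vendors pitch it as a hedge against never-before-seen threats.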

Yet Dolly concedes that concrete metrics of ML accuracy are hard to come by; instead, Malwarebytes, like other firms, has leaned on serviceable proxies that at least hint at the level of manpower required to shepherd the solutions.

“The overwhelming metric that’s used to compare them is false positive rates,” Dolly explains, “and when it comes to machine learning it has always been this way. Users’ ultimate question is ‘Can I have this thing running on its own without any adult supervision or human interaction?’”
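
The metric itself is simple arithmetic over a confusion matrix: the false positive rate is the share of benign items a detector wrongly flags. A worked example with invented counts shows why even a small rate translates into real supervision cost:

```python
# Illustrative detection results for 500 malware and 10,000 benign files.
# All counts are invented for the example.
true_positives = 480    # malware correctly flagged
false_negatives = 20    # malware missed
false_positives = 150   # benign files wrongly flagged
true_negatives = 9850   # benign files correctly ignored

# FPR = FP / (FP + TN): fraction of benign activity that raises an alert.
fpr = false_positives / (false_positives + true_negatives)

# TPR (detection rate) = TP / (TP + FN): fraction of malware caught.
tpr = true_positives / (true_positives + false_negatives)

print(f"False positive rate: {fpr:.2%}")  # 1.50%
print(f"Detection rate:      {tpr:.2%}")  # 96.00%
```

Even a 1.5 percent false positive rate means 150 benign files queued for human review in this sample – precisely the “adult supervision” overhead Dolly describes.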

Exploring this question inevitably leads customers to contemplate the potential of AI to supplant humans in conventional operational roles. Service-management firm Demisto, for one, has positioned its AI-driven DBot technology as a way to emulate the actions of human security analysts, while a recent survey by PRINCE2 developer AXELOS found that 6 out of 10 project managers believe AI and ML will have “a profound impact” on the profession, with 59 percent predicting that automation will replace humans on many routine tasks.

In the long term, Dolly says, the best measure of these technologies’ success may come not from performance against arbitrary benchmarks, but from the degree to which they let businesses offload mundane and time-consuming tasks such as security log analysis.

“The promise of many of these machine learning technologies is that I need to hire less humans,” he said. “They are expensive and hard to find – and I want technology to be able to bolster my technology solutions.”

One area that’s likely to change is regular security testing: in a recent analysis, Gartner predicted that, by 2020, 10 percent of penetration tests would be conducted by machine learning-based smart machines.

Gartner identified the desire for service-management efficiency as a key driver of growing adoption of ML-based tools for IT resilience orchestration automation, noting that such investments will “more than triple” by 2020, “helping reduce business outages from cascading IT failures.”

Whether or not the industry settles on a consistent measure of ML effectiveness, experts agree that reliance on such technologies is set to increase – and should, as businesses modernise systems-resilience strategies that have been heavily human-dependent for decades.

“Continued dependence on antiquated legacy systems is not sustainable,” Cylance chief security and trust officer Malcolm Harkins wrote in a recent Institute for Critical Infrastructure Technology (ICIT) essay about the importance of intelligent security tools to government modernisation.

“Intelligent modernisation will incorporate layered defense-grade bleeding-edge information security solutions,” he wrote, noting that AI and ML solutions “can decrease solution fatigue, increase cyber-hygiene compliance, and offset the cost of modernisation…. Since these intelligent solutions better protect the organisation and can be applied to automate cyber-hygiene practices, scarce public sector information security personnel can focus their attention on maintaining and securing systems rather than on policing other employees or on fixing legacy systems with duct tape and prayers.”
