
Cybersecurity strides: AI and machine learning aren’t the ultimate cybersecurity weapons … yet

by Budd Ilic, ANZ Country Manager, Zscaler
05 November, 2019 10:20

Artificial intelligence and machine learning are important weapons in the cybersecurity arsenal but they’re not yet a complete solution – or a substitute for human smarts.

Are you cognisant of the hype around artificial intelligence and machine learning? It would be pretty hard not to be for anyone with a modicum of exposure to the world of business and technology. These technologies have been hailed as this century’s great game changers – technologies that can “teach” robots to think and behave just as human beings do across an ever-increasing range of scenarios.

Technology research firm IDC predicted Asia Pacific investment in AI systems would hit $5.5 billion in 2019, up 80 percent on the previous year’s figure. Globally, spending on AI systems is expected to be just shy of US$80 billion by 2022, a figure that represents an extraordinary investment on the part of businesses and organisations anxious to ride the automation wave.

Given the unceasing rush of predictions about how AI and machine learning will revolutionise occupations and industries during the forthcoming decade, it’s easy to forget that these technologies are far from new.

The principles behind AI and machine learning – the use of algorithms for classification and pattern-matching purposes – have been around for decades. What’s changed is the availability and affordability of the infrastructure needed to run these algorithms, in volume and at speed. Cloud computing offers even enterprises of modest size the capacity to crunch the data without breaking the bank. And the digital revolution and the rise of big data have meant these same enterprises have a surplus of data to crunch.

The result: effective – and cost-effective – AI that can enable machines to perform a much broader and more complex array of tasks than was possible in the past.

Scoping the potential for cybersecurity

Cybersecurity currently represents one of the most promising applications for AI. “Teaching” systems to weed out rogue programs swiftly and cost-effectively without the need for human intervention certainly appears a reasonable possibility – provided three key building blocks are in place. Those building blocks, illustrated in the brief sketch after this list, are:

  • access to an abundance of data
  • data engineering expertise to design effective threat detection models
  • input from security domain experts who can provide clarity and insight around the classification of data samples as suspicious or non-suspicious.
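To make that combination concrete, the sketch below shows how expert-labelled samples and a handful of engineered features might feed a simple classifier. It is a minimal illustration only: the features, values, labels and choice of library (Python with scikit-learn) are assumptions for the purpose of the example, not a description of any vendor’s actual model.

```python
# A minimal sketch of supervised threat detection: engineered features plus
# expert-supplied labels train a classifier that can then score new samples.
# All feature names and values below are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical engineered features for each message:
# [attachment size in KB, number of embedded links, sender reputation score]
samples = [
    [120, 0, 0.9],   # routine internal message
    [640, 14, 0.2],  # bulk mail with many links from a low-reputation sender
    [80, 1, 0.8],
    [900, 22, 0.1],
]
# Verdicts supplied by security domain experts: 1 = suspicious, 0 = not suspicious
labels = [0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(samples, labels)

# Score a new, unseen message
new_message = [[700, 18, 0.15]]
print(model.predict(new_message))        # predicted class
print(model.predict_proba(new_message))  # confidence for each class
```

The point of the sketch is the dependency it exposes: without plentiful data, well-designed features and trustworthy expert labels, the classifier has nothing useful to learn from.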

Convincing rhetoric to the contrary notwithstanding, many security solution vendors have yet to nail this winning combination.

Moreover, AI’s effectiveness in combating a range of security challenges is variable. In some functions, it excels. Phishing emails, for example, can be identified by slight variations in the reproduced logos and images they typically use in their efforts to impersonate bona fide individuals and organisations. These likenesses can be good enough to fool the human eye – but not an intelligent system designed to flag even the tiniest of anomalies.
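To illustrate how a machine can pick up differences the human eye glosses over, the sketch below compares a perceptual “average hash” of a suspect logo against a known-good reference and flags near-matches. The file names, threshold and technique are illustrative assumptions, not a description of how any particular product detects phishing.

```python
# A sketch of flagging a lookalike logo: downscale both images, encode each
# pixel as brighter/darker than the mean, and count the bits that differ.
from PIL import Image  # pip install Pillow

def average_hash(path: str, hash_size: int = 8) -> int:
    """Return a simple perceptual hash of the image at `path`."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")

reference = average_hash("genuine_logo.png")            # known-good brand asset
suspect = average_hash("logo_from_suspect_email.png")   # image lifted from the email
distance = hamming_distance(reference, suspect)

# 0 means visually identical; a small non-zero distance suggests a near-copy
# that could fool a reader but is still detectable by the machine.
if 0 < distance <= 10:
    print(f"Possible impersonation: near-match differing in {distance} bits")
```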

Conversely, the limited ability of AI-driven systems to explain the reasoning behind blocking or flagging files may make their use as a primary security measure problematic for many enterprises.

If organisations are to avoid situations where files are deemed malicious on the unverifiable verdict of an automated system, they may still need to devote specialised security staff to understanding and analysing suspicious patterns as they emerge.
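For contrast, the sketch below shows the kind of human-readable justification an interpretable model, such as a small decision tree, can produce. The features and data are invented for illustration, and most high-accuracy detection models provide nothing this transparent, which is exactly the gap described above.

```python
# A sketch of an interpretable verdict: a shallow decision tree whose rules
# can be printed and checked by an analyst. Features and data are made up.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["attachment_size_kb", "embedded_links", "sender_reputation"]
samples = [
    [120, 0, 0.9],
    [640, 14, 0.2],
    [80, 1, 0.8],
    [900, 22, 0.1],
]
labels = [0, 1, 0, 1]  # 1 = suspicious, 0 = not suspicious (expert-supplied)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(samples, labels)

# Prints the feature thresholds that lead to each verdict, giving an analyst
# something concrete to verify or dispute rather than an opaque "blocked" flag.
print(export_text(tree, feature_names=feature_names))
```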

Multiple defences matter more than ever

Although AI may be a valuable addition to the cybersecurity toolbox, it’s far from a complete solution. The most robust defence strategies are multifaceted and don’t rely on a single technology, however intuitive or powerful that technology may be.

Keeping systems patched and up to date is still vital, even in enterprises that have deployed AI-powered endpoint protection promising to stop all threats. It is also vital to conduct regular and rigorous user training to reduce the likelihood of malware being introduced, or data breaches occurring, through human error.

When properly harnessed by specialists, AI and machine learning can help strengthen the protective shield against hackers and high-tech criminals that cybersecurity solutions are designed to provide. In time, they may become “hero” technologies – the ultimate answer to the ever-growing challenge enterprises face in keeping their systems and data safe.

Until that day comes, maintaining vigilance on multiple fronts remains a prudent approach for Australian organisations that want to ensure the security and integrity of the solutions upon which their operations depend.