In the mid-to-late 1800s, snake oil was sold as a cure for all ailments. People from all over the country flocked to hear snake oil salesmen spruik their wonder-potions; they invested their hard-earned cash and, ultimately, the big claims fell short of expectations.
Today, AI is approaching the same dilemma: big claims of revolutionary technologies that are simply not living up to expectations… yet. Growing disillusionment is setting in as both AI and machine learning are held accountable for their claims.
However, AI remains the trend du jour, with Australians still clamouring to build AI startups. According to the 2018 Startup Muster report, artificial intelligence is the biggest startup industry in Australia, having grown from 14.5 percent of all startups in 2017 to 20.6 percent in 2018. The appetite is still there.
In cybersecurity, AI has long been heralded as the next best defence against all attacks. The premise of AI has shaken up the cybersecurity industry with the promise of faster, smarter ways to identify and analyse threats in real time, and then to neutralise them before disaster hits.
But are we hoping for too much, too soon from AI?
The promise of AI
In addition to the myriad constantly evolving threats in today’s landscape, organizations are hampered by an ongoing skills shortage: a 2018 report warned that Australia needs to train 18,000 more people by 2026 to protect businesses on these shores. Currently, that shortfall is costing Australia more than $400m in lost revenue and wages. In an attempt to fill the void, organizations have turned to the promise of big data, artificial intelligence (AI), and machine learning.
And why not? In other industries, these technologies represent enormous potential. In healthcare, AI opens the door to more accurate diagnoses and less invasive procedures. In a marketing organization, AI enables a better understanding of customer buying trends and improved decision making. In transportation, autonomous vehicles represent a big leap for consumer convenience and safety; revenue from automotive AI is expected to grow from $404 million in 2016 to $14 billion by 2025.
AI in Cyber
The buzz for cybersecurity AI is palpable. In the past two years, the promise of machine learning and AI has enthralled and attracted marketers and media, with many falling victim to feature misconceptions and muddy product differentiation. In some cases, AI startups are concealing just how much human intervention is involved in their product offerings. In others, the incentive to offer machine learning-based products is too compelling to ignore, if for no other reason than to check a box with an intrigued customer base.
Today, cybersecurity AI in the purest sense is nonexistent, and we predict it will continue to evade us through 2019, too. While AI is about reproducing cognition, today’s solutions are actually more representative of machine learning, requiring humans to upload new training datasets and expert knowledge. These tools increase analyst efficiency, but at this time the process still requires human input, and high-quality input at that. If a machine is fed poor data, its results will be equally poor. Machines need significant user feedback to fine-tune their monitoring; without it, analysts cannot extrapolate new conclusions.
On the other hand, machine learning provides clear advantages in outlier detection, much to the benefit of security analytics and SOC operations. Unlike humans, machines can handle billions of security events in a single day, providing clarity around a system’s “baseline” or “normal” activity and flagging anything unusual for human review. Analysts can then pinpoint threats sooner through correlation, pattern matching, and anomaly detection. While it may take a SOC analyst several hours to triage a single security alert, a machine can do it in seconds and continue even after business hours.
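The baseline-and-outlier approach described above can be sketched in a few lines. This is a deliberately minimal illustration, not any vendor’s actual method: it treats the historical mean of hourly event counts as the “baseline” and flags anything more than a few standard deviations away for human review. The function name, data, and threshold are all hypothetical.

```python
# Minimal sketch of baseline/outlier detection as used in security
# analytics. Illustrative only: real systems model many features,
# not a single count, and use far more data.
from statistics import mean, stdev

def find_anomalies(event_counts, threshold=3.0):
    """Flag (hour, count) pairs that deviate from the baseline
    by more than `threshold` standard deviations."""
    baseline = mean(event_counts)   # "normal" activity level
    spread = stdev(event_counts)    # typical variation around it
    return [
        (hour, count)
        for hour, count in enumerate(event_counts)
        if spread and abs(count - baseline) / spread > threshold
    ]

# Hourly login counts with one suspicious spike at hour 8.
counts = [100, 102, 98, 101, 99, 97, 103, 100, 5000, 101, 99, 100]
print(find_anomalies(counts))  # → [(8, 5000)]
```

Even this toy version shows why data quality matters: if the spike at hour 8 had been left in the training window used to compute the baseline, it would inflate the standard deviation and make later, smaller attacks harder to flag.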
However, organizations are relying too heavily on these technologies without understanding the risks involved. Algorithms can miss attacks if training data has not been thoroughly scrubbed of anomalous data points and of the bias introduced by the environment from which it was collected. In addition, certain algorithms may be too complex for analysts to understand what is driving a specific set of anomalies.
Aside from the technology, investment is another troublesome area for cybersecurity AI. Venture capitalists seeding AI firms expect a timely return on investment, but the AI bubble has many experts worried. Michael Wooldridge, head of Computer Science at the University of Oxford, has expressed his concern that overhyped “charlatans and snake-oil salesmen” exaggerate AI’s progress to date.
Researchers at Stanford University launched the AI Index, an open, not-for-profit project meant to track activity in AI. In their 2017 report, they state that even AI experts have a hard time understanding and tracking progress across the field.
While money is currently pouring into Australian AI businesses, a slowdown of funding for AI research is imminent, reminiscent of the “AI Winter” of 1969, in which the US Congress cut funding as results lagged behind lofty expectations. But attacker tactics are not bound by investments, allowing for the continued advancement of AI as a hacker’s tool to spotlight security gaps and steal valuable data.
The gold standard in hacking efficiency, weaponized AI offers attackers unparalleled insight into what, when, and where to strike. In one example, AI-created phishing tweets were found to have a substantially better conversion rate than those created by humans. Artificial attackers are formidable opponents, and we will see the arms race around AI and machine learning continue to build.