As Artificial Intelligence (AI) becomes increasingly commoditised, cyber attackers will take advantage of it in much the same way that businesses do. Just as 2016 saw the first massive IoT-driven botnet unleashed on the Internet, 2017 will very likely be marked by the first AI-driven cyber-attack.
These attacks will be characterised by their ability to learn and improve as they evolve, transforming the “advanced attack” into the commonplace and driving a huge economic spike in the hacker underground. Attacks that were typically reserved for nation-states and criminal syndicates will become available on a far greater scale.
Indeed, the idea of robots taking on humans has caught our imagination because it’s not far off our reality, and we’re questioning whether machines can, one day, become more intelligent and more powerful than we are.
Far from being just a scaremongering topic for science fiction entertainment, artificial intelligence and robotics have been identified as the emerging technologies with the greatest potential for negative consequences over the next decade. The World Economic Forum’s new Global Risks Report highlights some of these risks, from job losses to autonomous weapons and, critically, AI’s ability to attack online systems. In 2016, Mayhem, winner of the DARPA Cyber Grand Challenge, was pitted against humanity’s best at DEF CON. Even though Mayhem came last, as we have seen with machine learning in chess, it is reasonable to assume that Mayhem and its successors will become ever more formidable over time.
This is the battle cybersecurity experts are fighting every single day. How can we possibly compete with AI-driven cyber-attacks that learn and improve as they evolve?
We’re talking here about ransomware attacks that get smarter and more targeted about what information is held hostage and what to charge for it. AI could also be used to mimic the writing style of friends or colleagues, so you immediately trust that it is them and not a cybercriminal who has taken control of their account. These technological developments could transform the “advanced attack” into the commonplace, meaning that attacks once reserved for nation-states and criminal syndicates could soon be available on a far greater scale.
Of course, security software vendors can also play at this game, and the Global Risks Report states that “whether AI applications are better at learning to attack or defend will determine whether online systems become more secure or more prone to successful cyberattacks”. It will be a tightly contested race between artificial attack and artificial defence, so the good guys must continually innovate with AI to predict, prevent and stay ahead of the next major cyberattack.
We’re seeing some good progress as the industry develops stronger ways to combine insights gathered from customer data to produce a more complete and immediate understanding of evolving threats. More automation is also being used to complete time-consuming tasks, such as analysing the normal behaviour of privileged users and detecting any anomalies.
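The anomaly-detection approach described above can be sketched in a few lines: establish a statistical baseline of a privileged user’s normal activity, then flag days that deviate sharply from it. The sketch below is purely illustrative — the data, the three-standard-deviation threshold and the function names are assumptions for the example, not a description of any vendor’s product.

```python
import statistics

def build_baseline(daily_event_counts):
    # Baseline of "normal" behaviour: mean and standard deviation
    # of the user's daily privileged-activity counts.
    return statistics.mean(daily_event_counts), statistics.stdev(daily_event_counts)

def is_anomalous(count, baseline, threshold=3.0):
    # Flag a day whose count deviates more than `threshold`
    # standard deviations from the baseline mean.
    mean, stdev = baseline
    if stdev == 0:
        return count != mean
    return abs(count - mean) / stdev > threshold

# Thirty days of a privileged user's typical activity (hypothetical data).
history = [42, 38, 45, 40, 39, 41, 44, 37, 43, 40,
           39, 42, 46, 38, 41, 40, 43, 39, 44, 41,
           38, 42, 40, 45, 39, 41, 43, 40, 42, 38]
baseline = build_baseline(history)

print(is_anomalous(41, baseline))   # an ordinary day → False
print(is_anomalous(400, baseline))  # a sudden spike in activity → True
```

In practice, vendors use far richer models than a single z-score, but the principle is the same: learn what normal looks like, then let the machine surface the exceptions for a human analyst.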
Indeed, the area where AI and machine learning are most obviously useful is in freeing up threat researchers to focus their time and energy on identifying new and complex threats. That is to say, AI and machine learning can potentially save a great deal of time on lower-level threat classification.
Although the threat might not look like the human-like robots we see on TV, AI remains the leading driver of economic, geopolitical and technological risks. The threat of AI-driven cyber-attacks must not get lost in this conversation in Australia: defeating them must be a major focus for the industry throughout 2017.