CIO

When machines do the hacking

By Patrick Hubbard, Head Geek, SolarWinds

When it comes to cybersecurity, there’s only one thing worse than hackers: robot hackers. No, we’re not talking about Skynet – at least, not yet. But for IT managers, the next wave of cybersecurity threats is likely to be automated, targeted, and almost impossible for humans to predict.

That’s because of the technology phenomenon of machine learning, whereby a software platform analyses large volumes of data to find patterns, then applies what it has learned to unfamiliar situations. Machine learning is already proving disruptive in a range of fields, from voice recognition (think Alexa and Siri) to financial services advice and even biomedical practices like diagnostic imaging. If cybersecurity pros aren’t careful, its most disruptive application could well be in cybercrime.

Beware the Bots

A typical sophisticated hack is tailored to bypass the target organisation’s unique configuration of defences, focusing on known vulnerabilities – in technical systems or user behaviour – that the hacker has identified. Human attackers, however, suffer from the same cognitive biases as everyone else, and inevitably overlook vulnerabilities, or opt against exploiting ones they’re less comfortable with. Moreover, even large teams of human hackers can only coordinate attacks against a few vulnerabilities at any given time before the speed and attention required overwhelms them.

Machine-learning attacks will do away with these human limitations. The more data they absorb about a target – from undefended ports to enterprise org charts to catalogues of new zero-day vulnerabilities – the more capable they’ll be of orchestrating a successful attack. And unlike human hackers, machine learning carries no bias in how it uses that data: it can develop incredibly novel attacks, combining multiple vectors and techniques in ways that even the most creative cybercriminals wouldn’t think of.

If one permutation of attacks fails, machine learning’s highly automated nature means it can simply cycle through more combinations at dizzying speed, with a randomness and persistence that will vex even the most diligent cybersecurity operators. And if that’s not worrying enough, most freely available machine-learning platforms are hosted in the cloud – making them as elastically scalable as the very SaaS offerings their targets are using. Forget hiring more black-hat engineers: to ramp up the intensity of an assault, all a hacker needs to do is provision more cloud instances with a (stolen) credit card.

In other words, machine-learning hacks offer far greater sophistication, scale, and ROI than traditional cyberattacks. We can assume that state-sponsored actors are already testing or even using machine learning in their arsenals – but if a nation-state is targeting your enterprise, there’s not much that even the best-resourced cybersecurity team can do. The more likely risk comes from commoditised machine-learning services – the sort that major cloud providers are now making openly available – being adapted by freelance cybercriminals. But fear not: in this war against the machines, resistance isn’t futile.

AI’s Achilles’ Heel

There are a few means by which enterprises can protect themselves against the first wave of machine-learning attacks. As with any other cybersecurity threat, detection is of paramount importance. The first and most critical step is to determine whether you are under AI attack, because an AI-led assault may warrant different precautions – even draconian measures – than a human-led one. Machine-learning attacks can be identified by two traits: the novelty and the orchestration of their approaches, so cybersecurity professionals should develop skills for detecting novel, multi-faceted attacks. If you see a range of SQL injections, probes, targeted email phishing, and DDoS attacks being executed against your organisation at the same time, without any discernible pattern in intensity or sequence, you could be facing an AI rather than your typical mercenary or disgruntled software engineer.
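To make that concrete, here’s a minimal sketch in Python of what orchestration detection might look like at the log level. The event format, the category labels, and the thresholds are all assumptions for illustration, not a reference implementation:

```python
# Minimal sketch: flag a window of time in which several *distinct* attack
# vectors fire at once. Assumes upstream tooling (IDS/WAF rules) has already
# tagged each event with a category; all names and thresholds are illustrative.
from datetime import timedelta

WINDOW = timedelta(minutes=10)  # correlation window
THRESHOLD = 3                   # distinct vectors before we suspect orchestration

def suspect_orchestration(events, window=WINDOW, threshold=THRESHOLD):
    """Return (True, vectors) if any window holds `threshold`+ categories.

    `events` is an iterable of (timestamp, category) tuples, e.g.
    (datetime(...), "sqli"), (datetime(...), "phishing"), (datetime(...), "ddos").
    """
    events = sorted(events)  # order chronologically
    start = 0
    for end, (t_end, _) in enumerate(events):
        while events[start][0] < t_end - window:  # slide the window forward
            start += 1
        vectors = {category for _, category in events[start:end + 1]}
        if len(vectors) >= threshold:
            return True, vectors
    return False, set()
```

The categories and cut-offs would come from your own SIEM taxonomy; the point is that the signal is the breadth of simultaneous vectors, not any single alert.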

Machine learning does have one weakness: the machine needs to learn before it can get to work, and that learning process may give away an imminent attack to observant cybersecurity operators. Sometimes the learning takes place through acquiring data. Persistent port scans, strangely personalised spam messages, and even random phone calls from “marketers” may all indicate that a cybercrime group is trying to gather the data about your organisation it needs to feed into the machine.
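As a rough illustration of spotting that data-gathering phase, a defender might watch for a single source address touching an unusually wide spread of ports. This is a sketch only, assuming firewall or NetFlow records reduced to (source IP, destination port) pairs, with an arbitrary placeholder cut-off:

```python
# Sketch: persistent port scans stand out as one source touching many ports.
from collections import defaultdict

SCAN_CUTOFF = 100  # distinct ports from one source before we call it a scan

def likely_scanners(connection_log, cutoff=SCAN_CUTOFF):
    """Return source IPs that probed an unusually wide range of ports."""
    ports_seen = defaultdict(set)
    for src_ip, dst_port in connection_log:
        ports_seen[src_ip].add(dst_port)
    return {ip for ip, ports in ports_seen.items() if len(ports) >= cutoff}
```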

In other cases, learning happens through practical experience. So if you see an organisation in your industry succumb to an inexplicable combination of sophisticated attacks, raise your threat level. If the attack was indeed AI-led, its human operators will likely use the experience to inform even more advanced attacks against similar organisations.

The matrix has you…protected

So what can enterprises do to repel the advance of the machines? Some AI platforms are already being applied to cybersecurity, but unless they can respond in real time, they’re likely to be stuck playing catch-up to cybercrime’s first movers. Machine learning can already supplement cybersecurity teams by automating responses based on simple protocols, but that will only prove effective against lower-level automated attacks like the ones we’re already seeing today.
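That “simple protocol” style of automated response might look something like the sketch below: if one source trips too many alerts in a short window, quarantine it and tell a human. The block_ip and notify_team hooks are hypothetical stand-ins for whatever your firewall or automation platform actually exposes:

```python
# Sketch of a threshold-based automated response. The enforcement hooks are
# hypothetical; real deployments would call a firewall API or SOAR playbook.
import time

ALERT_LIMIT = 5      # alerts tolerated per source per window
WINDOW_SECONDS = 60

recent_alerts = {}   # src_ip -> timestamps of recent alerts

def handle_alert(src_ip, block_ip, notify_team):
    now = time.time()
    hits = [t for t in recent_alerts.get(src_ip, []) if now - t < WINDOW_SECONDS]
    hits.append(now)
    recent_alerts[src_ip] = hits
    if len(hits) > ALERT_LIMIT:
        block_ip(src_ip)  # e.g. push a deny rule; a human reviews afterwards
        notify_team(f"Auto-blocked {src_ip}: {len(hits)} alerts in {WINDOW_SECONDS}s")
```

Simple rules like this buy time against commodity attacks, but a machine-learning adversary will treat them as just another pattern to learn its way around.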

The most effective solution is likely to be herd immunity. Since AI-led attacks will often go after similar organisations in order to keep learning, enterprises in the same industry can adopt “security-as-a-service” clouds that roll out countermeasures across an entire matrix of organisations when one is hit. And unlike the cybercriminals, the defenders have one significant advantage: information sharing. By sharing anonymised information about breaches and vulnerabilities between members, and encouraging a culture of collaboration rather than isolation, security clouds can gain far more intelligence than any cybercrime AI working alone – putting them one step ahead.
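One plausible mechanism for that kind of anonymised sharing – a sketch only, since real services differ – is for members to publish salted hashes of attack indicators, letting peers check for overlap without exposing raw internal data:

```python
# Sketch: share salted hashes of indicators (IPs, domains, file hashes) so
# members can spot overlap without revealing the raw data. How the community
# salt and the shared feed are distributed is assumed, not specified.
import hashlib

def anonymise_indicator(indicator: str, community_salt: bytes) -> str:
    """Hash an indicator with a salt shared across the security community."""
    return hashlib.sha256(community_salt + indicator.encode()).hexdigest()

def known_to_peers(my_indicators, shared_feed, community_salt):
    """Return which of our indicators other members have already reported."""
    feed = set(shared_feed)  # hashes published by peer organisations
    return {i for i in my_indicators
            if anonymise_indicator(i, community_salt) in feed}
```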