Cop & Robber: The two faces of AI in Cybersecurity

by Bogdan Botezatu, Senior E-Threat Analyst.

In cybersecurity, the ability to adapt to new and complex challenges is critical. Innovation in our field has made our work as cybersecurity professionals easier but has also produced never-before-seen threats.

Take, for example, Artificial Intelligence (AI) and machine learning capabilities – technologies that continue to grow at unprecedented rates. These technologies bring with them many useful applications for cyber defence, such as image analysis and machine translation, which help combat the spread of cybercrime across our devices.

These tools mean security professionals no longer have to spend time on the laborious work of large-scale data analysis and pattern recognition. Instead, the technology takes on these duties, leaving us free to focus on other areas of work.

But as with any great tool, there exists the very real danger of it falling into the wrong hands. The same advancements being celebrated in AI could be turned around and used to attack systems maliciously. The more universal AI becomes, the higher the risk.

If AI is deployed for malicious purposes, the resulting rise in theft, spear-phishing attacks and intelligent viruses could be catastrophic. Yet despite the risk it poses, the scale of this challenge is still not fully understood.

Fighting the good fight

AI isn’t just a technology of the future – it is already in use across the cybersecurity industry. This is because it has the capacity to simplify the detection and reaction process at scale.

This is particularly true of machine learning, which is used in cybersecurity to predict behaviours. To use Bitdefender as an example, our security solution includes machine learning technologies designed specifically to detect malicious files by contrasting them with known-clean ones. This technology is constantly on the prowl, learning user behaviour and hunting for anomalies.
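As a rough illustration of the idea – and emphatically not Bitdefender's actual engine – the sketch below uses scikit-learn to learn a boundary between known-malicious and known-clean files from a handful of hypothetical static features, then scores an unseen file:

```python
"""Illustrative sketch of supervised malware classification.

The feature values are hypothetical stand-ins (file size in KB,
byte entropy, count of suspicious API imports); real products use
far richer features and far more data.
"""
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: one row per file.
X_train = [
    [120, 7.9, 14],   # packed, import-heavy file -> malicious
    [640, 7.6, 22],   # malicious
    [300, 5.1, 1],    # ordinary application -> clean
    [80,  4.8, 0],    # clean
]
y_train = [1, 1, 0, 0]  # 1 = malicious, 0 = clean

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a new, unseen file by the same (hypothetical) features.
new_file = [[150, 7.8, 11]]
print("P(malicious) =", clf.predict_proba(new_file)[0][1])
```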

A good example of AI best practice can be seen in the detection of financial fraud. Recently, I purchased a ticket for my partner to accompany me on a pre-arranged work trip, and seconds after the purchase I received a phone call from my bank. The bank had noticed that the purchase fell outside my regular shopping behaviour: so what was the deal? Was it fraud?

The AI systems the bank had in place recognised the transaction as symptomatic of fraudulent behaviour and triggered an immediate warning to the right people. This is the exciting potential of modern cybersecurity: an environment where the process of recognising, reacting to and ultimately preventing fraud is instantaneous.
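To make the concept concrete, here is a hedged sketch of how such transaction monitoring might work in principle – not how any particular bank implements it. It fits an IsolationForest (scikit-learn) on a customer's hypothetical purchase history and flags a transaction that falls outside the learned pattern:

```python
"""Hedged sketch of transaction anomaly detection.

Feature columns are hypothetical: [amount_eur, merchant_category,
hour_of_day]. Transactions the model marks as outliers (-1) would
be escalated for review or a customer call.
"""
from sklearn.ensemble import IsolationForest

# Hypothetical purchase history for one customer.
history = [
    [12.50, 1, 8], [45.00, 2, 13], [30.20, 2, 12],
    [9.99,  1, 9], [52.10, 2, 14], [28.00, 1, 8],
]

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(history)

# A sudden travel-ticket purchase: new category, unusual amount and time.
ticket = [[480.00, 7, 22]]
if detector.predict(ticket)[0] == -1:
    print("Flagged: outside regular shopping behaviour -> call customer")
```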

Evil intentions

However, amidst this excitement there remains genuine concern. Criminals could use these AI-fuelled security solutions as a benchmark against their own creations, strengthening their attacks.

It’s a process cybercriminals have used against new technologies in the past: get a sample, test whether the security solution detects it, and engage in a process of tweaking and re-tweaking until the solution no longer detects anything.
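The feedback loop itself is simple enough to illustrate conceptually. In the sketch below, scan() and mutate() are hypothetical placeholders standing in for a security product's verdict and an attacker's small modifications; the point is the loop, and it is precisely why detection models must be retrained continuously:

```python
"""Conceptual illustration of the tweak-and-retest evasion loop.

scan() and mutate() are toy placeholders, not a working tool: scan()
mimics a detector's verdict, mutate() a small behaviour-preserving
change to the sample.
"""

def scan(sample: str) -> bool:
    """Stand-in for a security product's verdict (True = detected)."""
    return "known_bad_marker" in sample

def mutate(sample: str) -> str:
    """Stand-in for a small, behaviour-preserving modification."""
    return sample.replace("known_bad_marker", "kn0wn_bad_marker", 1)

sample = "payload with known_bad_marker"
attempts = 0
while scan(sample) and attempts < 10:
    sample = mutate(sample)   # tweak...
    attempts += 1             # ...and re-test against the detector

print("Detected" if scan(sample) else f"Evaded after {attempts} tweak(s)")
```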

Although no confirmed example of AI being used criminally exists yet, the sheer growth in AI adoption suggests cyber-attacks utilising machine learning are all but inevitable. A recent survey of experts attending the Black Hat USA 2017 conference found 62 per cent of respondents believed AI will be used for attacks within the next 12 months.

Recent breakthroughs have also produced some frightening examples of AI's potential in offensive applications. Researchers at ZeroFox recently showcased a fully automated spear-phishing system that could craft tailored tweets on Twitter based simply on a user's demonstrated interests, driving clicks to malicious files.

The strengthening of spear-phishing attacks is a particularly troubling aspect of AI-assisted cybercrime. Spear-phishing traditionally involves an extensive amount of personal research and data collection to pinpoint a victim within specific networks. It is a time-intensive activity: identifying targets and generating contextually specific messaging to commit the fraud. But as the ZeroFox example highlights, AI allows a criminal to cut through this process of data collection and bombard millions of people with tailored messages at all times of the day.

Cat versus cat

Cybersecurity is often described as a game of cat and mouse, but in reality it is a game of cat and cat. In modern times, our industry has assumed a more reactive role: ready to respond when bad things happen, but never fully having the upper hand. Technologies like machine learning have finally tilted this balance in favour of the good guys, but at a moment's notice the pendulum can swing back.

It is undeniable that AI wields huge power. From my experience, hackers don't waste time and very rarely lose momentum. Although we don't yet have a precise example of malicious AI to point to, the threat is very real. It is imperative that we in the industry stay prepared for any developments.
