Why businesses should be worried about the weaponisation of AI

By Robin Schmitt, General Manager, Neustar APAC

Cyber threats are a constantly moving target, with new vulnerabilities discovered at an ever-increasing pace. Organisations worldwide, regardless of size, receive tens of thousands of security alerts from their monitoring systems every day. More than 30 percent of banks, for example, get over 200,000 security alerts a day about possible attacks, according to research firm Ovum.

Once a vulnerability is disclosed, the patching process begins, but systems remain exposed until it is complete. As soon as IT deploys a new technology to counter a threat, the threat changes; the landscape evolves at such a rapid rate that the ability to respond swiftly to those shifts is critical.

From both a defence and an offence perspective, the process described above could play out in milliseconds once artificial intelligence (AI) becomes a tool for hackers and defence teams alike. The weaponisation of AI is now widely predicted to be one of the biggest cyber security threats this year. In fact, 62 percent of security professionals believe AI will be used as a weapon in cyber-attacks within the next 12 months.

With this in mind, some Australian companies are exploring how they can use AI to strengthen cyber security. For example, the Commonwealth Bank announced in December 2016 that it was developing AI to assist with cyber security, fraud detection and regulatory compliance. The bank is now using machine learning technology to help make sense of large sets of undefined data and alert management to areas requiring attention.
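To make that concrete, the sketch below shows one common pattern for this kind of ML-assisted triage: an unsupervised model scores each alert by how unusual its features look, so analysts can review the outliers first. It is a minimal illustration using scikit-learn's IsolationForest with hypothetical feature values; an assumption-laden example, not a description of the bank's actual system.

```python
# Minimal sketch of ML-assisted alert triage (illustrative only).
# An IsolationForest scores each alert by how anomalous its features
# look, so the most unusual alerts can be reviewed first.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical numeric features per alert: bytes transferred, distinct
# destination hosts contacted, and failed logins in the same window.
alerts = np.array([
    [5_000,    2,  0],
    [4_800,    3,  1],
    [5_200,    2,  0],
    [980_000, 41, 27],  # the kind of outlier an analyst should see first
])

model = IsolationForest(random_state=0).fit(alerts)
scores = model.decision_function(alerts)  # lower score = more anomalous

# Surface the most anomalous alerts at the top of the queue.
for rank, idx in enumerate(np.argsort(scores), start=1):
    print(f"#{rank}: alert {idx} (score {scores[idx]:.3f})")
```

The design point is the ranking, not the particular model: any scoring method that pushes unusual events to the top of an analyst's queue serves the same purpose.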

For hackers, AI provides the perfect tool for scale and efficiency, as it can make automated decisions about who, what, when and where to attack. AI techniques can already be used to craft personalised phishing attacks simply by collecting information on targets from social media and other publicly available sources.

In fact, US security firm ZeroFOX recently ran an experiment to test whether AI would be more successful than a human at launching a phishing attack. The firm used AI to monitor the behaviour of social media users, then create and launch its own phishing bait.

The AI, named SNAP_R, was six times more effective than the human at getting Twitter users to click on its malicious links. SNAP_R delivered spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, luring 275 victims. The human managed 1.075 tweets a minute and lured just 49 users.

Defending against AI-launched attacks

The first step is to understand what you're trying to protect. Once you do, you can put the appropriate controls in place: threat and vulnerability management, patch management, identifying and encrypting important data, and maintaining visibility into the environment as a whole. The key factor is the ability to change course rapidly.

Having clearly defined requirements for procedures and processes is critical. Even the most advanced technology in the world is only as good as the process it is modelling. Technology serves to augment those procedures; it is not a replacement.

Moreover, it's important for an organisation to know what's normal for its environment; a lack of context is a challenge for most companies. A good understanding of your assets and how they communicate and interact provides that context. Once it is established, it becomes easier to isolate events that aren't normal and investigate them. Security and governance isn't something you do on a quarterly basis; it's an everyday process.
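As a simple illustration of that baselining idea, the sketch below flags a day's outbound traffic for a host when it deviates sharply from that host's historical baseline. The figures and the three-sigma threshold are assumptions chosen for the example, not a recommendation.

```python
# A minimal sketch of "knowing what's normal": build a baseline of a
# host's daily outbound traffic, then flag days that deviate sharply.
from statistics import mean, stdev

baseline_mb = [120, 135, 128, 110, 142, 125, 131]  # hypothetical history
mu, sigma = mean(baseline_mb), stdev(baseline_mb)

def is_unusual(todays_mb: float, threshold: float = 3.0) -> bool:
    """Flag traffic more than `threshold` standard deviations from normal."""
    return abs(todays_mb - mu) > threshold * sigma

print(is_unusual(127))  # False: consistent with the baseline
print(is_unusual(900))  # True: an event worth investigating
```

Real environments baseline many signals at once, but the principle is the same: establish what normal looks like, then investigate the deviations.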

When defences are strong, criminals change tactics, and it's usually the weakest link in the chain that receives their attention. We can assume that as technological capabilities keep expanding, so will the methods used by malicious actors. To mitigate that, organisations should focus on building a solid foundation for governance, understanding their assets, maintaining clear visibility of what is normal, and acknowledging that any technology is only as good as its processes and procedures.

AI has the potential to be exploited equally by attackers and defenders in a game of cat and mouse. Not only can AI drive attacks rapidly, it can also change tactics and strategy just as fast, so AI-driven defences must respond equally quickly. The key to this game is knowing what is normal and then identifying behaviours that are irregular or unusual.
