How AI smarts are changing the cyber-security struggle

By Mark Sinclair, ANZ Regional Director, WatchGuard Technologies

Frequently referred to as the catalyst for the fourth industrial revolution, Artificial Intelligence is up-ending the way industries and organisations operate in Australia and around the world.

Gone are the days when an event such as the defeat of legendary chess grandmaster Garry Kasparov by IBM supercomputer Deep Blue would make world headlines.

It’s just two decades since that match and, in the interim, AI has become embedded in everyday life. Apps such as Uber and Lyft use machine learning to match drivers and passengers, predict demand and estimate travel times, while chatbots and virtual assistants deliver first-line customer service and support for many brands and organisations.

Research from management consultancy PwC suggests global GDP could rise by 14 per cent by 2030 as a result of AI adoption. That represents an additional $15.7 trillion of output and opportunity for companies and organisations that incorporate the technology into their operations.

Smarter and sneakier – cyber-crime’s embrace of AI

It’s not only businesses and governments that are alive to the possibilities of AI. Experts have warned we should prepare for cyber-criminals to begin harnessing its power to boost the effectiveness of their efforts to dupe and defraud individuals and businesses.

There are a range of ways they might do so.

Bypassing CAPTCHA systems

CAPTCHA tools have become ubiquitous in recent years as organisations push back against bots programmed to fill in contact forms automatically in order to gather data for spam campaigns.

Presenting visitors to a site with a puzzle or task which only a human being is thought to be capable of completing – retyping a distorted alpha-numeric string or clicking on photographs which include images of bridges, for example – has thus far proved an effective means for businesses to determine whether they’re being contacted by a person or a machine.

Research from Columbia University suggests the measure may soon become useless: researchers there revealed they were able to get past Google’s reCAPTCHA system 98 per cent of the time using AI techniques.
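Under the hood, a text-based CAPTCHA is simply a character-recognition problem, and that is a problem machine-learning models excel at. The snippet below is a toy sketch of the idea, assuming Python with Pillow, NumPy and scikit-learn installed: it generates synthetic distorted digits and trains an off-the-shelf classifier to read them. It is purely illustrative and bears no relation to the Columbia team’s far more sophisticated system.

```python
# Toy sketch: treat a distorted-text CAPTCHA as an image-classification task.
import random
import numpy as np
from PIL import Image, ImageDraw
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def render_digit(digit: int, size: int = 28) -> np.ndarray:
    """Draw one digit with random rotation and pixel noise (a toy CAPTCHA)."""
    img = Image.new("L", (size, size), color=255)
    ImageDraw.Draw(img).text((8, 4), str(digit), fill=0)
    img = img.rotate(random.uniform(-30, 30), fillcolor=255)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    arr += np.random.normal(0, 0.15, arr.shape)  # add speckle noise
    return arr.clip(0, 1).ravel()

# Build a labelled dataset of distorted digits, then train a classifier.
labels = np.array([i % 10 for i in range(3000)])
X = np.stack([render_digit(d) for d in labels])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=50, random_state=0)
clf.fit(X_train, y_train)
print(f"Accuracy on unseen distorted digits: {clf.score(X_test, y_test):.0%}")
```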

Catching more and bigger phish

Australians lost $340 million to scammers in 2017, much of it as a result of technology-driven deceit, according to the Australian Competition and Consumer Commission’s annual Targeting scams report.

Phishing attacks, in which scammers impersonate legitimate businesses or organisations to trick people into revealing personal information such as bank account and credit card details, have become an everyday occurrence. Organisations have responded with tools and training programs that teach employees to root out imposters.

These measures may become less effective should scammers begin employing AI to refine their ruses. The technology makes it possible for them to sift through vast amounts of data about their targets and craft more personalised and persuasive messages.

Security researchers at ZeroFox have used AI to analyse the social media histories of phishing targets. The profiles they created, based on the contents of individuals’ tweets, were used to develop personalised phishing emails that proved significantly more effective than generic equivalents: targets clicked on the malicious links 30 per cent of the time, compared with the five to 15 per cent success rate of regular phishing campaigns.
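The profiling step itself is trivial to automate. As a toy illustration, assuming Python with scikit-learn, the sketch below ranks a handful of invented social media posts by TF-IDF weight to surface a target’s apparent interests; an attacker could use exactly this kind of signal to choose a convincing lure topic. It is not ZeroFox’s tool, merely a hint at how little effort the data-sifting requires.

```python
# Toy sketch: surface a target's interests from their public posts.
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented sample posts; no real person or dataset is implied.
posts = [
    "Great ride on the coastal trail this morning, new personal best",
    "Anyone got tips for the City2Surf? Training starts this week",
    "Loving the new trail shoes, grip is unreal on wet rock",
]

vec = TfidfVectorizer(stop_words="english")
scores = vec.fit_transform(posts).sum(axis=0).A1  # total weight per term
ranked = sorted(zip(vec.get_feature_names_out(), scores), key=lambda t: -t[1])
print("Likely interests:", [term for term, _ in ranked[:5]])
# A lure themed around running events would land far better than a generic one.
```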

Developing more evasive malware

Scripts and toolkits to develop and distribute malware have always been hackers’ stock in trade. As cyber-defences have become better at detecting them, hackers are upping the ante by employing AI to make their malware sneakier and more slippery.

Striking back smarter

It’s not just cyber-criminals who are jumping on the AI bandwagon. Machine learning has been a catalyst for the development of a range of new-generation security measures which are helping security experts gain a better understanding of the way hackers operate.

They include the deployment of honeypots: single hosts which are left intentionally vulnerable. These can be used to attract attackers, both to divert them from legitimate network hosts and to record their behaviour as they interact with what they believe to be compromised devices. In the same vein, a honeynet is a network of hosts which present as vulnerable, simulating a legitimate network environment.
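At its simplest, a low-interaction honeypot is little more than a listener that records who comes knocking. The sketch below, using only Python’s standard library, logs every connection attempt to an otherwise unused port; production honeypots such as the open-source Cowrie project go much further, emulating complete SSH and Telnet services to keep attackers engaged.

```python
# Minimal low-interaction honeypot: accept connections on a port nothing
# legitimate should use, and log the source address and any opening bytes.
import datetime
import socket

HONEYPOT_PORT = 2222  # arbitrary, deliberately unused; any hit is suspect

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", HONEYPOT_PORT))
    srv.listen()
    print(f"Honeypot listening on port {HONEYPOT_PORT}")
    while True:
        conn, (addr, port) = srv.accept()
        conn.settimeout(5)
        try:
            first_bytes = conn.recv(1024)  # capture any banner or probe data
        except socket.timeout:
            first_bytes = b""
        finally:
            conn.close()
        print(f"{datetime.datetime.now().isoformat()} "
              f"connection from {addr}:{port}, sent: {first_bytes!r}")
```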

Another measure, sandboxing, allows malware to run in an isolated and protected environment so its behaviour can be tracked and analysed.
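The principle can be sketched in a few lines, assuming a Linux host with Python: run the untrusted program (the path below is hypothetical) as a child process with hard CPU and memory limits and a timeout, then inspect what it did. Dedicated tools such as the open-source Cuckoo Sandbox instead detonate samples inside instrumented virtual machines, which is the only safe way to handle real malware.

```python
# Bare-bones isolation sketch for Linux: run a suspicious program under
# strict resource limits and a timeout, and capture what it prints.
# NOT a substitute for a real sandbox; genuine malware needs a VM.
import resource
import subprocess

def limit_resources():
    # Applied in the child just before exec: cap CPU time and memory so a
    # runaway or hostile sample cannot exhaust the analysis host.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))              # 5 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20,) * 2)   # 256 MB

result = subprocess.run(
    ["/path/to/suspicious_sample"],  # hypothetical sample under analysis
    preexec_fn=limit_resources,
    capture_output=True,
    timeout=30,                      # kill it if it runs too long
)
print("exit code:", result.returncode)
print("observed output:", result.stdout[:500])
```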

These deception technologies can help security professionals stay a step ahead, by providing them with a bird’s eye view of cyber-criminals’ modus operandi.

Being alert to risk

Hacking and cyber-crime are here to stay. AI is likely to up the ante in the ongoing struggle between organisations and individuals who wish to ensure data privacy and integrity and those who seek to compromise them, for malice or profit. As the latter get smarter, the need to remain vigilant has never been greater.
