CIO

Security industry welcomes Budget’s AI support, but BEC-riddled Aussie businesses are more ambivalent

Significant AI investment encourages security vendors, but many Australian companies still doubt the technology's worth despite its promise in stamping out non-malware cybercrime

The government’s $29.9m artificial intelligence (AI)/machine learning (ML) Budget cash splash may reflect a desire to capitalise on the sector’s opportunities, but getting broad buy-in may prove difficult given that Australian companies are more sceptical of artificial intelligence’s potential than companies in any other Asia-Pacific country.

Announced as part of the Australian Technology and Science Growth Plan (ATSGP) – a key component of the $2.4 billion announced for research, science and technology capabilities – the investment will “[support] economic growth and the productivity of Australian businesses” through funding of AI/ML projects within the Cooperative Research Centres Program; funding for AI and ML-focused PhD scholarships and school-related learning “to address skill gaps”; and the development of a national AI Ethics Framework, technology and standards roadmaps “to identify global opportunities and guide future investments.”

Despite industry and government enthusiasm, however, AI still faces hurdles in building general business awareness of its importance.

A recent survey by Seagate found that 12 percent of Australian companies don’t think their organisations need to adopt AI – nearly twice the 7 percent APAC average. And 11 percent of Australian respondents don’t think AI will drive improvements in productivity, compared with 4 percent across APAC generally.

The security industry has long felt differently, with AI driving investment priorities and macro trends. Another recent study found that 62 percent of respondents believe the main reason they can’t keep their security incident response plans relevant and effective is a failure to invest in AI.

Security a frontrunner for AI

AI has found early support in security log monitoring, which remains an area where humans have long struggled to keep up with a flood of often spurious security alerts. AI’s ability to prioritise and correlate these events has made threat analysis a natural market for the technology, but it is also showing promise in helping businesses catch up with a business email compromise (BEC) industry that is netting cybercriminals billions per year.
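The prioritise-and-correlate step can be sketched in a few lines. This is a hypothetical stand-in, not any vendor's implementation: the hosts, alert types, and fixed scoring rule below are all assumptions standing in for weights a trained model would learn from data.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical sample alerts; real platforms ingest thousands per day.
ALERTS = [
    {"host": "db01", "type": "failed_login", "time": datetime(2018, 5, 9, 2, 0)},
    {"host": "db01", "type": "privilege_escalation", "time": datetime(2018, 5, 9, 2, 3)},
    {"host": "web02", "type": "port_scan", "time": datetime(2018, 5, 9, 14, 0)},
]

def correlate(alerts, window=timedelta(minutes=10)):
    """Group alerts by host; boost hosts with several distinct alert
    types clustered inside the time window (a likely real incident)."""
    by_host = defaultdict(list)
    for alert in alerts:
        by_host[alert["host"]].append(alert)
    prioritised = []
    for host, group in by_host.items():
        group.sort(key=lambda a: a["time"])
        span = group[-1]["time"] - group[0]["time"]
        types = {a["type"] for a in group}
        # Fixed heuristic weight: doubled when multiple alerts cluster tightly.
        score = len(types) * (2 if len(group) > 1 and span <= window else 1)
        prioritised.append((score, host, sorted(types)))
    return sorted(prioritised, reverse=True)

for score, host, types in correlate(ALERTS):
    print(score, host, types)
```

Here the two correlated alerts on `db01` outrank the lone port scan on `web02`, which is exactly the triage decision that overwhelmed analysts struggle to make by hand at scale.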

Recent investigations by Palo Alto Networks’ Unit 42 cybercrime unit blamed more than 300 different Nigerian fraud groups for an average 17,600 attacks per month during 2017 – a 45 percent increase over 2016.

The trend – which Unit 42 codenamed ‘SilverTerrier’ and has documented over several years – highlights the increasingly sophisticated use of 15 separate commodity malware tools in support of BEC schemes.

BEC fraudsters took $20m from Australian businesses last year, according to the Australian Cyber Security Centre (ACSC), and the scams’ growing success is making them more prevalent over time.

Trend Micro, for one, has turned to AI to help with the problem – which evades conventional email filters because BEC mails generally lack malware attachments and rely on social engineering using otherwise-innocuous words.

AI techniques, such as the Writing Style DNA method described by Trend Micro, allow email filters to analyse messages for phrases and words that convey key BEC characteristics – including a sense of urgency, a request for action, the threat of financial consequences, and over-familiar language designed to lull victims into a false sense of security.

Broad application of AI algorithms to known BEC samples will reveal common writing-style techniques that can then be used to help ever-smarter email filters ferret out potentially problematic messages. That’s why it has attracted the attention of Australian firms like MailGuard, which like most security vendors has its eye on AI and stands to potentially benefit from the government’s AI market-development fund.
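The characteristics described above – urgency, action requests, financial pressure, over-familiarity – can be illustrated with a toy scoring function. To be clear, this is not Writing Style DNA or any vendor's method: the signal groups and regex patterns below are invented for illustration, where a production system would learn far richer features from thousands of labelled BEC samples.

```python
import re

# Hypothetical signal groups; real systems derive these from training data.
BEC_SIGNALS = {
    "urgency": [r"\burgent\b", r"\bimmediately\b", r"\bas soon as possible\b"],
    "action_request": [r"\bwire transfer\b", r"\bprocess (a|the) payment\b"],
    "financial_pressure": [r"\bpenalt(y|ies)\b", r"\boverdue\b"],
    "familiarity": [r"\bthanks so much\b", r"\bi'm counting on you\b"],
}

def bec_score(message: str) -> float:
    """Return the fraction of BEC characteristic groups (0.0-1.0)
    that the message exhibits at least once."""
    text = message.lower()
    hits = sum(
        any(re.search(pattern, text) for pattern in patterns)
        for patterns in BEC_SIGNALS.values()
    )
    return hits / len(BEC_SIGNALS)

msg = ("Hi, I need you to process the payment immediately - "
       "it's urgent and I'm counting on you.")
print(bec_score(msg))  # hits 3 of 4 signal groups -> 0.75
```

A filter could quarantine or flag messages above a threshold. The point of applying ML rather than a static word list like this one is that the model keeps pace as fraudsters vary their phrasing.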

Given this enthusiasm, the finding that many Australian companies are less than enthusiastic about AI’s potential is striking. Yet this may also be due to a lack of clear direction around AI policy – which the government’s announcement aims to address. Fully 63 percent of respondents to the Seagate survey said they struggled to know where and how to start their organisation’s development and implementation of AI.

No matter how ambivalent Australian users feel about AI, the industry is continuing to push hard towards it – and security vendors have taken the government’s announcement as affirmation of the trend.

“As Australia continues to push forward in its digital ambitions, cybersecurity and data protection are key issues that need to be addressed,” LogRhythm APAC sales director Simon Howe said in a statement. “While the Federal Government boosts its investments in cyber security, continual efforts and investments in these areas are pertinent.”

“As future cyberwarfare will be waged between machines, it is critical to invest in technologies such as AI. It is also important to nurture an environment where the best talents and ideas can be developed.”