Data security threats from malicious insiders are already recognised as a major problem for businesses – but an IBM Australia-built proof of concept could go a long way towards solving it, using an artificial intelligence (AI) based approach that can spot disgruntled workers before they act.
The tool grew out of an AI-themed internal hackathon run at IBM’s Gold Coast-based Australian Security Development Lab, where developers are encouraged to come up with novel solutions.
A team of IBM Security engineers realised that businesses are collecting masses of data about network performance and user behaviour, QRadar flows product owner Holly Wright told CSO Australia, and set about looking for ways this information could be meaningfully paired with other data and analysed to give greater insight about users’ state of mind.
“QRadar gives us deep visibility into the messages, views and emails going across the network,” explained Wright, who shared details of the project with attendees at AISA's recent Australian Cyber Conference.
“We decided to look at users from a risk perspective. We’re essentially leveraging that information that’s on the network, that nobody has really done anything with.”
The proof of concept leaned on QRadar SIEM to collect operational data and QRadar Network Insights for deep packet inspection, then tapped into IBM’s evolving Watson set of AI-based services – including Personality Insights, which combs the data for particular trigger words and semantic innuendo, and Watson Tone Analyzer, which evaluates the emotional tones in written text.
By collecting email traffic as it traverses the network and feeding it into the AI engine, the solution is able to derive a risk score for each employee that changes based on the content and semantic tone of their writing.
Once scores go above or below set thresholds, alarms can be raised to escalate the situation to a human analyst for potential follow-up or restriction of data privileges.
“If someone is writing a bunch of angry messages and downloading a bunch of files, we can pick up on it,” Wright explained. “That’s a massively common type of insider threat attack, and this gives you that visibility into what the content is without having to read every email.”
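The approach Wright describes – combining the tone of a user's messages with behavioural signals such as bulk file downloads into a single risk score, and escalating when it crosses a threshold – can be sketched in simplified form. The lexicon, weights, decay factor, and threshold below are all hypothetical illustrations, not IBM's implementation (which uses Watson services rather than keyword matching):

```python
# Toy sketch of a per-employee risk score combining message tone with
# file-download activity, as in the approach described above.
# All names, weights and thresholds here are illustrative assumptions.
from dataclasses import dataclass, field

NEGATIVE_TERMS = {"angry", "unfair", "quit", "hate", "revenge"}  # toy tone lexicon
ALERT_THRESHOLD = 5.0   # score above which a human analyst is alerted
DECAY = 0.9             # older observations contribute less over time
DOWNLOAD_WEIGHT = 0.5   # contribution of each bulk file download

@dataclass
class EmployeeRisk:
    score: float = 0.0
    alerts: list = field(default_factory=list)

    def observe(self, text: str, files_downloaded: int = 0) -> None:
        """Update the running risk score from one captured message
        plus any file downloads seen in the same window."""
        hits = sum(1 for word in text.lower().split()
                   if word.strip(".,!?") in NEGATIVE_TERMS)
        self.score = self.score * DECAY + hits + DOWNLOAD_WEIGHT * files_downloaded
        if self.score > ALERT_THRESHOLD:
            self.alerts.append("escalate to human analyst")

# Angry messages paired with heavy downloads push the score past the threshold.
emp = EmployeeRisk()
for msg in ["This place is unfair, I hate it", "I should just quit"] * 3:
    emp.observe(msg, files_downloaded=4)
print(round(emp.score, 2), len(emp.alerts))
```

A real deployment would replace the keyword lexicon with Tone Analyzer's emotion scores and feed in network telemetry from the SIEM, but the shape of the logic – weighted signals, time decay, threshold-based escalation – stays the same.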
Additional modules enable scanning of Facebook and other social-media posts, providing even more insight into a particular employee’s state of mind.
User and entity behaviour analytics (UEBA) tools have become increasingly common as businesses search for ways to rein in employee behaviour that can turn malicious with little warning. UEBA capabilities are increasingly being integrated with SIEM systems, Gartner has flagged, with AI and machine-learning techniques providing more seamless analysis of user activities.
Fully 34 percent of breaches analysed in Verizon’s 2019 data breach investigations report (DBIR) involved internal actors, with misuse by authorised users implicated in 15 percent of cases.
Healthcare organisations and educational institutions were hit particularly hard by internal actors, with 59 percent and 45 percent of threat actors, respectively, classified as internal.
“With internal actors, the main problem is that they have already been granted access to your systems in order to do their jobs,” the report’s authors note. “Effectively monitoring and flagging unusual and/or inappropriate access to data that is not necessary for valid business use… is a matter of real concern.”
“Across all industries, internal actor breaches have been more difficult to detect than those involving external actors.”
Availability of continuously updated risk scores also gives human resources managers another tool for identifying employees who may be in need of recognition or extra support.
Yet the techniques are most obviously promising as a way of resolving visibility gaps around potential or ongoing insider cybersecurity threats.
“It brings down the time to detect when something is going wrong,” Wright said, “so you can respond to it before it gets worse.”