When chatbots could become a real security threat

Whether a chatbot recently passed the Turing Test for artificial intelligence is debatable, but there's little doubt that the growing sophistication of such conversation programs could one day make them a threat to corporate security, an expert says.

The better chatbots get at imitating human conversation through text or audio, the more useful they will become to criminals looking to bilk victims of their savings or steal the credentials of a well-placed employee of an organization, said Kyle Adams, chief software architect for Juniper Networks' intrusion prevention system WebApp Secure.

"Everyday someone is making an improvement to these things and they are getting better," Adams said Thursday.

The closer chatbots come to convincingly imitating the average middle-class American in casual conversation, the greater the threat they will pose to U.S. companies and ordinary people.

Some of the techniques criminals use today to snare victims, such as phishing and social engineering, could become much more effective through the use of the programs, Adams said.

For example, scammers could use them to strike up an automated conversation with an office worker via email to build trust before the criminals send a message with a malicious link.

"They could dramatically increase the number of people they get to actually click the link at the end of that trail," Adams said.

Chatbots could also be useful to spammers sending the familiar email that purports to come from a wealthy foreigner seeking assistance to move millions of dollars from his homeland.

The swindlers often have to correspond with respondents through several emails before they find the truly gullible ones willing to send money for a share of the fake transfer. A specially designed chatbot could help vet respondents.

"The phishers could cast a much wider net and narrow that down to a very small list of good targets within a couple days with very little effort," Adams said.

Chatbots could also make other nefarious techniques more efficient, such as convincing a company's support staff to divulge an employee's credentials for a service or the corporate network.

The problem an attacker faces with this type of telephone-based scam is the risk of having the call traced, Adams said. With a chatbot, the call could be placed from a device left anywhere, such as a coffee shop, with the conversation relayed to a remote server over public Wi-Fi.

For the chatbot to be effective, it would have to be programmed with knowledge of the person it is pretending to be. Such information is typically gathered today on social networks and other online communities and sources.

Finally, extortionists could use chatbots in a denial-of-service attack against a call center. Imagine a chatbot that's good enough to keep a customer service rep on the phone for just a few minutes. If enough bogus calls are made, they could prevent legitimate customers from reaching service reps.

"I actually think this could probably be one of the most devastating uses of chatbots," Adams said.

None of these scenarios would be possible using today's programs, but the technology is advancing.

In the two-day Turing Test last week, a program dubbed Eugene Goostman posed as a 13-year-old Ukrainian boy and supposedly tricked a third of its human judges into believing it was human.

The test, held at the Royal Society in London, drew plenty of critics who challenged the way it was conducted. Nevertheless, chatbots are improving, and they are likely to prove useful to the good guys and the bad guys alike.
