CIO

The Sorcerer's Apprentice: AI as an amplifier of human failings

The Sorcerer’s Apprentice was written by Goethe more than 200 years ago and was influenced by the Industrial Revolution, a time when hand weavers were railing against the introduction of the machines that would replace them. It’s against that backdrop that Cosive’s Jayne Naughton discussed machine learning on the final morning of AusCERT 2016.

Naughton’s rapid-fire tour of the world of AI and machine learning began with neural networks, in which inputs are processed to produce an output.

"What a machine learning algorithm does is try to find a line of best fit,” he says. By taking different variables, a computer system tries to work out all the possible outcomes and then determine the one that best answers the original question.

For example, if the temperature is a particular value and the wind is blowing at a particular speed, should I play golf?
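
The idea can be sketched in a few lines of Python. The weather data below is invented purely for illustration and is not from Naughton’s talk; here a logistic regression fits a linear decision boundary, a “line” separating days to play from days to stay home.

# Toy sketch: all numbers below are made up for illustration.
from sklearn.linear_model import LogisticRegression

# Features: [temperature in Celsius, wind speed in km/h]
X = [[28, 5], [30, 10], [22, 8], [18, 30],
     [15, 35], [25, 12], [10, 40], [27, 6]]
# Labels: 1 = play golf, 0 = stay home
y = [1, 1, 1, 0, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)

# Ask about today's conditions: 24 degrees and 15 km/h of wind.
print(model.predict([[24, 15]]))        # e.g. [1] -> play
print(model.predict_proba([[24, 15]]))  # how confident the model is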

Naughton says learning systems can be either supervised or unsupervised. Supervised systems place limits on how the system calculates its outcomes, and those limits can influence the results. In other words, our inherent biases can significantly influence the answers these systems give.
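
A minimal contrast in code, again with made-up numbers: in the supervised case we hand the system the answers, so the labels we choose (and any bias in them) shape what it learns; in the unsupervised case it has to find the groupings on its own.

# Supervised vs unsupervised, on the same tiny synthetic data set.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]

# Supervised: we supply the labels, and our labelling choices
# (including our biases) steer what the model learns to predict.
y = [0, 0, 0, 1, 1, 1]
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2, 2], [9, 9]]))  # follows the labels we gave it

# Unsupervised: no labels at all; the algorithm groups the points itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # clusters it discovered without our guidance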

The problem, says Naughton, is that we impose human models on machine-generated outcomes. We infer causation from mere correlation: Naughton showed a correlation between boat drownings and the marriage attrition rate in Kentucky. And we anthropomorphise systems, assigning them human characteristics when they don’t really apply.
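
The statistical trap is easy to reproduce. The figures below are invented, not the real Kentucky numbers, but they show how two unrelated series that happen to trend the same way will report a near-perfect correlation.

# Illustrative only: two made-up series that both drift downward.
import numpy as np

boat_drownings   = np.array([20, 18, 17, 15, 14, 12, 11, 10])
marriage_rate_ky = np.array([9.1, 8.8, 8.6, 8.3, 8.1, 7.9, 7.6, 7.4])

# Pearson correlation comes out close to 1.0, yet neither causes the other.
r = np.corrcoef(boat_drownings, marriage_rate_ky)[0, 1]
print(f"correlation: {r:.2f}")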

“There’s no human thought process going on,” says Naughton. “You never know where machine learning is stacking the odds.”

Naughton noted that facial recognition systems are trained on sample sets of faces. As a result, the systems struggle to recognise some ethnicities because those groups are under-represented in the data used to build the artificial intelligence.
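
A deliberately simplified sketch shows the mechanism. It uses synthetic feature vectors rather than real faces, and the geometry is engineered so the effect is visible, but the pattern is the one Naughton described: when one group dominates the training data, headline accuracy can look healthy while accuracy on the under-represented group falls away.

# Synthetic demonstration of training-data imbalance (not a real
# face recognition pipeline; the "features" are random numbers).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, centre):
    # Two classes of feature vectors clustered around a group-specific centre.
    X0 = rng.normal(centre, 1.0, size=(n, 2))
    X1 = rng.normal(centre + 2.5, 1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Group A: 500 training examples per class. Group B: only 10.
Xa, ya = make_group(500, centre=0.0)
Xb, yb = make_group(10, centre=6.0)

clf = LogisticRegression(max_iter=1000)
clf.fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: group B lags well behind.
Xa_test, ya_test = make_group(200, centre=0.0)
Xb_test, yb_test = make_group(200, centre=6.0)
print("group A accuracy:", accuracy_score(ya_test, clf.predict(Xa_test)))
print("group B accuracy:", accuracy_score(yb_test, clf.predict(Xb_test)))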

What are we worried about?

“Many people think Terminator is the standout AI threat,” says Naughton.

But Naughton says the real issue is that systems don’t need to become intelligent to cause harm; they just need to try enough options, quickly enough, that they eventually arrive at a correct decision.
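
A toy example of that brute-force principle, with a hypothetical four-digit PIN: there is no understanding involved, just the speed to exhaust every option.

# No intelligence required: blind enumeration finds the answer quickly.
import itertools

secret_pin = "7342"  # hypothetical target, chosen only for this example

for attempt, guess in enumerate(itertools.product("0123456789", repeat=4), 1):
    if "".join(guess) == secret_pin:
        print(f"found {secret_pin} after {attempt:,} attempts")
        break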

And such systems can make some significant errors. For example, one system identified someone from news agency Al Jazeera as an operative for two opposing terrorist organisations.

In simple terms, artificial intelligence and machine learning are only as effective as their programming.

When AI goes wrong

There are many applications, from military drones to self-driving trucks to understanding language. Microsoft’s AI bot, TayTweets, highlighted what can happen when language-analysis AI gets it wrong, and that was despite TayTweets having human supervisors.

And Naughton says many of the AI initiatives coming out of Silicon Valley are actually powered by low-paid people who answer questions in a uniform way.

There’s also a push to use AI in healthcare, such as IBM’s Watson system being used in cancer detection. However, there are some risks as governments give private companies access to our personal health information.

Furthermore, Naughton says the use of AI to support infosec efforts should not be seen as a silver bullet.

“If you do machine learning over all your security data, you’re doomed,” he says.

Put simply, the error rates mean adversaries will still be able to breach your systems.
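
Some back-of-the-envelope arithmetic shows why. Every number below is an assumption for the sake of illustration, not a figure from the talk, but the shape of the result holds: even a very accurate classifier produces an unmanageable number of false alarms at enterprise scale, and still lets some genuine attacks through.

# Assumed figures, purely illustrative.
events_per_day      = 10_000_000  # log lines, flows, emails and alerts
malicious_per_day   = 50          # genuinely hostile events hidden among them
false_positive_rate = 0.001       # 0.1% of benign events wrongly flagged
false_negative_rate = 0.01        # 1% of hostile events missed

benign_events  = events_per_day - malicious_per_day
false_alarms   = benign_events * false_positive_rate
missed_attacks = malicious_per_day * false_negative_rate

print(f"false alarms to triage per day: {false_alarms:,.0f}")   # ~10,000
print(f"real attacks missed per day:    {missed_attacks:.1f}")  # still non-zero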

AI could make things better

There are many positive applications of AI, says Naughton, from detecting rigged elections to assisting with automated crop dusting.

Superintelligence is also a potential future, but Naughton says we often get the perspective wrong: superintelligent systems will be used to understand the human brain, not the other way around. And AI systems are being used to evaluate other AI systems, and then to design new ones.

Where are we now?

"AI is an army of barely competent lab assistants,” says Naughton. AI systems are only as good as the data and algorithms they are built on.