New technologies generate new buzzwords. Cloud, fabric, bitcoin, blockchain, containers, microservices, and so on. Artificial intelligence (AI) as a technology and term has been around for decades, but only recently seems buzzworthy. Unfortunately, some vendors are associating the buzzword with their products without having any real AI. Or to be charitable, they stretch the limits of what can be considered AI. Maybe they don’t really understand what AI is, or maybe the marketing team cajoled them into it. Either way, buyers need to be aware that many of the products claiming to have AI don’t.
I’ve been reviewing more vendors than usual, and they all seem to think that putting the word AI in a product description will sell more of their product or service. When I ask them how they are using AI, I get gaps of silence or descriptions that sound a whole lot more like rules engines.
Vendors claiming to have AI when they clearly don’t are confusing customers. This frustrates the vendors that have done the hard work of building real AI into their products. If you are a consumer or purchaser of computer security products, it can only help you to understand what AI really means.
The difference between AI and rules-based engines
This topic came up recently when I was interviewing Yuri Frayman, CEO of Zenedge, and Laurent Gil, co-founder and chief product officer. I had put off the interview for a few weeks, and by the time we connected, Zenedge had been purchased by Oracle; because of acquisition rules concerning quiet periods, they couldn’t talk about Zenedge or their products.
I was bummed because Zenedge is a computer security company with real AI, and I told them so. This casual comment unleashed 30 minutes of like-minded banter centered on the frustration of companies falsely claiming to have AI. It was an energetic discussion, to say the least.
I asked Laurent and Yuri what they felt the key difference was between rules-based engines and AI, because many vendors with hundreds of rules feel they have accomplished some sort of near version of AI. Laurent’s response: “Rules-based engines are like signature-based antivirus (AV). They already know what to expect. You’ve got a bunch of researchers looking at what has happened in the past, and based on that they write a bunch of IF-THEN rules that identify known malware. Rules are only as good as your research and what the hacker community is doing that you know about. Hackers don’t share ahead of time, so you are always behind.”
He continued, “True AI is about the future. AI says, ‘I don’t know what this is, but we’ve seen something similar so we will flag it.’ Or, ‘We’ve never seen this before, it’s an anomaly, so we will flag it.’ The key difference between rules engines and AI is where they are focused. Rules are IF-THEN decisions based on past data. AI is all about recognizing anomalies simply because they are new. We are interested when the machine says, ‘I don’t know. I haven’t seen this before.’ This is when AI is the most powerful and useful.”
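The contrast Laurent describes can be sketched in a few lines. This is an illustrative toy, not any vendor’s implementation: the rules engine can only match hypothetical known-bad signatures from the past, while the anomaly check flags values that fall far outside the history it has seen (a simple z-score with an assumed 3-sigma threshold).

```python
# Hypothetical known-bad signature hashes, as a researcher would catalog them.
KNOWN_BAD_SIGNATURES = {"e99a18c428cb38d5f260853678922e03"}

def rules_engine(sample_hash: str) -> str:
    # IF-THEN logic: can only flag what is already in the rule set.
    if sample_hash in KNOWN_BAD_SIGNATURES:
        return "malicious"
    return "clean"  # anything unknown passes silently

def anomaly_detector(feature: float, history: list[float],
                     threshold: float = 3.0) -> str:
    # Flags anything far from what it has seen before, even with no signature.
    if len(history) < 2:
        return "unknown"  # not enough history to judge
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / (len(history) - 1)
    std = var ** 0.5 or 1e-9  # avoid divide-by-zero on constant history
    z = abs(feature - mean) / std
    return "anomalous" if z > threshold else "normal"
```

The rules engine returns "clean" for a brand-new sample no researcher has seen; the anomaly detector flags it precisely because it is unlike the past.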
Laurent offered, “A key way to tell the difference between AI and rules-based engines is that a rules-based engine will never improve on its own until someone updates the rules. AI improves its accuracy the more it is used. The more you use it the better it becomes. The adaptability of the model is what makes AI work.”
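Laurent’s self-improvement point can also be sketched: a model that updates its own statistics with every sample it scores, so its baseline tightens the more it is used, whereas a rule set stays frozen until a human edits it. A minimal sketch using Welford’s online mean/variance algorithm (the class name and threshold are my own illustration):

```python
class OnlineAnomalyModel:
    """Running mean/variance via Welford's algorithm: every observation
    refines the model, so it improves with use -- unlike a static rule set."""

    def __init__(self) -> None:
        self.n = 0        # samples seen so far
        self.mean = 0.0   # running mean
        self.m2 = 0.0     # running sum of squared deviations

    def update(self, x: float) -> None:
        # Each new sample adjusts the baseline; no human rule-writing needed.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomaly(self, x: float, threshold: float = 3.0) -> bool:
        if self.n < 2:
            return False  # too little history to judge anything
        std = (self.m2 / (self.n - 1)) ** 0.5 or 1e-9
        return abs(x - self.mean) / std > threshold
```

After a handful of `update()` calls the model can already flag an outlier it was never explicitly told about.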
Yuri strongly agreed, "Rules are basically in the past. The machine [AI] can predict the future."
I asked Yuri what one question he would ask, as a consumer, to help determine whether a vendor is being honest about using true AI. Yuri replied, “I would ask how they detect and handle zero-day attacks. They have no history. There are no rules you can write for the unexpected. The answer they give you will reveal whether their product is rules-based or AI-based.”
AI vs. AI is the future of hacking and defense
The next thing Yuri and Laurent said was surprising and gave me chills.
Yuri said, “The future of hacking [and defense] is machine versus machine. It sounds like Jules Verne’s stuff, but attackers are using AI to attack us, to get around our detection. It’s going to take machine learning and AI to fight back.”
Laurent added, “Bots, good and bad, are already using AI to act like humans. They are changing their behavior on the fly, using AI, to defeat protection and detection, so we need AI to fight back.”
These statements surprised me, but at the same time I understood them completely. They were right: it is machine versus machine. If it isn’t there yet, it won’t be long before that’s mostly what is going on between the good and the bad in the computer world.
The machine-versus-machine idea has been playing out in the rules-based world for a long time. For example, I’m a big fan of VirusTotal, a site where you can submit the hash of any file you have and find out if it is marked as malicious by any of the dozens of antivirus software programs. One antivirus program might miss something, but dozens rarely do, or not for long.
I love VirusTotal. Unfortunately, so do the malware writers. For a long time, they’ve automated the process of creating malware that is guaranteed to scan “clean” on VirusTotal. When it starts to get detected on VirusTotal, the malware auto-updates itself to avoid detection. This is machine versus machine in the rules world. Take the same concept and apply it to the AI-based world. It’s already starting to happen, and pretty soon it will be how malware and badness works in our online world.
To be clear, I’m not worried about Skynet becoming self-aware and a bunch of terminators running around exterminating humans, but we are entering a different, more sophisticated world of good versus evil online. We’re going to need better and more AI to defend ourselves.
Too many companies are claiming they have AI when what they really have is lots of rules and rules-based engines. AI is coming because we need it to fight the malicious AI that is being pointed our way. We just need to be clear about which companies have it now and which don’t.