Despite the spiel, we're still some decades from true anti-malware AI
Vendors stuff jargon into antivirus marketing mix
Opinion The cybersecurity industry is investing heavily in "machine learning" technologies in the hope of providing a more dynamic defence against malware. The practical upshot of this is that the delegates to the RSA Conference next week are likely to hear a lot about artificial intelligence in next-generation antivirus (NGAV) even though neither term is particularly well defined.
The need for improved defences is clear enough, driven both by the volume of malware variants pushed out by the bad guys and the stratospheric rise in ransomware. File-encrypting ransomware, such as Locky, has become a lucrative money spinner for crooks, particularly in the last year or so.
Cybercriminals have used malware of various types (banking trojans, spyware and so on, ad nauseam) to run scams since the turn of the century, if not earlier. The profit motive means that crooks have spent money testing their creations prior to release in a bid to outpace defences. Malware slingers don't even have to do this themselves, thanks to the availability of so-called crypting services that promise "fully undetectable" malware.
Releasing multiple variants of their nasties has also become standard practice among cybercrooks.
The security industry's response to this has been automation and cloud-based technologies. Anti-malware is long past reliance on signature detection alone. Whitelisting, heuristics (generic detection) and behaviour-based detection have all come into play as part of a multi-layered defence.
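The layering described above can be sketched in miniature. This is illustrative only: the hashes, marker strings and decision order are hypothetical stand-ins, and real engines implement each layer with vastly more sophistication.

```python
import hashlib

# Illustrative only: each layer is a trivial stand-in for what real
# anti-malware engines implement with far more sophistication.
KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}   # signature layer
WHITELISTED_HASHES = {"5d41402abc4b2a76b9719d911017c592"}  # whitelist layer
SUSPICIOUS_MARKERS = [b"CreateRemoteThread", b"VirtualAllocEx"]  # heuristic layer

def scan(payload: bytes) -> str:
    digest = hashlib.md5(payload).hexdigest()
    if digest in WHITELISTED_HASHES:
        return "allow"        # known-good file, skip further checks
    if digest in KNOWN_BAD_HASHES:
        return "block"        # exact signature match
    if any(marker in payload for marker in SUSPICIOUS_MARKERS):
        return "flag"         # heuristic: suspicious API strings present
    return "monitor"          # fall through to behaviour-based watching
```

The point of the ordering is that cheap, certain checks (whitelist, signature) run before fuzzier, costlier ones (heuristics, behaviour monitoring), which is what keeps layered scanning fast on the common case.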
For the last few years, vendors have talked about their use of the cloud as a differentiator from competitors. In the last few months, however, there has been a sea change in marketing messages, and talking about "artificial intelligence" has become de rigueur.
Next week's RSA Conference is set to become a battleground for contrasting marketing claims about artificial intelligence and anti-malware.
Self-described next-generation antivirus firms, exemplified by Cylance, will argue that they are the first to apply artificial intelligence against the malware menace. In reality the technology is, in the opinion of this security writer, better described as pattern recognition and data analytics.
This approach brings benefits such as a much smaller footprint on client machines, a lower attack surface and a reduction in the number of updates needed. The marketing material doesn't talk about that, though – it talks about Cylance as the "first company to apply artificial intelligence, algorithmic science and machine learning to cybersecurity".
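"Pattern recognition and data analytics" in this context usually means scoring static file features against a trained model. A minimal sketch, assuming hypothetical features and hand-tuned weights in place of any vendor's actual trained model:

```python
import math
from collections import Counter

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; packed or encrypted files score high."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical hand-tuned weights standing in for a trained model.
WEIGHTS = {"entropy": 0.5, "has_mz_header": 1.0, "imports_crypto": 2.0}

def risk_score(data: bytes) -> float:
    """Weighted sum of static features extracted without running the file."""
    features = {
        "entropy": entropy(data) / 8.0,  # normalise to the 0..1 range
        "has_mz_header": 1.0 if data[:2] == b"MZ" else 0.0,
        "imports_crypto": 1.0 if b"CryptEncrypt" in data else 0.0,
    }
    return sum(WEIGHTS[k] * v for k, v in features.items())
```

Because everything is computed from the file's static content, no signature updates are needed and nothing has to execute – which is where the smaller-footprint claims come from.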
SentinelOne, another next-gen contender, also talks about delivering real-time protection powered by "machine learning and dynamic behaviour analysis", laying its own claim to applying AI to the security problem.
A load of spiel?
Established vendors are also claiming to use AI. Avast, Sophos (partly because of its recent acquisition of next-gen vendor Invincea) and more will also be talking artificial intelligence at San Francisco.
Long-standing experts argue that pattern recognition, theorem proving, neural networks, expert systems, machine vision – all "AI techniques" – have been applied in the anti-malware world for years.
There's yet a third leg on this marketing chair. As well as NGAV firms such as Cylance, which claims to be among the first to use artificial intelligence, and traditional developers, who say they have been pioneers in the field (without talking much about it), there's Carbon Black, which has begun talking about an alternative to AI. Its technology is based on event stream processing, the technique previously applied to algorithmic day-trading. Similar to those applications, "Streaming Prevention" continuously updates a risk profile based on a steady stream of computer activity.
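The event-stream idea can be sketched as a running score over a sliding window of endpoint events. The event names, weights and threshold below are invented for illustration and have nothing to do with Carbon Black's actual implementation:

```python
from collections import deque

# Hypothetical event weights and threshold, invented for illustration.
EVENT_WEIGHTS = {
    "process_start": 1,
    "registry_write": 2,
    "mass_file_encrypt": 10,
}
THRESHOLD = 12

class StreamingScorer:
    """Keeps a sliding window of recent events and a running risk total."""

    def __init__(self, window: int = 100):
        self.events = deque(maxlen=window)
        self.score = 0

    def observe(self, event: str) -> bool:
        if len(self.events) == self.events.maxlen:
            # Oldest event is about to age out of the window.
            self.score -= EVENT_WEIGHTS.get(self.events[0], 0)
        self.events.append(event)
        self.score += EVENT_WEIGHTS.get(event, 0)
        return self.score >= THRESHOLD  # True -> raise an alert / block
```

The design choice mirrors the day-trading analogy: no single event is judged in isolation; it's the recent sequence of activity that trips the threshold.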
The appearance of an alternative to AI for anti-malware would suggest that artificial intelligence is an established technique for combating malware.
Frankly, I'm skeptical.
What I can say for sure is that artificial intelligence has only recently begun reappearing in marketing pitches to tech reporters. The theme has come up before. CA talked about Neugents, neural network agents "smarter than a million Albert Einsteins", for a couple of years around the turn of the millennium.
Nothing much came of that technology, which (being charitable) might have come before its time. Maybe, in a new century, AI can tame the malware menace that has surpassed the ability of mere meat sacks to contain. Alternatively, artificial intelligence for antivirus might just be a rebrand of heuristics, more Gary Numan than Alicia Vikander, as some experts argue. ®
Whatever happens, we hope security software vendors avoid signing up singer and sometime tech pitchman Will.i.am to promote their wares. Look no further than Symantec's ill-fated HackIsWack stunt, which roped in rapper Snoop Dogg, for an example of how such efforts can go hopelessly wrong.