Kaspersky defends false detection experiment
Claws in copy cat dust-up
Kaspersky Lab has defended its handling of a controversial experiment criticised by some as a marketing exercise of questionable technical value.
The Russian anti-virus firm created 20 innocent executable files, adding fake malware detections for ten of the samples, before uploading the files to the online malware scanning service VirusTotal. VirusTotal routinely distributes samples of suspect files submitted to the service, which provides a useful tool for security pros to identify malware strains, to other vendors.
Ten days later, Kaspersky reports, 14 other vendors had added detection for the files it had deliberately (and falsely) labelled as malign.
Some representatives of security vendors, certainly those we spoke to at an event in Madrid last week, accused Kaspersky of attempting to suggest that other vendors were copying its malware detections without applying rigorous checks, at least in some cases. Kaspersky unveiled its findings to journalists attending its Moscow labs rather than at a security conference or through peer review, further stoking negative sentiment about the experiment.
VirusTotal, which is run by Spanish penetration testing firm Hispasec Sistemas, accused Kaspersky of misusing its services. Kaspersky's actions particularly irked VirusTotal because the service has in the past been accused of being misused by virus writers to test whether their creations would escape detection.
Magnus Kalkuhl, the senior virus analyst at Kaspersky who ran the test, said such criticisms were misplaced: "The goal was not to show any problems with VirusTotal or AV [anti-virus] vendors, but to show that AV vendors detecting a sample does not automatically guarantee that it's really malware - simply because false positives can happen, and they duplicate quickly," Kalkuhl told El Reg.
Kaspersky's blog is clear that its experiment was not intended to discredit any anti-virus firms but to highlight the "negative effect of cheap static on-demand tests" and the potential pitfalls of using multiple scanners as a way to flag malware detections.
"I've received feedback from people who were just focusing on the question of why other anti-virus companies would detect a clean file we uploaded. And I can only repeat as I did in the blog: This could have happened to us as well," Kalkuhl explained.
"So, if anyone might have seen this as a PR stunt, this is not what was intended," he added.
Anti-virus labs routinely deal with 50,000 malware samples a day. It's therefore inevitable that labs place some reliance on "source reputation and multi-scanning" rather than manual validation, at least as a temporary stopgap, net security firm Eset argues. It disagrees with Kaspersky's conclusion that dynamic testing will solve the problem.
Static testing only checks whether a potentially malign sample is recognised by its signature, while dynamic testing assesses the effectiveness of the whole chain of detection components in modern anti-virus products (cloud-based, behaviour-based) in labelling malware.
David Harley, director of malware intelligence at Eset, agrees that using scanners instead of analysis to validate samples is bad practice. However, he criticised the tactics Kaspersky used to draw attention to the admitted problem.
"I think most of us in the industry understand and sympathise with the point about multi-scanning as a (poor) substitute for sample validation, resulting in false positives being cascaded through the AV and testing industries," Harley said. "While I can't altogether like the way it was done (even though our lab didn't fall into the trap) I'll certainly raise a cheer for Kaspersky if we can re-focus on that problem, rather than on 'who copied who' and Virus Total."
Vendors share malware samples in multiple ways so it would have been far better if a route other than VirusTotal - "good guys who get dumped on quite a lot" - had been used in the experiment, Harley added. ®