Original URL: https://www.theregister.com/2007/08/09/anti_virus_testing/

Is AV product testing corrupt?

Who can you trust?

By Robin Bloor

Posted in Channel, 9th August 2007 09:37 GMT

I had a conversation a month or two ago with someone high up in one of the IT security companies. He was bemoaning the fact that his company's AV product had performed poorly in tests run by AV-Test.org. He was deeply suspicious of the results anyway because his company actually provides its AV engine to another company that had performed better in the test. He didn't see how that could be, unless a mistake had been made in running the tests.

As it happens, there are a few AV vendors who are less than impressed with "independent" AV tests. The published rankings influence buying decisions, yet quite a few vendors believe they don't reflect product capability.

I was sent a well-written essay on the topic, which I'm reproducing here. It explains the problem better than I can…

The Need for a Standard Malware Competitive Comparison

When I go to buy something, the first thing I do is check out the competitive reviews. If it's a household appliance, I'll look at Consumer Reports. If it's a car, I may look at Car and Driver® or Edmunds. But when you're looking at security for your home PC, who can you trust to give you an honest review?

The average consumer is being pummeled by competitive comparisons of the performance of anti-virus and anti-spyware products. The comparisons include the large and the small anti-malware vendors, and they produce amazingly discordant results. Can I place my computer's health and safety in a free, online product? Which of the major companies has the best performance? Major magazines report comparison statistics, but which do you trust?

One of my favorite quotes is attributed to Benjamin Disraeli and was popularised in the US by Mark Twain: "There are three kinds of lies: lies, damned lies, and statistics." The point is that statistics can be used to support an argument, or even twisted, depending on how the numbers are manipulated. This is a key issue with many of the product comparisons in the media today: depending upon who paid for, supported or endorsed a test, its bias can change wildly.

I was just reading an article that really hit the nail on the head. Jeremy Kirk, in an article called "Security vendors question accuracy of AV tests" published in InfoWorld, talked about how this debate is finally being noticed by the public. The people he quotes are absolutely correct in their opinions that the current tests aren't truly reflective of the capabilities of today's anti-malware solutions.

In the article, John Hawes, a technical consultant for Virus Bulletin, said the signature-based tests are "not enormously representative of the way things are in the real world". That is an understatement in my opinion.

In almost any industry today, the acknowledged correct practice is for evaluators to publish the criteria and methods used in their evaluations. This gives a clear, easily understood account of what was done, so that both the methods and the underlying criteria can be examined publicly and objectively. In the malware comparison market, such practices are not the norm, which is worrying because the results are often grossly misinterpreted.

As an analogy, if I were looking to build a system to detect cancer, I'd build one that detects every kind of cancer out there. I certainly wouldn't let individual drug companies supply me with samples of some "special" kind of cancer that only their drug works on; that would be silly. And what good would that do the public, or those who would lose their lives because their cancer went undetected?

This is analogous to testing practices in the anti-malware industry with respect to the detection of virus samples. It is widely known that individual companies are allowed to supply specific "samples" to several "independent testing companies" so that their product rates much higher than the competition's. This is not only unfair and technologically flawed, it crosses a clear ethical line. It is potentially harmful to all of us as consumers and individuals on the Internet, and to the Internet as a whole.

Let's take this one step further: say I put in a cancer sample that isn't even cancer, just a benign cell structure. Following the analogy, I could be told I have cancer when in fact I don't, and could be forced to undergo surgery purely because of a faulty detection.

One step further still: now I submit a "cancer" sample that is detected ONLY by my drug company. The implications of the analogy are obvious. Why aren't they obvious to the users of today's anti-virus products? Because there are only a couple of companies out there in the business of "selling" their "so-called" unbiased reviews of the anti-virus products on the market.

In a major industry publication, the following was said of one of these "anti-virus reviewers": "by his own admissions [he] does not verify that everything submitted to his list is in fact malware." That fact alone, with nothing else, should lead anyone to question the tests as a good measure of an anti-virus product's effectiveness in preventing a computer infection.

As if comparison testing weren't complicated enough, the anti-malware industry keeps adding new features and functionality to its products. Some anti-virus companies have added anti-spyware capabilities, some anti-spyware companies have added anti-virus scanning, and some "security suites" now include other kinds of protection, such as firewalls and even host-based intrusion prevention.

Are any of these blocking techniques included in the evaluation or detection comparisons? No. Isn't it more important how secure the product keeps the user, rather than simply how many samples from a tainted, untested, unvalidated, out-of-date and highly dubious boxed set it detects?

The public is getting a VERY corrupt and biased view, one that doesn't relate even remotely to the real-world level of malware protection a product provides. Yet the public is generally unaware of how much faith it places in the magazines that promote the results of these fly-by-night testing companies, which lack industry-standard credentials.

In the end, the computer user doesn't really care about detection rates or feature lists; they just want their computer protected as well as it possibly can be, with administration that's as easy as possible. My computer hasn't had a malware infection in YEARS, and according to the latest "review" in a major magazine, my product was in the bottom half. Hmmm, kind of makes you wonder, doesn't it?

The anti-malware industry needs a "gold standard" to guide the development of a fair and truly unbiased measure of a product's effectiveness. That gold standard needs to be untainted by graft, ignorance and pseudo-science. The public deserves it. The industry should pull together to force it. Until then, place your trust in anti-malware companies with a proven history of performance... or just hope you don't get hit while using one of the reviewers' "best" anti-virus products.

Copyright © 2007, IT-Analysis.com