My spam-filled search index is bigger than yours!
NCSA study backfires
Last week Yahoo! claimed it had sailed past Google by indexing 20 billion web pages.
Because as much as a third of the wild wild web consists of artificially-generated pages of spam designed to promote commercial web sites, this isn't much to boast about. Many of the fake pages are 'splogs', or spam blogs, or phoney catalogs, or simply pages of dictionary words. You can meet one of the perps here, in our story Interview with a link spammer.
And because few users have the patience to find the gem returned at position #12,711 in the search results, the size of an index is meaningless.
More is never a substitute for better.
Unofficial Google spokesperson John Battelle has disputed the claim, and now the supercomputer center NCSA has poured cold water on the boast, too.
In a sample comparison, three researchers at the University of Illinois at Urbana-Champaign found that Google returned more search results than Yahoo!. They conclude:
"It is the opinion of this study that Yahoo!'s claim to have a web index of over twice as many documents as Google's index is suspicious. Unless a large number of the documents Yahoo! has indexed are not yet available to its search engine, we find it puzzling that Yahoo!'s search engine consistently returned fewer results than Google."
A bold claim, for sure. There's a problem, however - NCSA's results are a little too incredible.
As Seth Finkelstein points out on his weblog, the NCSA researchers fed random dictionary words into both search engines - and not surprisingly, the results reflect pages containing random dictionary words.
"By sampling random words, they biased the samples to files of large word lists! And this effect applies, to a greater or lesser extent, to every sample."
Here's an example. For the spam-friendly gibberish words "carbolization clambers", Google returned 7 pages, all from a dictionary, and Yahoo! returned none. For the words "alkaloid's observance", Google returned 30 pages and Yahoo! none.
In other words, the methodology is geared not to measure who has the most useful documents, but who has the most spam. To be more precise, in these examples, Google returns a number of copies of a dictionary file. It's a different frequency of noise.
"Just to show the problem, imagine that Google had returned results of three dictionary files, and Yahoo one dictionary file," explains Finkelstein. "Do this *10,000* times, and you get Google returning 30,000 results, and Yahoo! returning 10,000 results. So, wow, Google has 3x the size! But, in fact, it's just the same little quirk being counted 10,000 times over!"
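Finkelstein's arithmetic can be sketched as a toy calculation. All the numbers below are hypothetical, taken from his illustration rather than from any real index data:

```python
# Toy illustration of Finkelstein's point: a single indexing quirk
# (one engine holding three copies of a dictionary file vs. one)
# gets re-counted on every random-word query, inflating the totals.

QUERIES = 10_000           # random dictionary-word queries sampled
GOOGLE_HITS_PER_QUERY = 3  # say Google returns 3 dictionary-file copies
YAHOO_HITS_PER_QUERY = 1   # and Yahoo! returns 1

google_total = QUERIES * GOOGLE_HITS_PER_QUERY  # 30,000 results
yahoo_total = QUERIES * YAHOO_HITS_PER_QUERY    # 10,000 results

# The ratio looks like a 3x index-size difference...
print(google_total / yahoo_total)  # 3.0
# ...but it's the same little quirk counted 10,000 times over,
# not 20,000 extra useful documents.
```

The point is that summing result counts across queries multiplies any per-query bias by the number of queries, so a quirk that affects every sample dominates the total.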
The intrepid NCSA researchers appear blissfully unaware that they've marched into a swamp.
"We feel that small, randomly selected search queries give us the best chance to locate some of the most obscure web documents," they insist. Nevertheless, they plough bravely on.
"By counting the presence of these obscure documents in either search engine, we can measure the comprehensiveness of each search engine to determine the relative size of each search engine's index," they maintain.
The Argentine writer Borges described the Falklands War as "two bald men fighting over a comb", and it's an apt description of the spectacle of the search engines, and their rival supporters, duking it out over who has collected the most garbage. ®