
Pimplier Batgirls and Sawdusty Barmen

Whacking Google's wordlist woes

Earlier this week the NCSA released a study that attempted to compare the respective merits of Google's and Yahoo!'s search engines. (See My spam-filled search index is bigger than yours!). Unfortunately, the only thing it proved was which search engine was publishing the most gibberish it had collected - a fact apparently lost on the researchers. The three academics insisted that because Google was returning more gibberish, it must be doing a better job.

Doh!

The phenomenon, we discover, is relatively recent, and it's an unintended consequence of both search engines trying to make their searches more comprehensive. The trouble is that Google is returning pages which are nothing but great long lists of words as valid search results, when rarely, if ever, is this what the searcher is looking for. Unless you have a thing for strange combinations of words.

But strangely enough, some people do have a thing for strange combinations of words - and this week, it's these very hobbyists who have been able to shed more light on the search giants' internal operations than the academics. It's Google Whacking - the art of finding two words that produce just a single result from the search engine - which comedy writer Dave Gorman turned into a book and a stage show. You can see a list of the most recently discovered Google Whacks here. Current Whacks at the time of writing include "rhubarb underkill", "oxymoronically flakier", "overpaid brainworkers" - somewhat surprisingly - and, to our relief, there's only one web page in the world with the words "subhuman stepsiblings" close enough together to merit a hit.

Don't expect these Whacks to work for long - as soon as they're repeated on a public forum, and Google finds them, they cease to be Whacks. Today's most delightful Whacks, like "pimplier batgirls" and "sawdusty barmen", are sure to be gone tomorrow.
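The Whack test itself is simple to state in code. Here's a minimal sketch - the `result_count` callable is a hypothetical stand-in for a real search API's hit count, not anything Google actually exposes this way:

```python
def is_googlewhack(word_a: str, word_b: str, result_count) -> bool:
    """A Googlewhack is a two-word query returning exactly one result.

    `result_count` is a hypothetical callable standing in for a search
    engine's hit count. The official rules also require both words to be
    dictionary words and the hit not to be a wordlist - neither of which
    is checked here.
    """
    return result_count(f"{word_a} {word_b}") == 1
```

The ephemerality described above falls straight out of this definition: the moment a second page (say, a forum post quoting the Whack) enters the index, the count ticks to two and the predicate goes false.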

Search guru Gary Stock, CTO of Nexcerpt, who coined the phrase, says things have been very strange indeed recently in the world of Google and Yahoo!

What the Whackers have highlighted is how the major search engines are struggling to cope with word lists - like this one - with Yahoo! doing a rather better job than Google. It's a consequence, he says, of both search engines removing the cap from the length of documents that they index.

You may have marvelled at how so many web pages appeared to be precisely 101KB long. Well, for some years Google only indexed the first 101KB of a web page, and that's all you could see. A year ago Yahoo! lifted the cap, and more recently Google followed suit. The result has been to open the door to more gibberish.

"The contribution was distinct and dramatic," says Stock. "Googlewhackers were commenting on the change well before it was noted in most search forums. We're exposed to a lot of raw, peculiar stuff around the edges."

Google was already pretty good at recognizing duplicates, he points out. But not quite good enough. Surely this isn't rocket science to fix, we wondered?

"Google has the advantage of knowing precisely what is in their corpus, and being able to call upon vast research and statistical data about what is - and what is not - legitimate text. Wordlists look exactly like … lists of words!

"Here's a clue for low hanging fruit: legitimate text won't include a sequence of 1,000 words in alphabetical order - with no punctuation!"

"If Google wants to exclude other machine-generated texts (search 'hipcrime' or 'sporgeries' for many scary instances), that'll require a bit more effort. Again though, they have plenty of data to create useful profiles, and plenty of CPU to apply them."

"If Googlewhack's code can identify wordlists with a high degree of accuracy, then Google could readily make it part of The Machine."

That's a point made by reader Jeremy Pickens, although it may sound counter-intuitive at first pass.

"Just because a page never gets returned in the top 1,000 doesn't mean there is no use in having that page in your index," he writes. "For example, if Yahoo! were to do some sort of statistical or structural analysis of those additional 10 billion spam pages, they might be able to use all that information to better detect/classify search spam in the future."

"So, kudos to Yahoo! for actually taking the disk space to store all that extra information, instead of just throwing it away."

Reader David notes, "The bottom line seems to be that either Google is better at including obscure pages, or Yahoo! is better at rejecting useless pages - and the NCSA excels at keeping incompetent PhDs funded."®

Bootnote: Martin Torzewski earns himself the last word with this thought: "Your reference to Borges prompted the thought that his story 'The Library of Babel' might be relevant here!" Too true, alas.
