When algorithms attack, does Google hear you scream?
Inside Google's search penalties gulag
Google rankings and the company's ever-evolving explanations of them are clearly self-serving. But how much of this is happy coincidence, and how much dictated by corporate agenda?
It seems unlikely, considering the awful example of Microsoft, that Google execs would deliberately set out to suppress competing services and technologies. Microsoft at least had the defence that it was a commercial company competing with rivals; Google's business depends on its maintaining at least a general impression of impartiality. If its execs start snarling "knife the baby", it's dead. Or as Google CEO Eric Schmidt put it recently, in a frankly bizarre quote: "If we went into a room and were exposed to evil light and came out and announced evil strategies, we would be destroyed. The trust would be destroyed... We have not yet found the evil room on our campus."
The notion of evil as some kind of force that is beamed at you by aliens, as opposed to something that grows within yourself - or your company - is a curious one. But consider instead how the hive mind, running a company that is by (its own) definition not evil, might do an equivalent job.
If you're absolutely certain that your company is a force for good, then by extension you're going to think everything your company does is good too. If your company is deciding what constitutes a quality website or service, then obviously your company's websites and services are going to be quality ones, right? And if you believe, however erroneously, that the rankings and weighting your company assigns to sites and services are generated by entirely impartial algorithms, then when people get in touch with you to complain about those rankings and weightings, you're going to be unshakably convinced that you're right and they're wrong.
Most Google staff probably do believe this, just as most users do. Add to this the fact that, at a superficial level, it can be quite difficult for a human to distinguish between genuine businesses, scams, and quick-and-dirty travel aggregators and comparison sites. Spend a little while checking and a human most likely could figure out the genuine ones, but hey, you've got thousands of scam sites to deal with, thousands of people bitching at you about their rankings, and you've got perfect algorithms doing all the heavy lifting for you anyway, right? Think IT support departments, but for search. Try reinstalling your site, then call us back if that doesn't fix it.
And while the documentation may have changed, Google's High Command still pitches the legend of the magic algorithm inside the black box that's going to solve all your problems. Eric Schmidt, for example, envisages Google eventually providing a single, perfect answer to your question: "We'll get to the point - the long-term goal is to be able to give you one answer, which is exactly the right answer over time... And what I'd like to do is to get to the point where we could read his site and then summarize what it says, and answer the question..."
Surrender to the machine, it knows best
So in the world he's describing, Google is in a continuing quest for machine perfection. There's no view or opinion here, just great code that gets ever closer to perfection. And he's been at it for a long time too. "I keep asking for a product called Serendipity. This product would have access to everything ever written or recorded, know everything the user ever worked on and saved to his or her personal hard drive, and know a whole lot about the user's tastes, friends and predilections."
Google will know everything about everything - including you - so it will know what you want, even if you don't know you want it. Just give in - you know it makes sense.
But what if we're not on the road to perfection? Schmidt's visions, and those of other Google execs, are arguably more relevant to what Google was rather than to what it is now. The search examples they give tend to be about verifiable facts (eg "what percentage of Americans have [sic] passports?"), whereas now a lot of what Google is about is assessing the worth of, and operating its own, services. Separating the good ones from the duds and the scams involves setting up hurdles that are more a matter of personal taste than verifiable fact, and in the absence of a perfect algorithm that zaps the bad guys and leaves the good ones intact, Google is going to need humans to whitelist an increasingly large number of sites.
Alongside this, as Google drives further towards its Universal Search goals, it will increasingly find itself organising results that are about shopping and services, and Google Products and AdWords Comparison Ads will come ever more into the foreground, grinding up against the web stores and service vendors who're already trying to deliver the stuff consumers want out of the Internet.
Which will make it even more important that Google has a transparent and speedy appeals process; but will also make it even more obvious, to a lot more people, that there's a lot more human involvement in the magic algorithms than people thought.
And the algorithms? Where does that leave them? Consider the possibility that they're not actually getting better, that maybe they're slowly being overwhelmed. It nearly happened once before. ®