Original URL: http://www.theregister.co.uk/2009/04/09/google_and_hadoop/

Hadoop - Why is Google juicing Yahoo! search?

Inside the Mountain View mind

By Cade Metz

Posted in Software, 9th April 2009 01:01 GMT

It's the Google equivalent of the everlasting gobstopper. And for some reason, the Mountain View Chocolate Factory has encouraged a knockoff industry among its Slugworthian rivals.

Considering the code of secrecy that typically envelops Google's internal operations, you have to wonder why the company helped foster the birth and ongoing development of Hadoop, the open-source incarnation of the new-age grid-computing platform that underpins its vast online infrastructure. Hadoop now drives at least a portion of Yahoo!'s search engine, and it runs Powerset, the basis for Microsoft's next-generation search extravaganza.

According to Christophe Bisciglia - the former Google engineer who recently jumped ship for the much-discussed Hadoop startup Cloudera - any advantages Hadoop bestows on Google's chief rivals are outweighed by the long-term benefits shoveled back into the Chocolate Factory. Famously, Hadoop is an educational tool for the next generation of Google Oompa Loompas, and in theory its widespread adoption will eventually shove more stuff through Google's own search engine - meaning Google can serve more ads and make more money.

But, it seems, the old Google arrogance is also at play. In sharing its distributed-computing genius with the rest of the world, Bisciglia says, Google "showed the world that they were right."

In 2004, Google published a pair of research papers describing its distributed file system, known as GFS, and its software framework for distributed data-crunching, known as MapReduce. And in short order, an independent developer named Doug Cutting launched an open-source project based on the two papers. He called it Hadoop after his son's yellow stuffed elephant.
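For the uninitiated, the core idea of those papers can be sketched in miniature. The following is an illustrative single-machine toy - not Google's or Hadoop's actual code - showing the canonical word-count example: a "map" step emits (key, value) pairs, a "shuffle" groups them by key, and a "reduce" step combines each group.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values - here, a simple sum.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 3
print(counts["fox"])  # 2
```

The point of the real thing, of course, is that each phase is farmed out across thousands of machines, with GFS (or Hadoop's HDFS) holding the data underneath.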

By early 2006, Yahoo! was toying with the project, and the Google rival soon put Cutting on the payroll, slowly rolling Hadoop into its back-end infrastructure. The open-source platform powers the new Yahoo! Search Webmap, a mega-app that builds a database of all known web pages – complete with all the metadata needed to, shall we say, understand them. According to Yahoo! Grid Computing Pooh-Bah Eric Baldeschwieler, the fledgling app draws its map 33 per cent faster than the company's previous system - on the same hardware.

Facebook has embraced Hadoop in similar fashion. Amazon is offering the platform as a web service over its AWS virtual data center. And even Microsoft is feeding off the project's open-sourciness, thanks to its recent purchase of Powerset.

But in a very different way, Hadoop has also become a valuable tool for Google itself.

Big Data 101

When Christophe Bisciglia was still at Google, interviewing student engineers for admission to the Chocolate Factory, he was struck by how difficult it was for the uninitiated to grasp the company's multi-terabyte data transformations.

"I started notice repeating pattern when interviewing students," he tells The Reg. "I would say 'OK, that's a great solution to the problem, but what would you do if you had a 1000 times as much data?' And they would just stare out at me, blank. It wasn't that they weren't smart or talented. It's just they'd never had the exposure."

In the hopes of shrinking this education gap, Google sent Bisciglia back to his alma mater, the University of Washington, where he taught a course on "working with big data." And Hadoop was the teaching model.

Google ended up hiring about half the students who took the class. And after the company open-sourced the curriculum, the same course was picked up by several other universities, including MIT and Berkeley. "In the past, it took three to six months to get hires up to speed with how to work with [Google] technology," Bisciglia says. "But if schools are teaching this as part of the standard undergraduate curriculum, Google saved that three to six months - multiplied by thousands of engineers."

To further facilitate such education, the company set up a Hadoop cluster inside one of its (then top secret) data centers, offering access to researchers across the planet.

Yes, this also juices the Yahoo!s and the Microsofts of the world. But Google is fond of saying "what's good for the internet is good for Google."

"As a result of having this large-scale data-processing technology easily available in open-source form, it makes it easier for other business to create and publish more data," ex-Googler Christophe Bisciglia says. "The more data that other business create and publish, the more data Google can slurp up and make universally accessible and useful."

Why didn't Google just open-source MapReduce and GFS on its own? Bisciglia says the company mulled the idea "a little bit," but decided it was less than practical. "MapReduce and GFS are intimately integrated with so many other systems. Trying to cleanly excise them would be a software engineering challenge that would take millions of man-hours. There would be no clean way to cut it out."

Plus, by the time Google got around to its mulling, Hadoop was already a thriving open-source project. "It had a good community around it. It was seeing adoption at Yahoo! and Facebook," he says. "It wouldn't have been good for the community to have these two competing projects that do the same thing."

And, Bisciglia acknowledges, Google likes the fact that its internal platform is "just a little bit better."

Last year, Hadoop researchers set a record on Jim Gray's sort benchmark, sorting a terabyte of random data in three minutes across 900 machines. But shortly thereafter, Google couldn't help but pipe up with the claim that its very own MapReduce had done the job in just 60 seconds.

When it comes time to praise itself, Google isn't above lifting the code of secrecy. ®