
Google search index splits with MapReduce

Welds BigTable to file system 'Colossus'

Exclusive: Google Caffeine — the remodeled search infrastructure rolled out across Google's worldwide data center network earlier this year — is not based on MapReduce, the distributed number-crunching platform that famously underpinned the company's previous indexing system. As the likes of Yahoo!, Facebook, and Microsoft work to duplicate MapReduce through the open source Hadoop project, Google is moving on.

According to Eisar Lipkovitz, a senior director of engineering at Google, Caffeine moves Google's back-end indexing system away from MapReduce and onto BigTable, the company's distributed database platform.

As Google's Matt Cutts told us last year, the new search infrastructure also uses an overhaul of the company's GFS distributed file system. This has been referred to as Google File System 2 or GFS2, but inside Google, Lipkovitz says, it's known as Colossus.

Caffeine expands on BigTable to create a kind of database programming model that lets the company make changes to its web index without rebuilding the entire index from scratch. "[Caffeine] is a database-driven, Big Table–variety indexing system," Lipkovitz tells The Reg, saying that Google will soon publish a paper discussing the system. The paper, he says, will be delivered next month at the USENIX Symposium on Operating Systems Design and Implementation (OSDI).

Before the arrival of Caffeine, Google built its search index — an index of the entire web — using MapReduce. In essence, the index was created by a sequence of batch operations. The MapReduce platform "maps" data-crunching tasks across a collection of distributed machines, splitting them into tiny sub-tasks, before "reducing" the results into one master calculation.

MapReduce would receive the epic amounts of webpage data collected by Google's crawlers, and it would crunch this down to the links and metadata needed to actually search these pages. Among so many other things, it would determine each site's PageRank, the site's relationship to all the other sites on the web.
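
To make the shape of that pipeline concrete, here is a minimal sketch in Python. It is an illustration only, not Google's code: the input data, the link-extraction "map" step, and the aggregating "reduce" step are all invented for the example.

```python
from collections import defaultdict

# Hypothetical crawled input: page URL -> list of outbound links.
# An illustrative stand-in, not Google's data format.
crawled_pages = {
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example"],
    "c.example": ["a.example"],
}

def map_phase(pages):
    """'Map': emit one (target, source) record for every link a page makes."""
    for source, links in pages.items():
        for target in links:
            yield target, source

def reduce_phase(pairs):
    """'Reduce': group all sources pointing at each target into one entry."""
    inbound = defaultdict(list)
    for target, source in pairs:
        inbound[target].append(source)
    return dict(inbound)

# The whole batch must finish before the output can be copied to serving.
link_map = reduce_phase(map_phase(crawled_pages))
print(link_map)
# {'b.example': ['a.example'], 'c.example': ['a.example', 'b.example'], 'a.example': ['c.example']}
```

The point is structural: nothing is servable until the entire batch has run to completion, which is why the output was only copied out "eight hours or so later."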

"We would start with a very large [collection of data] and we would process it," Lipkovitz tells us. "Then, eight hours or so later, we'd get the output of this whole process and we'd copy it to our serving systems. And we did this continuously."

But MapReduce didn't allow Google to update its index as quickly as it would like. In the age of the "real-time" web, the company is determined to refresh its index within seconds. Over the last few years, MapReduce has received ample criticism from the likes of MIT database guru Mike Stonebraker, and according to Lipkovitz, Google long ago made "similar observations." MapReduce, he says, isn't suited to calculations that need to occur in near real-time.

MapReduce is a sequence of batch operations, and generally, Lipkovitz explains, you can't start your next phase of operations until you finish the first. It suffers from "stragglers," he says. If you want to build a system that's based on a series of map-reduces, there's a certain probability that something will go wrong, and this gets larger as you increase the number of operations. "You can't do anything that takes a relatively short amount of time," Lipkovitz says, "so we got rid of it."
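
As a rough illustration of that compounding effect, the snippet below assumes an invented per-stage failure rate and shows how the odds of a clean end-to-end run fall as stages are chained. The figures are made up for the example, not Google's.

```python
# If each batch stage independently fails or straggles with probability p,
# the chance that a chain of n stages completes cleanly shrinks as n grows.
p = 0.01          # assumed per-stage failure probability (illustrative)
for n in (1, 10, 50, 100):
    ok = (1 - p) ** n
    print(f"{n:3d} stages: {ok:.1%} chance of a clean run")
# 100 stages: roughly a 36.6% chance of a clean run, even at 1% per stage
```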

With Caffeine, Google can update its index by making direct changes to the web map already stored in BigTable. Caffeine includes a kind of framework that sits atop BigTable, which Lipkovitz compares to old-school database programming and the use of "database triggers."

"It's completely incremental," he says. When a new page is crawled, Google can update its index with the necessarily changes rather than rebuilding the whole thing.

Previously, Google's index was separated into layers, and some layers updated faster than others. The main layer wouldn't be updated for about two weeks. "To refresh a layer of the old index, we would analyze the entire web, which meant there was a significant delay between when we found a page and made it available to you," Google said in a blog post when Caffeine was rolled out.

"With Caffeine, we analyze the web in small portions and update our search index on a continuous basis, globally. As we find new pages, or new information on existing pages, we can add these straight to the index. That means you can find fresher information than ever before — no matter when or where it was published."

There's also a "compute framework," Lipkovitz says, that lets engineers execute code atop BigTable. And the system is underpinned by Colossus, the distributed storage platform also known as GFS2. The original Google File System, he says, didn't scale as well as the company would like.

Colossus is specifically designed for BigTable, and for this reason it's not as suited to "general use" as GFS was. In other words, it was built for the new Caffeine search-indexing system, and though it may be used in some form with other Google services, it isn't the sort of thing that's designed to span the entire Google infrastructure.

At Wednesday's launch of Google Instant, a new incarnation of Google's search engine that updates search results as you type, Google distinguished engineer Ben Gomes told The Reg that Caffeine was not built with Instant in mind, but that it helped deal with the added load placed on the system by the "streaming" search service.

Lipkovitz makes a point of saying that MapReduce is by no means dead. There are still cases where Caffeine uses batch processing, and MapReduce is still the basis for myriad other Google services. But prior to the arrival of Caffeine, the indexing system was Google's largest MapReduce application, so use of the platform has been significantly, well, reduced.

In 2003 and 2004, Google published research papers on GFS and MapReduce, respectively, and these became the basis for the open source Hadoop platform now used by Yahoo!, Facebook, and — yes — Microsoft. But as Google moves beyond GFS and MapReduce, Lipkovitz stresses that he is "not claiming that the rest of the world is behind us."

"We're in business of making searches useful," he says. "We're not in the business of selling infrastructure." ®
