
Google Percolator – global search jolt sans MapReduce comedown

The machine that brews the Caffeine

Death to stragglers

In a Percolator cluster, three pieces run on each machine: a Percolator worker, a BigTable tablet server, and a GFS chunkserver. With GFS, master nodes oversee data spread across a series of distributed chunkservers, which store, yes, chunks of data. Observers hook into the Percolator worker, and the worker interfaces with BigTable. GFS, as Lipkovitz explained, serves as BigTable's underlying storage layer.
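The paper doesn't spell this layering out in code, but a minimal Python sketch helps fix the roles in mind. Every class name below is invented for illustration; this is not Google's actual API:

    # Toy sketch of the three co-located processes on one Percolator
    # machine. All names here are illustrative, not Google's API.

    class GFSChunkserver:
        """Holds chunks of data; GFS masters track where chunks live."""
        def __init__(self):
            self.chunks = {}

    class TabletServer:
        """BigTable tablet server: serves row ranges, persisted via GFS."""
        def __init__(self, storage):
            self.storage = storage

    class PercolatorWorker:
        """Hosts observers and turns their reads and writes into
        BigTable operations against the tablet servers."""
        def __init__(self, bigtable):
            self.bigtable = bigtable
            self.observers = []

    # Each machine stacks all three, storage at the bottom:
    worker = PercolatorWorker(TabletServer(GFSChunkserver()))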

Whereas MapReduce nabbed the data for tens or even hundreds of webpages in a single batch read, Percolator executes roughly fifty individual BigTable operations when processing a single document.

Google Percolator setup

Percolator applications are essentially a series of observers. Each observer completes a task and passes more work on to the next observer by writing to the table (a toy sketch of this chaining follows below). There are relatively few observers per app: Caffeine uses about 10. Because the system can operate without rescanning the entire index, it's much simpler than the 100-MapReduce indexing setup of the past. And with latency reduced, Google can expand the size of its index. Caffeine's collection of documents is three times larger than that used by the old MapReduce system.

The size of the system, Google engineers say, is limited only by the available disk space.
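The paper's example observers are written in C++; the runnable Python toy below shows the chaining idea instead. The table layout, column names, and notification loop are all invented for illustration: each observer watches a column, and its writes are what wake the next observer.

    # Toy sketch of Percolator-style observer chaining (names invented):
    # a write to a watched column triggers that column's observer, and
    # the observer's own writes can trigger the next one in the pipeline.

    class Table:
        """Stand-in for a BigTable table: (row, column) -> value."""
        def __init__(self):
            self.cells = {}
            self.observers = []

        def get(self, row, column):
            return self.cells.get((row, column))

        def set(self, row, column, value):
            self.cells[(row, column)] = value
            # Deliver the change to whichever observer watches this column.
            for obs in self.observers:
                if obs.watched_column == column:
                    obs.on_change(self, row)

    class DocumentProcessor:
        watched_column = "raw:document"      # fires when a crawled page lands
        def on_change(self, table, row):
            doc = table.get(row, "raw:document")
            links = [word for word in doc.split() if word.startswith("http")]
            table.set(row, "parsed:links", links)  # this write wakes LinkInverter

    class LinkInverter:
        watched_column = "parsed:links"      # fires on the write above
        def on_change(self, table, row):
            for target in table.get(row, "parsed:links"):
                table.set(target, "anchor:from", row)

    table = Table()
    table.observers = [DocumentProcessor(), LinkInverter()]
    table.set("example.com", "raw:document", "see http://a.com and http://b.com")
    print(table.get("http://a.com", "anchor:from"))  # -> example.com

In the real system, each observer run is a transaction and notifications are persisted so they survive crashes; the toy loop above skips all of that.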

Percolator also avoids the MapReduce "straggler" problem, where a few slow operations can hold up the entire process, and according to the Google engineers, it's easier to operate. "In the old system, each of a hundred different MapReduces needed to be individually configured and could independently fail. Also, the 'peaky' nature of the MapReduce workload made it hard to fully utilize the resources of a datacenter compared to Percolator’s much smoother resource usage."
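To see why stragglers hurt a batch system, note that a MapReduce phase finishes only when its slowest task does. A back-of-the-envelope illustration, with invented numbers:

    # Illustrative straggler arithmetic (numbers invented): a batch phase
    # finishes only when its slowest task does, so one slow machine
    # dictates the latency of the whole phase.
    task_minutes = [1.0] * 999 + [30.0]   # 999 healthy tasks, one straggler

    average = sum(task_minutes) / len(task_minutes)
    phase = max(task_minutes)             # the phase waits for the straggler

    print(f"average task time: {average:.2f} min")   # ~1.03 min
    print(f"phase completes in: {phase:.0f} min")    # 30 min

Percolator sidesteps this because each document moves through the pipeline on its own, rather than waiting for a batch to drain.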

The rub is that Caffeine uses roughly twice the resources to keep up with the same crawl rate. According to the paper, Percolator performance lies somewhere between that of MapReduce and a traditional database management system (DBMS). "Because Percolator is a distributed system, it uses far more resources to process a fixed amount of data than a traditional DBMS would; this is the cost of its scalability. Compared to MapReduce, Percolator can process data with far lower latency, but again, at the cost of additional resources required to support random lookups."

Google Percolator benchmarks

According to Peng and Dabek, the performance of the system scales almost linearly as resources are added, as indicated by tests with the industry-standard TPC-E benchmark. But that added overhead may be an issue. "The system achieved the goals we set for reducing the latency of indexing a single document with an acceptable increase in resource usage compared to the previous indexing system," the paper concludes.

"The TPC-E results suggest a promising direction for future investigation. We chose an architecture that scales linearly over many orders of magnitude on commodity machines, but we’ve seen that this costs a significant 30-fold overhead compared to traditional database architectures. We are very interested in exploring this tradeoff and characterizing the nature of this overhead: how much is fundamental to distributed storage systems, and how much can be optimized away?" ®
