Google Percolator – global search jolt sans MapReduce comedown

The machine that brews the Caffeine

Death to stragglers

In a Percolator cluster, three pieces run on each machine: a Percolator worker, a BigTable tablet server, and a GFS chunkserver. With GFS, a master node oversees data spread across a series of distributed chunkservers, which store, yes, chunks of data. Observers hook into the Percolator worker, the worker interfaces with BigTable, and GFS, as Lipokovitz explained, is the database's underlying storage engine.
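
To make that layering concrete, here's a minimal sketch of the three co-located pieces, assuming toy stand-in classes; none of these names are Google's actual interfaces.

```python
# Illustrative sketch of the per-machine Percolator stack. All class and
# method names here are hypothetical stand-ins, not Google's real APIs.

class GFSChunkserver:
    """Stores chunks of file data; a GFS master (not shown) tracks
    which chunks live on which chunkservers."""
    def __init__(self):
        self.chunks = {}

    def write(self, chunk_id, data):
        self.chunks[chunk_id] = data


class BigTableTabletServer:
    """Serves a slice (tablet) of a BigTable table and persists it
    through the GFS layer underneath."""
    def __init__(self, gfs: GFSChunkserver):
        self.gfs = gfs

    def write(self, row, column, value):
        self.gfs.write((row, column), value)


class PercolatorWorker:
    """Hosts observers and performs reads and writes against BigTable
    on their behalf."""
    def __init__(self, tablet_server: BigTableTabletServer):
        self.bigtable = tablet_server


# One machine in the cluster runs all three, stacked in this order.
machine = PercolatorWorker(BigTableTabletServer(GFSChunkserver()))
```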

Whereas a MapReduce pass nabbed all of the data for tens or even hundreds of webpages at a time, Percolator executes roughly fifty BigTable operations when processing a single document.

Google Percolator setup

Percolator applications are essentially a series of observers. Each observer completes a task and passes more work on to the next observer by writing to the table. There are relatively few observers per app: Caffeine uses about 10. Because the system can operate without rescanning the entire index, it's much simpler than the old indexing setup and its chain of 100 MapReduces. And with latency reduced, Google can expand the size of its index: Caffeine's collection of documents is three times larger than that used by the old MapReduce system.
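
Here's a minimal sketch of that observer chaining, assuming a toy in-memory table; Percolator's real observers are registered against BigTable columns, and every name below is illustrative rather than Google's API.

```python
# Toy sketch of Percolator-style observer chaining; names are illustrative.

class Table:
    def __init__(self):
        self.rows = {}        # (row, column) -> value
        self.observers = {}   # column -> callback fired on writes to it

    def register(self, column, observer):
        self.observers[column] = observer

    def write(self, row, column, value):
        self.rows[(row, column)] = value
        # A write notifies the observer watching this column -- this is
        # how one stage of the pipeline hands work to the next.
        if column in self.observers:
            self.observers[column](self, row)


def parse_document(table, row):
    # First observer: normalize the raw page, then write the result,
    # which triggers the next observer downstream.
    raw = table.rows[(row, "raw")]
    table.write(row, "parsed", raw.lower())


def extract_links(table, row):
    # Second observer: pull link-like tokens out of the parsed text.
    parsed = table.rows[(row, "parsed")]
    links = [w for w in parsed.split() if w.startswith("http")]
    table.write(row, "links", links)


table = Table()
table.register("raw", parse_document)
table.register("parsed", extract_links)

# Loading one crawled page kicks off the whole observer chain.
table.write("example.com/index", "raw", "Visit HTTP://EXAMPLE.COM/about today")
print(table.rows[("example.com/index", "links")])  # ['http://example.com/about']
```

The key property is that writing a result is itself the trigger for the next stage: each document moves through the pipeline on its own, with no batch-wide barrier between stages.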

The size of the system, Google engineers say, is limited only by the available disk space.

Percolator also avoids the MapReduce "straggler" problem, where a few slow operations can hold up the entire process, and according to the Google engineers, it's easier to operate. "In the old system, each of a hundred different MapReduces needed to be individually configured and could independently fail. Also, the 'peaky' nature of the MapReduce workload made it hard to fully utilize the resources of a datacenter compared to Percolator’s much smoother resource usage."
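
A toy back-of-the-envelope illustration (not Google code) of why that matters: under a MapReduce-style barrier, every document in the batch waits for the slowest task, while per-document processing lets a straggler delay only itself.

```python
import statistics

# Hypothetical per-task processing times in seconds; one task straggles.
task_times = [1.0, 1.1, 0.9, 1.0, 30.0]

# MapReduce-style barrier: the stage isn't done until its slowest task is,
# so the straggler gates every document in the batch.
batch_latency = max(task_times)                      # 30.0 seconds

# Percolator-style incremental processing: each document proceeds on its
# own, so a typical document is unaffected by the straggler.
typical_doc_latency = statistics.median(task_times)  # 1.0 seconds

print(f"batch: {batch_latency}s, incremental (median doc): {typical_doc_latency}s")
```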

The rub is that Caffeine uses roughly twice the resources to keep up with the same crawl rate. According to the paper, Percolator performance lies somewhere between that of MapReduce and a traditional database management system (DBMS). "Because Percolator is a distributed system, it uses far more resources to process a fixed amount of data than a traditional DBMS would; this is the cost of its scalability. Compared to MapReduce, Percolator can process data with far lower latency, but again, at the cost of additional resources required to support random lookups."

Google Percolator benchmarks

According to Peng and Dabek, the performance of the system scales almost linearly as resources are added, as indicated by tests with the industry-standard TPC-E benchmark. But the added overhead may be an issue. "The system achieved the goals we set for reducing the latency of indexing a single document with an acceptable increase in resource usage compared to the previous indexing system," the paper concludes.

"The TPC-E results suggest a promising direction for future investigation. We chose an architecture that scales linearly over many orders of magnitude on commodity machines, but we’ve seen that this costs a significant 30-fold overhead compared to traditional database architectures. We are very interested in exploring this tradeoff and characterizing the nature of this overhead: how much is fundamental to distributed storage systems, and how much can be optimized away?" ®
