Google search primed for 'Caffeine' injection
A shot in the back-end
Google has completed testing on "Caffeine," a semi-mysterious overhaul of its back-end search infrastructure, and it will soon roll out the new platform behind its live search engine.
In mid-August, Google unveiled an online sandbox where it invited world+dog to test the new infrastructure, but as noticed by Mashable.com, the sandbox has been replaced by a brief message from the Mountain View Chocolate Factory.
"Based on the success we've seen, we believe Caffeine is ready for a larger audience," Google's missive reads. "Soon we will activate Caffeine more widely, beginning with one data center. This sandbox is no longer necessary and has been retired, but we appreciate the testing and positive input that webmasters and publishers have given."
Previously, über-Googler Matt Cutts told The Reg that the new infrastructure was under test in a single data center - though he declined to say which one. A Google spokesman indicates that Caffeine will now be moved to a second data center for live deployment, adding that this will happen "over the next few months."
In typical Google fashion, the company has been coy about the design of Caffeine. But Matt Cutts acknowledged that it's built atop a complete revamp of the company's custom-built Google File System (GFS). Two years in the making, the new file system is known, at least informally, as GFS2.
"There are a lot of technologies that are under the hood within Caffeine, and one of the things that Caffeine relies on is next-generation storage," Cutts said. "Caffeine certainly does make use of the so-called GFS2." Caffeine includes other fresh additions to Google's famously distributed infrastructure, but Cutts declined to describe them.
Speaking with The Reg, Matt Cutts described Caffeine as an overhaul of Google's search indexing system. "Caffeine is a fundamental re-architecting of how our indexing system works," he said. "It's larger than a revamp. It's more along the lines of a rewrite. And it's really great. It gives us a lot more flexibility, a lot more power. The ability to index more documents. Indexing speeds - that is, how quickly you can put a document through our indexing system and make it searchable - is much, much better."
Building a search index is an epic number-crunching exercise. Today, Google handles the task using its proprietary Google File System, which stores the data, in tandem with a distributed technology called MapReduce, which crunches it. But these tools underpin Google's other services as well, from Gmail to YouTube.
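The map/reduce pattern the article refers to can be shown in miniature. The sketch below builds a toy inverted index - a mapping from words to the pages that contain them - using the same three stages (map, shuffle, reduce) that MapReduce distributes across thousands of machines. The function names and the tiny "corpus" are illustrative assumptions, not Google's implementation.

```python
from collections import defaultdict

# Toy illustration of the MapReduce pattern, not Google's code:
# build an inverted index (word -> list of pages) from crawled text.

def map_phase(doc_id, text):
    """Emit a (word, doc_id) pair for every word in a document."""
    for word in text.lower().split():
        yield word, doc_id

def shuffle(pairs):
    """Group emitted pairs by key, as the framework does between stages."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(word, doc_ids):
    """Collapse each word's postings into a sorted, de-duplicated list."""
    return word, sorted(set(doc_ids))

docs = {
    "page1": "caffeine speeds up search indexing",
    "page2": "search indexing runs on gfs",
}

emitted = [pair for doc_id, text in docs.items()
           for pair in map_phase(doc_id, text)]
index = dict(reduce_phase(w, ids) for w, ids in shuffle(emitted).items())

print(index["search"])    # ['page1', 'page2']
print(index["caffeine"])  # ['page1']
```

In the real system, the map and reduce stages run in parallel on machines holding GFS-resident chunks of the crawl, which is why storage and computation are so tightly coupled.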
Google's overarching philosophy is to build a single, distributed architecture that runs all its services. And Cutts acknowledged that many of the back-end tools that drive the new indexing system - including GFS2 - will eventually be put to use across other Google services.
Part of the appeal of GFS2 is that it's specifically designed to handle low-latency applications, including Gmail and YouTube. With the original GFS, a master node oversees data spread across a series of distributed "chunkservers." For apps that require low latency, that lone master is a problem.
"One GFS shortcoming that this immediately exposed had to do with the original single-master design," former GFS tech lead Sean Quinlan has said. "A single point of failure may not have been a disaster for batch-oriented applications, but it was certainly unacceptable for latency-sensitive applications, such as video serving."
GFS2 uses not only distributed chunkservers, but distributed masters as well.
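The difference can be sketched in a few lines. With a single master, every metadata lookup funnels through one node; with distributed masters, the file namespace can be partitioned so lookups spread across many. The hash-partitioning scheme and master names below are illustrative assumptions only, not GFS2's actual design, which Google has not disclosed.

```python
import hashlib

# Illustrative assumption: partition the file namespace across several
# masters by hashing the path. This is not GFS2's published design -
# merely one simple way to remove a single metadata bottleneck.

MASTERS = ["master-0", "master-1", "master-2"]

def master_for(path):
    """Pick a master deterministically by hashing the file path."""
    digest = hashlib.sha256(path.encode()).digest()
    return MASTERS[digest[0] % len(MASTERS)]

# Every path maps to exactly one master, so clients still get a single
# authoritative answer, but load spreads across all three nodes and
# losing one no longer stalls every metadata lookup in the cluster.
paths = [f"/gmail/user{i}/inbox" for i in range(1000)]
counts = {m: sum(1 for p in paths if master_for(p) == m) for m in MASTERS}
print(counts)  # roughly even split across the three masters
```

For latency-sensitive services like video serving, the win is that no single node sits on the critical path of every request.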
In recent weeks, Mountain View has also acknowledged the existence of a new back-end technology known as Google Spanner, a means of automatically moving and replicating loads between the company's mega data centers when traffic and hardware issues arise. But a company spokesman tells us this is not part of Caffeine, although he says that "both [are] part of an ongoing company-wide effort to improve our infrastructure."
In a recent presentation (PDF) at a distributed-computing shindig in Montana, Google fellow Jeff Dean seemed to describe Spanner in the present tense. Though he declined to discuss the presentation with The Reg, he indicated that all the information in our recent piece on the mystery technology is correct.
According to Dean, Google intends to scale Spanner to between one million and 10 million servers, encompassing 10 trillion directories and a quintillion bytes of storage. And all this would be spread across "100s to 1000s" of facilities across the globe.
Today, Google operates roughly 40 data centers, and it seems that Caffeine will be deployed one facility at a time. According to Cutts, this involves taking each data center offline and shifting its load elsewhere.
"At any point, we have the ability to take one data center out of the rotation, if we wanted to swap out power components or different hardware - or change the software," he said. "So you can imagine building an index at one of the data centers and then copying that data throughout all the other data centers. If you want to deploy new software, you could take one of the data centers out of the traditional rotation."
So, somewhere in the world, there's a mega data center on the verge of sabbatical. Or perhaps it's already happened. Presumably, Google will tell us at some point. And tell us very little. ®