
Google Caffeine: What it really is

Wake up and smell the file system


Today, Caffeine. Tomorrow, The Empire

Google's philosophy is to build a single distributed architecture that treats its vast network of data centers as a single virtual machine.

"[Data centers] are just atoms," Google senior manager of engineering and architecture Vijay Gill said recently. "Any idiot can build atoms together and then create this vast infrastructure. The question is: How do you actually get the applications to use the infrastructure? How do you distribute it? How do you optimize it? That's the hard part. To do that you require an insane amount of force of will...

"We have a set of primitives, if you would, that takes those collections of atoms - those data centers, those networks - that we've built, and then they abstract that entire infrastructure out as a set of services - some of the public ones are GFS obviously, BigTable, MapReduce."

Caffeine is about the search index. But GFS2 is designed specifically for applications like Gmail and YouTube, applications that - unlike an indexing system - are served up directly to the end user. Such apps require ultra-low latency, and that's not something the original GFS was designed for.

With GFS, a master node oversees data spread across a series of distributed chunkservers. And for apps that require low latency, that lone master - a single point of failure - is a problem.
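To see why, picture the lookup path. The sketch below (plain Python, invented names, nothing to do with real GFS code) shows how every read has to route through the one master for chunk locations - so when that master stalls, everything stalls with it.

# A toy sketch of the single-master pattern described above - not GFS code.
# Every client must ask the one master where a chunk lives, so if the
# master is down (or slow), every read stalls with it.
class SingleMaster:
    def __init__(self):
        # filename -> list of chunkserver addresses (hypothetical layout)
        self.chunk_locations = {"/videos/cat.webm": ["cs-17", "cs-42", "cs-99"]}
        self.alive = True

    def locate(self, filename):
        if not self.alive:
            raise RuntimeError("master down: no reads or writes can be routed")
        return self.chunk_locations[filename]

master = SingleMaster()
print(master.locate("/videos/cat.webm"))  # fine while the master is up
master.alive = False
# master.locate("/videos/cat.webm")       # now every request fails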

"One GFS shortcoming that this immediately exposed had to do with the original single-master design," former GFS tech lead Sean Quinlan has said. "A single point of failure may not have been a disaster for batch-oriented applications, but it was certainly unacceptable for latency-sensitive applications, such as video serving."

GFS2 uses not only distributed slaves, but distributed masters as well.
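Google hasn't published how those distributed masters actually work, but one plausible way to picture the idea is metadata sharded across several masters - say, by hashing the file path - so that losing one master only takes out the slice of the namespace it owns. The Python below is purely a sketch of that notion, not a description of GFS2.

# A sketch of one plausible multi-master scheme: shard file metadata across
# several masters by hashing the path. Google has not published GFS2's
# actual design, so treat this purely as an illustration of the idea.
import hashlib

class MasterShard:
    def __init__(self, name):
        self.name = name
        self.chunk_locations = {}   # filename -> chunkserver list

class MultiMaster:
    def __init__(self, shard_count=3):
        self.shards = [MasterShard(f"master-{i}") for i in range(shard_count)]

    def _shard_for(self, filename):
        h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
        return self.shards[h % len(self.shards)]

    def record(self, filename, servers):
        self._shard_for(filename).chunk_locations[filename] = servers

    def locate(self, filename):
        # Losing one master shard now affects only the files it owns,
        # not the whole namespace.
        return self._shard_for(filename).chunk_locations[filename]

cluster = MultiMaster()
cluster.record("/mail/inbox-0001", ["cs-3", "cs-8"])
print(cluster.locate("/mail/inbox-0001"))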

So, today, Caffeine - tomorrow, everything else. Cutts confirms that Caffeine is running in a single Google data center - and that would seem to imply that GFS2 has only been deployed in that one facility. Reg readers have marveled at the scope of Google's pending upgrade, with one commenter hoping that Google has equipped its engineers with "massively reinforced underwear."

But Cutts downplays the risks and hassle, saying the migration is a matter of taking one data center offline at a time. "At any point, we have the ability to take one data center out of the rotation, if we wanted to swap out power components or different hardware - or change the software," he says. "So you can imagine building an index at one of the data centers and then copying that data throughout all the other data centers.

"If you want to deploy new software, you could take one of the data centers out of the traditional rotation. And you can send any degree of traffic to it."

Vijay Gill has even hinted that Google has developed some sort of magical software layer that can automatically migrate loads in and out of data centers in near real-time. But when asked about this - with a Google PR man listening on the line - Cutts gave a very Googly response: "I don't believe we have published any papers regarding that." The company likes being coy.

In similar fashion, Cutts won't say all that much about the tools rolled into Caffeine, which is publicly available here (except when it's not). But he leaves no doubt that this, well, semi-secret project isn't just a search upgrade. ®

