Google Research: Three things that MUST BE DONE to save the data center of the future

Think data-center design is tough now? Just you wait

The evil 1 per cent

Imagine, Barroso said, that you're running a straightforward web server that runs quite smoothly, giving results at the millisecond level 99 per cent of the time. But if 1 per cent of the time something causes that server to cough – garbage collection, a system daemon doing management tasks in the background, whatever – it's not a fatal problem, since it only affects 1 per cent of the queries sent to the server.

However, now imagine that you've scaled up and distributed your web-serving workload over many servers, as Google does in its data centers – now the problem compounds rapidly.

"Your query now has to talk to, say, hundreds of these servers," Barroso explained, "and can only respond to the user when you've finished computing through all of them. You can do the math: two-thirds of the time you're actually going to have one of these slow queries" since the possibility of one or more – many more – of those servers coughing during a distributed query is much higher than if one server alone were performing the workload.

"The interesting part about this," he said, "is that nothing changed in the system other than scale. You went from a system that worked reasonably well to a system with unacceptable latency just because of scale, and nothing else."

Google's response to this problem has, in the main, been to try to reduce that 1 per cent of increased-latency queries, and to do so on all the servers sharing the workload. While Google has had some success with this effort, he said, "Since it's an exponential problem, it's ultimately a losing proposition at some scale."

Even if you get that 1 per cent down to, say, 0.001 per cent, the problem still adds up: the more servers each query touches, the more likely it is that at least one of them stalls, and your latency inevitably creeps back up. Not good.
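
Running the same formula with the smaller per-server figure shows why this only postpones the problem (the 0.001 per cent figure is from Barroso's example; the fan-out values are purely illustrative):

```python
def prob_any_slow(p, n):
    return 1 - (1 - p) ** n

# 0.001 per cent of requests slow per server, at increasing fan-out
for n in (100, 1_000, 10_000, 100_000):
    print(n, round(prob_any_slow(0.00001, n), 3))
# 100 -> 0.001, 1000 -> 0.01, 10000 -> 0.095, 100000 -> 0.632
```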

Learning from fault-tolerance

Squashing that 1 per cent down is not enough, Barroso said, and so Google "took inspiration" from fault-tolerant computing. "A fault-tolerant system is actually more reliable in principle than any of its components," he said, noting that what needs to be done is to create similar techniques for latency – in other words, create a large, distributed system that is highly responsive even if its individual components aren't completely free from latency hiccups, coughs, belches, or other time-wasting bodily functions.

One example of a possible lesson from fault tolerance that could be extended to latency tolerance – what Barroso calls tail-tolerance – is the replication in Google's file system. You could, for example, send a request to one replica of the file system, wait until, say, the 95th-percentile response time has elapsed, and if you still haven't received a reply, fire off the same request to another replica holding the same file-system content.

"It works actually incredibly well," he said. "First of all, the maximum amount of extra overhead in your system is, by definition, 5 per cent because you're firing the second request at 95 per cent. But most of all it's incredibly powerful at reducing tail latency."

Barroso pointed his audience to a paper he and co-author Google Fellow Jeffrey Dean published in February 2013, "The Tail at Scale", for more of their thoughts on how to reduce tail latency and build tail-tolerant systems, but said that much more work needs to be done.

"This is not a problem we have solved," he said. "It's a problem we have identified."
