Google Research: Three things that MUST BE DONE to save the data center of the future

Think data-center design is tough now? Just you wait

The evil 1 per cent

Imagine, Barroso said, that you're running a straightforward web server that runs quite smoothly, giving results at the millisecond level 99 per cent of the time. But if 1 per cent of the time something causes that server to cough – garbage collection, a system daemon doing management tasks in the background, whatever – it's not a fatal problem, since it only affects 1 per cent of the queries sent to the server.

However, now imagine that you've scaled up and distributed your web-serving workload over many servers, as Google does in its data centers – and your problem gets exponentially worse.

"Your query now has to talk to, say, hundreds of these servers," Barroso explained, "and can only respond to the user when you've finished computing through all of them. You can do the math: two-thirds of the time you're actually going to have one of these slow queries" since the possibility of one or more – many more – of those servers coughing during a distributed query is much higher than if one server alone were performing the workload.

"The interesting part about this," he said, "is that nothing changed in the system other than scale. You went from a system that worked reasonably well to a system with unacceptable latency just because of scale, and nothing else."

Google's response to this problem has been, in the main, to try to reduce that 1 per cent of increased-latency queries, and to do so on all the servers sharing the workload. While they've had some success with this effort, he said, "Since it's an exponential problem it's ultimately a losing proposition at some scale."

Even if you get that 1 per cent down to, say, 0.001 per cent, those small per-server odds still add up as you scale, and your tail latency inevitably creeps back up. Not good.
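To put rough numbers on it – and these figures are illustrative, not Barroso's – even a 0.001 per cent per-server hiccup rate stops looking negligible once the fan-out gets big enough:

# Illustrative only: a 0.001 per cent per-server hiccup rate still bites
# once a query has to touch enough servers.
p_slow = 0.00001   # 0.001 per cent (assumed)
for servers in (100, 1_000, 10_000, 100_000):
    p_hit = 1 - (1 - p_slow) ** servers
    print(f"{servers:>7} servers -> {p_hit:.1%} of queries see a slow one")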

Learning from fault tolerance

Squashing that 1 per cent down is not enough, Barroso said, and so Google "took inspiration" from fault-tolerant computing. "A fault-tolerant system is actually more reliable in principle than any of its components," he said, noting that what needs to be done is to create similar techniques for latency – in other words, create a large, distributed system that is highly responsive even if its individual components aren't completely free from latency hiccups, coughs, belches, or other time-wasting bodily functions.

One example of a possible lesson from fault tolerance that could be extended to latency tolerance – what Barroso calls tail-tolerance – is the replication in Google's file system. You could, for example, send a request to one iteration of the file system, wait for, say, 95 per cent of the time that you'd normally expect a reply, and if you haven't received it, fire off the request to a replicated iteration of the same file-system content.

"It works actually incredibly well," he said. "First of all, the maximum amount of extra overhead in your system is, by definition, 5 per cent because you're firing the second request at 95 per cent. But most of all it's incredibly powerful at reducing tail latency."

Barroso pointed his audience to a paper that he and his coauthor, Google Fellow Jeffrey Dean, published in February 2013 for more of their thoughts on how to reduce tail latency and improve tail tolerance, but said that much more work needs to be done.

"This is not a problem we have solved," he said. "It's a problem we have identified."
