Google Research: Three things that MUST BE DONE to save the data center of the future
Think data-center design is tough now? Just you wait
The evil 1 per cent
Imagine, Barroso said, that you're running a straightforward web server that hums along smoothly, returning results at the millisecond level 99 per cent of the time. But if 1 per cent of the time something causes that server to cough – garbage collection, a system daemon doing management tasks in the background, whatever – it's not a fatal problem, since it only affects 1 per cent of the queries sent to the server.
However, now imagine that you've scaled up and distributed your web-serving workload over many servers, as Google does in its data centers – your problem compounds exponentially.
"Your query now has to talk to, say, hundreds of these servers," Barroso explained, "and can only respond to the user when you've finished computing through all of them. You can do the math: two-thirds of the time you're actually going to have one of these slow queries" since the probability that one or more – many more – of those servers coughs during a distributed query is much higher than if a single server were performing the workload.
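The math Barroso invites the reader to do can be sketched in a few lines. Assuming each server coughs independently 1 per cent of the time, the chance that a query fanned out to 100 servers touches at least one slow server is:

```python
# Probability that a fan-out query hits at least one slow server,
# when each of n servers independently responds slowly with probability p.
def p_any_slow(n, p=0.01):
    return 1 - (1 - p) ** n

print(round(p_any_slow(100), 2))  # 0.63 – roughly two-thirds, as Barroso says
```

The 100-server fan-out and the independence assumption are illustrative, not figures from the talk, but they reproduce the "two-thirds" number in the quote.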
"The interesting part about this," he said, "is that nothing changed in the system other than scale. You went from a system that worked reasonably well to a system with unacceptable latency just because of scale, and nothing else."
Google's response to this problem has, in the main, been to try to reduce that 1 per cent of increased-latency queries, and to do so on all the servers sharing the workload. While they've had some success with this effort, he said, "Since it's an exponential problem it's ultimately a losing proposition at some scale."
Even if you get that 1 per cent down to, say, 0.001 per cent, the more you scale up, the more those small per-server probabilities add up, and your latency inevitably increases. Not good.
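The same back-of-envelope arithmetic shows why squashing the slow fraction only postpones the problem. With a hypothetical fan-out of 10,000 servers per query (the figure is ours, for illustration), even a 0.001 per cent per-server slow rate still bites:

```python
# Even a 0.001 per cent per-server slow rate adds up at scale:
# with a hypothetical fan-out of 10,000 servers per query,
# nearly 1 in 10 queries still touches at least one slow server.
p = 0.00001   # 0.001 per cent
n = 10_000    # hypothetical fan-out
print(round(1 - (1 - p) ** n, 3))  # 0.095
```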
Learning from fault-tolerance
Squashing that 1 per cent down is not enough, Barroso said, and so Google "took inspiration" from fault-tolerant computing. "A fault-tolerant system is actually more reliable in principle than any of its components," he said, noting that what needs to be done is to create similar techniques for latency – in other words, create a large, distributed system that is highly responsive even if its individual components aren't completely free from latency hiccups, coughs, belches, or other time-wasting bodily functions.
One example of a possible lesson from fault tolerance that could be extended to latency tolerance – what Barroso calls tail-tolerance – is the replication in Google's file system. You could, for example, send a request to one replica of the file system, wait for, say, the 95th percentile of the time in which you'd normally expect a reply, and if you haven't received it, fire off the same request to another replica holding the same file-system content.
"It works actually incredibly well," he said. "First of all, the maximum amount of extra overhead in your system is, by definition, 5 per cent because you're firing the second request at 95 per cent. But most of all it's incredibly powerful at reducing tail latency."
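The hedged-request idea Barroso describes can be sketched as follows. Everything here – the simulated server timings, the 1 per cent cough rate, and the deadline value standing in for the 95th-percentile latency – is our own illustrative assumption, not Google's implementation:

```python
import concurrent.futures
import random
import time

def serve(replica_id, query):
    # Simulated replica: usually answers fast, occasionally "coughs"
    # (hypothetical timings chosen for illustration).
    time.sleep(2.0 if random.random() < 0.01 else 0.01)
    return f"result({query}) from replica {replica_id}"

def hedged_request(query, deadline=0.05):
    # Send the request to one replica; if no reply arrives within
    # `deadline` (standing in for the 95th-percentile latency), fire
    # the same request at a second replica and take whichever answer
    # comes back first.
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        first = pool.submit(serve, 0, query)
        done, _ = concurrent.futures.wait([first], timeout=deadline)
        if done:
            return first.result()
        backup = pool.submit(serve, 1, query)
        done, _ = concurrent.futures.wait(
            [first, backup],
            return_when=concurrent.futures.FIRST_COMPLETED)
        return done.pop().result()

print(hedged_request("q1"))
```

As Barroso notes, the extra load is bounded by construction: only the slowest ~5 per cent of requests ever trigger the second copy. A production system would also cancel the straggling request rather than, as this sketch does, simply letting it finish in the background.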
Barroso pointed his audience to a paper he and coauthor Google Fellow Jeffrey Dean published in February 2013 for more of their thoughts on how to reduce tail latency and build tail-tolerant systems, but said that much more work needs to be done.
"This is not a problem we have solved," he said. "It's a problem we have identified."