Google Research: Three things that MUST BE DONE to save the data center of the future

Think data-center design is tough now? Just you wait

'We suck at microseconds'

The third of the three challenges that Barroso identified as hampering the development of highly responsive, massively scaled data centers is perhaps a bit counterintuitive: microsecond computing.

Addressing his audience of ISSCC chip designers, he said "You guys here in this room are the gods of nanosecond computing – or maybe picosecond computing." Over the years, a whole raft of techniques have been developed to deal with latencies in the nanosecond range.

The same is true of latencies at much longer time scales: millisecond latencies. The internet or disk drives, for example, induce latencies at the millisecond level, and so far the industry has been able to deal with these latencies by providing context-switching at the five-to-seven microsecond level. No problem, really.
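
To see why a five-to-seven microsecond context switch works at one scale and not the other, compare it against the wait it is hiding. The back-of-envelope sketch below, in Python, uses illustrative device latencies (assumptions picked for the arithmetic, not figures from Barroso's talk):

    # Back-of-envelope: context-switch cost as a share of the wait it hides.
    # The 6 us switch is the midpoint of the 5-7 us range cited above;
    # the device latencies are illustrative assumptions.
    CONTEXT_SWITCH_US = 6.0

    waits_us = {
        "disk seek (~10 ms)": 10_000.0,
        "internet round trip (~50 ms)": 50_000.0,
        "flash read (~100 us)": 100.0,
        "in-DC round trip (~2 us)": 2.0,
    }

    for device, wait in waits_us.items():
        print(f"{device}: switch is {CONTEXT_SWITCH_US / wait:.1%} of the wait")

    # Millisecond waits dwarf the switch cost; at the microsecond scale,
    # the cure costs more than the disease.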

However, Barroso said, "Here we are today, and I would propose that most of the interesting devices that we deal with in our 'landheld' computers are not in the nanosecond level, they're not in the millisecond level – they're in the microsecond level. And we suck at microseconds."

From his point of view the reason that the industry sucks at handling microsecond latencies is simply because it hasn't been paying attention to them while they've been focused on the nanosecond and millisecond levels of latencies.

For an example of microsecond latency in a mega–data center, he pointed to the data center itself. "Think about it. Two machines communicating in one of these large facilities. If the fiber has to go 200 meters or so, you have a microsecond." Add switching to that, and you have a bit more than a microsecond.
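
The 200-meter figure is simply the speed of light in glass. A quick check, assuming a typical refractive index of about 1.47 for silica fiber (a textbook value, not one given in the talk):

    # One-way propagation delay through data-center fiber.
    # Assumes n ~= 1.47 for silica fiber (an assumption, not a talk figure).
    C_M_PER_S = 299_792_458
    REFRACTIVE_INDEX = 1.47

    def fiber_delay_us(distance_m: float) -> float:
        """One-way delay in microseconds at c/n."""
        return distance_m / (C_M_PER_S / REFRACTIVE_INDEX) * 1e6

    print(f"200 m of fiber: {fiber_delay_us(200):.2f} us one way")
    # ~0.98 us; add a switch hop or two and you are past a microsecond.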

Multiply those microsecond latencies by the enormous amount of communications among machines and switches, and you're talking a large aggregate sum. In regard to flash storage – not to mention the higher-speed, denser non-volatile memory technologies of the future – you're going to see more microsecond latencies that need to be dealt with in the mega–data center.

The hardware-software solution

"Where this breaks down," Barroso said, "is when people today at Google and other companies try to build very efficient, say, messaging systems to deal with microsecond-level latencies in data centers."

The problem today is that when programmers want to send a call to a system one microsecond away and have it respond with data that takes another microsecond to return, they reach for remote procedure call or messaging libraries rather than, say, using a direct RDMA operation in a distributed system.

When using such a library, he said, "Those two microseconds quickly went to almost a hundred microseconds." Admittedly, some of that problem is that software is often unnecessarily bloated, but Barroso says that the main reason is that "we don't have the underlying mechanisms that make it easy for programmers to deal with microsecond-level latencies."
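
What that blow-up looks like is easy to sketch as a latency budget. The component costs below are hypothetical, assumed figures chosen to match the order of magnitude Barroso describes, not measurements from Google's systems:

    # Illustrative latency budget for a "simple" remote read.
    # Every component cost here is an assumed figure for illustration only.
    budget_us = {
        "wire + switching (round trip)": 2.0,
        "serialization / deserialization": 10.0,
        "messaging-library dispatch": 15.0,
        "kernel network stack (both ends)": 30.0,
        "thread wakeup / scheduler delay": 40.0,
    }

    for step, cost in budget_us.items():
        print(f"{step:>35}: {cost:5.1f} us")
    print(f"{'total':>35}: {sum(budget_us.values()):5.1f} us")

    # ~2 us of physics buried under ~95 us of software: the "two
    # microseconds became almost a hundred" effect described above.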

This is a problem that will have to be dealt with both at the hardware and the software levels, he said – and by the industry promoting microsecond-level latencies to a first-order problem when designing systems for the mega–data center.

All three of these challenges are about creating highly scalable data centers that can accomplish the goal of "big data, little time" – but despite the fact that he was speaking at an ISSCC session during which all the other presenters spoke at length about the buzzword du jour, Barroso refused to be drawn into the hype-fest.

"I will not talk about 'big data' per se," he said. "Or to use the Google internal term for it: 'data'." ®
