Get ready for the coming data centre crunch

Can you go short on a power-hungry server?

"If there's going to be a theme of the Press Summit this year," mused one delegate on the flight to Portugal, "then it's going to be power, and heat." He should have been right.

We covered femtocells, 100-gig Ethernet, managed wireless, specialised security-oriented operating software for network switches, media gateways and (briefly) "green computing". I expected the green session to reveal some hairy truths.

For example, everybody in the co-lo business in every major metro area knows that the internet is on a cliff-edge because of electricity problems. The cost of powering huge centres like Telehouse and Red Bus down in London's Docklands is enormous - but it's not the cost that frightens the planners. The question which has them shaking their heads is: "What will we do when it all hits the ceiling?" And many of them believe it will happen soon. They say, quite simply, that there isn't enough power for the equipment that's already installed; that new equipment will need even more power; and that it's simply not available.

I was talking to one medium-sized ISP about their move to a Maidenhead centre. "What about getting peering links to other internet centres?" I asked him. "Don't you have to be there for that?" He shook his head with vigour: No. "Oh, transit - that's not a problem. Two years ago, if you'd asked me I would have said yes, transit is our main concern, but today, it's power. We can't stay in Docklands. We can't get the power." And it's not a question of "they'll put the price up" or "it's carbon-careless" but "they simply can't get more power into the buildings".

From the point of view of rival co-location centres, power problems in London are probably good news - more refugees fleeing the congestion means more customers. But there's another problem, and that's the spectre of single points of failure. In theory, the internet ignores single failures and routes around them. In reality, people have been cutting corners.

One BT engineer expressed his frustration: "It isn't the company I joined ten years ago. Then, we did things which needed to be done. Today, there's the collapse of a whole raft of ISPs all connected to the internet through a single exchange in Stepney, East London. Thieves stole the switches, and for nearly 24 hours, all those ISPs and their customers were off the Web. It should not have been possible, but it happened. And there are other examples which employees like me can't talk about publicly... but we all know where they are."

His fear, and the fears of others in big networks, amounts to a stark prediction: that if we carry on the way we are going, the system will start fracturing. Byte-outs which take thousands of internet users offline for days or even weeks at a time will become more frequent.

Intel recently ran a test on power consumption in a big server-switch farm, to see whether power could be saved on cooling.

The problem with cooling in a big co-lo is that the new generation of hardware runs much faster by dint of using a lot more power. It also generates a lot more heat, so the operators have to spend even more power on cooling. The Intel experiment suggested that perhaps we're over-cooling - that instead of keeping data centres so cool that humans have to wear sweaters, we could use ordinary ambient air at ambient temperatures. How about (said the experimenters) taking the cooling system right offline, and only starting to chill the air if it rose above 90 deg F (32 deg C)? Yes, we'd have to run the cooling fans in the racks faster, but that wouldn't be significant.
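The idea can be sketched as a simple control loop. This is purely illustrative - the 32 deg C set point comes from the experiment described above, but the hysteresis band, fan-speed figures and function names are invented for the sake of the example, not taken from Intel's setup:

```python
# Hypothetical free-air ("economiser") cooling controller, sketching the
# approach in the Intel experiment: chillers stay off unless the ambient
# air gets too hot, and rack fans work harder to compensate.

CHILLER_ON_C = 32.0   # start chilling above this ambient temperature (90 deg F)
CHILLER_OFF_C = 30.0  # hysteresis: stop chilling once back below this (assumed)

def control_step(ambient_c: float, chiller_on: bool) -> tuple[bool, int]:
    """Return (chiller_on, rack_fan_percent) for one control tick."""
    if ambient_c > CHILLER_ON_C:
        chiller_on = True
    elif ambient_c < CHILLER_OFF_C:
        chiller_on = False
    if chiller_on:
        fan_percent = 100
    else:
        # On free air, ramp the rack fans up as ambient climbs
        # towards the set point (the ramp curve here is invented).
        fan_percent = min(100, 40 + int(max(0.0, ambient_c - 18.0) * 4))
    return chiller_on, fan_percent
```

The hysteresis band is the important design choice: without it, a centre hovering around the threshold would cycle its chillers on and off continuously, which wastes power and wears out the plant.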

The experiment wasn't really necessary, say some. One switch manufacturer confided that "in many of the data centres our customers operate in China and other parts of equatorial Asia, this is already standard practice. Many of them simply couldn't afford to cool their climate down to the accepted custom and practice seen in London or New York, and temperatures are a lot higher in those centres. But there's a difference between knowing the computers will work OK at higher temperatures and knowing what the cooling actually costs - and quantifying that was important. The Intel experiment saved millions of pounds in power."
