Get ready for the coming data centre crunch
Can you go short on a power hungry server?
"If there's going to be a theme of the Press Summit this year," mused one delegate on the flight to Portugal, "then it's going to be power, and heat." He should have been right.
We covered femtocells, 100-gig Ethernet, managed wireless, specialised security-oriented operating software for network switches, media gateways and (briefly) "green computing". The green session, I expected, would reveal the details of some hairy truths.
For example, everybody in the co-lo business in every major metro area knows that the internet is on a cliff-edge because of electricity problems. The costs of providing power for huge centres like Telehouse and Red Bus down in London's Docklands are enormous - but it's not the cost that frightens the planners. The question which has them shaking their heads is: "What will we do when it all hits the ceiling?" And many of them believe it will happen soon. They say, quite simply, that there isn't enough power for the equipment that's already installed; that new equipment will need even more; and that the extra power simply isn't available.
I was talking to one medium-sized ISP about its move to a Maidenhead centre. "What about getting peering links to other internet centres?" I asked. "Don't you have to be there for that?" He shook his head with vigour: no. "Oh, transit - that's not a problem. Two years ago, if you'd asked me, I would have said yes, transit is our main concern. But today, it's power. We can't stay in Docklands. We can't get the power." And it's not a question of "they'll put the price up" or "it's carbon-careless", but "they simply can't get more power into the buildings".
From the point of view of rival co-location centres, power problems in London are probably good news - more refugees fleeing the congestion means more customers. But there's another problem, and that's the spectre of "points of failure". In theory, the internet ignores single failures and routes round them. In reality, people have been cutting corners.
One BT engineer expressed his frustration: "It isn't the company I joined ten years ago. Then, we did things which needed to be done. Today, there's the collapse of a whole raft of ISPs all connected to the internet through a single exchange in Stepney, East London. Thieves stole the switches, and for nearly 24 hours, all those ISPs and their customers were off the Web. It should not have been possible, but it happened. And there are other examples which employees like me can't talk about publicly... but we all know where they are."
His fear, and the fears of others in big networks, amounts to a stark prediction: that if we carry on the way we are going, the system will start fracturing. Byte-outs, which take thousands of internet users offline for days or even weeks at a time, will become more frequent.
Intel recently did a test on power consumption in a big server-switch farm, on the idea that power might be saved on cooling.
The problem with cooling in a big co-lo is that the new generation of hardware runs much faster by dint of using a lot more power. That power ends up as heat, and then the operators need to spend even more power on cooling. The Intel experiment suggested that perhaps we're over-cooling: instead of keeping data centres so cool that humans have to wear sweaters, perhaps we could use ordinary ambient air at ambient temperatures. How about (said the experimenters) taking the cooling system right offline, and only starting to chill the air if it rose above 90 deg F (32 deg C)? Yes, we'd have to run the cooling fans in the racks faster, but that wouldn't be significant.
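The policy the experimenters describe - chillers off by default, cooling kicked in only above a temperature threshold - can be sketched as a few lines of control logic. This is a minimal illustration, not Intel's actual implementation: the function names, the control loop, and the small hysteresis band (added so the chillers don't switch on and off rapidly around the setpoint) are all assumptions for the sake of the example.

```python
SETPOINT_C = 32.0    # the article's 90 deg F (32 deg C) threshold
HYSTERESIS_C = 2.0   # assumed dead band to stop the chillers short-cycling

def chillers_should_run(intake_temp_c: float, currently_running: bool) -> bool:
    """Decide whether the chillers run for this control interval."""
    if currently_running:
        # Once on, keep cooling until the air is comfortably below the setpoint.
        return intake_temp_c > SETPOINT_C - HYSTERESIS_C
    # Otherwise rely on ambient air until the setpoint is crossed.
    return intake_temp_c > SETPOINT_C

# Example: a warm-afternoon spike briefly crosses the threshold.
running = False
for temp in (24.0, 29.5, 32.5, 31.0, 29.0):
    running = chillers_should_run(temp, running)
    print(f"{temp:5.1f} C -> chillers {'on' if running else 'off'}")
```

The point of the sketch is that cooling becomes the exception rather than the baseline: for most of the trace above the chillers never run at all, which is where the claimed power saving comes from.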
The experiment wasn't really necessary, say some. One switch manufacturer confided that "in many of the data centres our customers operate in China and other parts of equatorial Asia, this is already standard practice. Many of them simply couldn't afford to cool their centres down to the temperatures accepted as custom and practice in London or New York, and temperatures are a lot higher in those centres. But the difference between knowing that the computers will work OK at higher temperatures and knowing what the cooling actually costs was important. The Intel experiment saved millions of pounds in power."