
Get ready for the coming data centre crunch

Can you go short on a power-hungry server?

More to the point, from the perspective of big city co-los, it saved power, meaning that people could continue to run servers and switches when they might otherwise have had the plug pulled.

Logic says that you plan for this. You work out what the biggest demand spike will be and what the hottest day might be, and provide power for that. Then, when people ask "Please put another five servers in for us", you say "No, that would take us over our safety margin for hot days and demand peaks", and turn them down.
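
In code, that gate might look something like the minimal Python sketch below. The supply, safety margin, and hot-day figures are all invented for illustration, not drawn from any real facility:

    # Hypothetical capacity gate for a co-lo hall: refuse new kit that would
    # push worst-case draw past the supply, less a safety margin. Every figure
    # here is illustrative.
    SUPPLY_KW = 2000.0       # total power available to the hall
    SAFETY_MARGIN = 0.15     # headroom kept back for the worst day
    HOT_DAY_FACTOR = 1.25    # extra cooling load when the temperature spikes

    def can_accept(current_draw_kw: float, new_kit_kw: float) -> bool:
        """True only if worst-case draw stays under the safe ceiling."""
        worst_case = (current_draw_kw + new_kit_kw) * HOT_DAY_FACTOR
        ceiling = SUPPLY_KW * (1.0 - SAFETY_MARGIN)
        return worst_case <= ceiling

    # "Please put another five servers in for us" - five 0.5 kW boxes:
    print(can_accept(current_draw_kw=1300.0, new_kit_kw=5 * 0.5))  # True, this time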

Obviously, as temperatures rise, so does the power drain: the cooling has to work that much harder. But the logic that says "always limit your exposure to this risk" isn't one the marketing department wants to hear. "Surely we can take a few more systems in? Do you want our customers to take their business to Maidenhead? Have we in fact had any days when high temperatures and unusually high demand coincided? Aren't you exaggerating the possible risks? You techies..."

London has had an unusually cold summer. Some observers have suggested that if the weather had been like it was in the hottest years, when the region suffered long-term droughts and old people had to be taken to hospital suffering from hyperthermia, we'd already have seen large-scale equipment switch-offs. Others say this is nonsense - scare tactics from equipment vendors. But the careful and experienced are moving.

There's a lot we could do to reduce power drain. For example, Extreme Networks say they have figures showing that most Ethernet ports draw up to 40 watts when powered up - and they stay powered up even when there's no traffic going through them. "Monitoring power to the device means two savings: First, we know what the device is, and how much power it needs, so we don't let Power over Ethernet (PoE) waste energy by over-supplying those devices which are low-power items. And also, we can tell when there's nothing attached to the port, and turn power to it off."
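
The policy Extreme describes is easy enough to sketch. In the toy Python version below, the Port class and its set_power method are invented stand-ins for whatever a real switch exposes over SNMP or its CLI, though the class wattages do follow IEEE 802.3af/at:

    # Toy PoE budgeting: give a device only what its advertised class calls
    # for, and cut power entirely to ports with nothing attached.
    POE_CLASS_WATTS = {0: 15.4, 1: 4.0, 2: 7.0, 3: 15.4, 4: 30.0}

    class Port:
        def __init__(self, name, device_class=None):
            self.name = name
            self.device_class = device_class  # IEEE class the device advertises, or None

        def set_power(self, watts):           # stand-in for the real switch API
            print(f"{self.name}: budget {watts} W")

    def allocate(port):
        if port.device_class is None:
            port.set_power(0.0)               # empty port: switch it off
        else:
            port.set_power(POE_CLASS_WATTS[port.device_class])

    for p in (Port("ge.1.1", device_class=1),  # low-power item such as a phone
              Port("ge.1.2")):                 # nothing attached
        allocate(p)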

It's also been suggested that there are a lot of old, unused servers in data centres. "Nobody knows what they do, and nobody is prepared to say that if nobody knows, maybe they should be turned off," said one centre technician. "Some of them are antiques, generating enormous amounts of heat, which could be easily replaced by one new piece of kit which would do the work of dozens of those old ones, and use half the power of any one of them."

One supplier told me his estimate was that as many as 40 per cent of servers in long-established data centres are unused. Unused, but switched on.
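
A quick back-of-the-envelope shows why that matters. The fleet size and per-server wattage below are assumptions of mine, not figures from the supplier:

    # What 40 per cent idle-but-powered servers costs, on invented numbers.
    servers = 1000
    idle_fraction = 0.40
    watts_each = 300          # assumed draw for an older 1U box

    idle_kw = servers * idle_fraction * watts_each / 1000
    print(f"{idle_kw:.0f} kW drawn by kit nobody uses")            # 120 kW
    print(f"{idle_kw * 24 * 365 / 1000:.0f} MWh wasted per year")  # 1051 MWh

And that's before counting the cooling needed to pump the same energy back out again as heat.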

There are new approaches, too. Clustering, for example. I'm expecting to hear of a startup (now in stealth mode pre-launch) which is building servers from boxes the size of a Rubik's cube, running off a tenth of the power needed to operate a quad-core AMD or Intel box. "The power-MIPS curve can be changed with these things. As the demand rises, instead of the power rising exponentially, you just switch in another micro-server," said a source who knows the product plans.
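
The claimed curve is simple to illustrate. In the sketch below, the wattages and the micro-server's share of a big box's throughput are assumptions built from the source's "tenth of the power" figure, not published specs:

    import math

    # Scale-out power: meet demand by switching in small nodes, so draw
    # rises in modest linear steps. All figures are illustrative.
    BIG_BOX_WATTS = 250.0     # assumed quad-core server
    MICRO_WATTS = 25.0        # a tenth of that, per the source's claim
    MICRO_CAPACITY = 0.2      # assumed fraction of a big box's throughput

    def cluster_power(demand):
        """Watts needed to serve `demand`, measured in big-box units."""
        nodes = math.ceil(demand / MICRO_CAPACITY)
        return nodes * MICRO_WATTS

    for demand in (0.2, 0.5, 1.0):
        print(f"demand {demand}: {cluster_power(demand):.0f} W "
              f"vs {BIG_BOX_WATTS:.0f} W for one big box")

On these numbers the cluster draws half the big box's power at full load, and a tenth of it at low load.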

The question of why people continue to expand their server estates when they must be aware of the problems they cause is an interesting one. "I know a customer who buys hundreds of servers a year, who approached one of the two big chip makers and said: 'Design us a lower-power server' - and was told: 'No, that's not in line with our strategy.'"

Naturally, once the ceiling is breached and angry internet users are being evicted from cyberspace, panic will ensue, and steps will start to be taken to bail out the co-lo centres. It will be rushed, it will cost far more than it should, and it will be impossible to do quickly anyway. As with the credit crunch, the people responsible could have predicted and avoided the problems if they'd started planning five years ago.

So the real question is: Are we taking the problem seriously now? And if not, shouldn't we be? ®
