Give them a cold trouser blast and data centre bosses WILL dial up the juice

Power limits in the server room

If you've ever looked at putting your servers and other infrastructure in a data centre, you'll have come across the power limitation they place upon you: they'll only allow your kit to suck up a certain amount before they cut you off.

Generally, they'll tell you that you can have three or four kilowatts per cabinet, and even if you pay “overage” charges for going higher they'll still be very keen to put a limit on what you use. It's seldom that they let you go above five or six kilowatts.

In some heavy-usage areas, this can simply be down to the power supply into the building. When I used to host equipment in London there were several data centres that simply shut up shop to new customers because the supply into their building was maxed out. Generally speaking, though, the reason is more complex than just the number of electrons the provider can stuff up the cable.

Power supply

Sticking with the power supply for the moment, as it's the relatively easy part of the formula, there's more to consider than the feed into the building from the power company.

First of all, you have the power smoothing and UPS provision: the provider will need at least N+1 redundancy (meaning that they can lose one power unit without service being affected) or, if they're any good, N+2 – so for every kilowatt a client draws, they need to provide considerably more than a kilowatt of backup.

And the next step from the UPS is the generator in the basement: again, that needs to be at least N+1 and – at least in the case of the top-end data centres – has to underpin an availability of 99.995 per cent. And all of this works on the premise that you've got room for the extra plant and can cool it, and so on.

When the power provision required for each new server is considerably more than the actual power that server's likely to draw, of course the data centre manager's going to count your watts carefully. Where I work, in the Channel Islands, the problem is exacerbated by the fact that the electricity supply from the mainland is an ancient, knackered piece of damp string. Thankfully that's not so much of a problem for UK users.
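
To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The N+2 module count, cooling overhead and loss figures are illustrative assumptions rather than anything a real facility has quoted, but they show how quickly the provision mounts up:

# Back-of-the-envelope sketch of what a 4 kW cabinet really costs the provider.
# The redundancy counts and overhead factors are illustrative assumptions,
# not figures from any particular data centre.

def provisioned_kw(cabinet_kw, ups_modules_needed=4, spare_ups_modules=2,
                   cooling_overhead=0.6, distribution_losses=0.1):
    """Estimate the power the provider must stand behind for one cabinet.

    ups_modules_needed / spare_ups_modules: an N+2 UPS arrangement means
    capacity is scaled up by (N + 2) / N.
    cooling_overhead: watts of cooling plant per watt of IT load (assumed).
    distribution_losses: UPS and distribution inefficiency (assumed).
    """
    redundancy_factor = (ups_modules_needed + spare_ups_modules) / ups_modules_needed
    it_plus_losses = cabinet_kw * (1 + distribution_losses)
    cooling = cabinet_kw * cooling_overhead
    return it_plus_losses * redundancy_factor + cooling

print(f"{provisioned_kw(4.0):.1f} kW provisioned for a 4 kW cabinet")
# -> roughly 9 kW: more than double the nominal draw, before the generator
#    capacity that has to sit behind the UPS is even counted.

Run it and a nominal 4kW cabinet comes out at roughly 9kW of provisioned capacity, and that's before the diesel sitting behind the UPS is even counted.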

It's getting hot in here

Power is, however, only half of the problem: the remainder of the discussion relates to heat. As anyone who owns a computer knows, they kick out heat – relatively speaking, lots of it.

While it's true that devices using modern technology (particularly processors) are more heat-efficient than their ancestors, the gains in efficiency of new hardware are largely offset by people's desire to cram more of that technology into a single box.

Furthermore, the move to chassis-based servers crams more wattage into each inch of rack space. So while the data centre manager is certainly concerned with the amount of power you draw from his or her supply (after all, they have to maintain at least N+1 provision right through the multi-level power chain sending amps into your servers), they're also worried about how to deal with the heat coming out at the other end.
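
Practically every watt that goes into a server comes back out as heat, and shifting that heat takes a surprising volume of air. Here's a minimal sketch using the standard sensible-heat relation for air; the 5kW cabinet load and 12°C temperature rise across it are assumptions chosen purely for illustration:

# Rough sketch: how much cold air a cabinet needs just to carry its heat away.
# Uses the standard sensible-heat relation Q = m_dot * cp * dT for air;
# the 5 kW load and 12 degC temperature rise are illustrative assumptions.

AIR_DENSITY = 1.2         # kg/m^3 at typical room conditions
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def airflow_m3_per_hour(heat_watts, delta_t_kelvin):
    """Volume of air per hour needed to absorb heat_watts at a given rise."""
    mass_flow = heat_watts / (AIR_SPECIFIC_HEAT * delta_t_kelvin)  # kg/s
    return (mass_flow / AIR_DENSITY) * 3600                        # m^3/h

print(f"{airflow_m3_per_hour(5000, 12):.0f} m^3 of air per hour")
# -> around 1,240 m^3/h for a single 5 kW cabinet.

That's around 1,200 cubic metres of chilled air per hour for one cabinet, which is why the rest of this piece is about making sure the air actually goes where it's meant to.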

How to keep your cool in a data centre... not as simple as it sounds

Now, there's more to cooling a data centre than blowing some cold air around it. You have to deliver the cold air so that it's drawn in by the cooling intakes of the equipment, and so that the hot efflux from that equipment is fed back into the cooling system to be chilled and recycled.

You know when you have a server with multiple hot-plug drive bays, and the manual tells you to leave blanking units in the bays you're not using? That's so that the air flows over the internal boards in the way the designers intended, maximising the cooling airflow through the device. If you leave the blanking plates off, you'll simply end up with turbulent air burbling around at the front of the server instead of actually flowing over the circuitry.

Well, the same applies to data centre cabinets: if you're not using part of a rack, put blanking plates in the front so that air flows through the cabinet the way it's supposed to – otherwise cold air will simply bypass your equipment through the gaps instead of flowing over it.

And have you ever noticed where the aircon outlets are in a data centre? Yes, they're in the floor, but have you noticed precisely where in the floor? Answer: in the least convenient place, in front of the rack, where you get a chilly draught up your trouser leg when you're stood at the keyboard.

The reason's pretty simple, though: the fans in the servers pull air in through the front grille and throw it out through the back. So the cooling system presents nice cold air to the front of the cabinet, lets the servers add some heat to it, and then recovers it from the back through the inlets (sometimes in the floor, often higher up as hot air rises).

Some data centres go the whole nine yards and use “cold aisle” technology. It's a funky concept, but really all it's doing is tightening control of the airflow. Instead of row upon row of open cabinets, you have pairs of rows enclosed with semi-permanent partitions, with a door at each end, making each pair a self-contained unit.

The fronts of the servers face into the cold aisle, and the backs face to the outside for the warm efflux to be salvaged.

All of which is very well, of course, but then some bright spark says to himself: “If I mount my switches, routers and firewalls in the back of the cabinet, the LAN ports are adjacent to the backs of the servers, making the network plumbing easier” – thereby reversing the flow of air through those devices and chucking the hot stuff out of the front.

Suck it up, spit it out

In short, then, when your data centre provider tells you the power limit available to you, don't have a rant at them and tell them to stop being daft. Just bear in mind that the provision they have to make to accommodate your equipment is considerably greater than the power draw you think you'll be placing on the cabinets.

And try your hardest to suck the cold air in at the front and throw the hot air out of the back, and to use baffles both in your servers and in the cabinets. It'll enable the provider to be more lenient with you over power provision, because you're not making your kit battle the aircon – and proper airflow means a longer life for your equipment too. ®

Dave Cartwright is a senior network and telecoms specialist who has spent 20 years working in academia, defence, publishing and intellectual property. He is the founding and technical editor of Network Week and Techworld and his specialities include design, construction and management of global telecoms networks, infrastructure and software architecture, development and testing, database design, implementation and optimisation. Dave and his family live in St Helier on the island paradise of Jersey.
