Give them a cold trouser blast and data centre bosses WILL dial up the juice

Power limits in the server room

If you've ever looked at putting your servers and other infrastructure in a data centre, you'll have come across the power limitation they place upon you: they'll only allow your kit to suck up a certain amount before they cut you off.

Generally, they'll tell you that you can have three or four kilowatts per cabinet, and even if you pay “overage” charges for going higher they'll still be very keen to put a limit on what you use. It's seldom that they let you go above five or six kilowatts.
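
To see how quickly an ordinary cabinet eats that allowance, here's a rough back-of-the-envelope sketch in Python – every figure in it is an assumption for illustration, not anything a provider has actually quoted:

# Rough sketch: will a proposed cabinet fit inside the provider's power cap?
# All quantities and wattages below are illustrative assumptions.
rack_limit_w = 4000                      # a typical 4 kW per-cabinet allowance
kit = {
    "1U web server":      (8, 350),      # (quantity, estimated draw in watts each)
    "2U database server": (2, 600),
    "top-of-rack switch": (2, 150),
}

total_w = sum(qty * watts for qty, watts in kit.values())
print(f"Estimated draw: {total_w} W against a {rack_limit_w} W allowance")
if total_w > rack_limit_w:
    print(f"Over budget by {total_w - rack_limit_w} W -- expect overage charges, or a polite no")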

In some heavy-usage areas, this can simply be down to the power supply into the building. When I used to host equipment in London there were several data centres that simply shut up shop to new customers because the supply into their building was maxed out. Generally speaking, though, the reason is more complex than just the number of electrons the provider can stuff up the cable.

Power supply

Sticking with the power supply for the moment, as it's the relatively easy part of the formula, there's more to consider than the feed into the building from the power company.

First of all, you have the power smoothing and UPS provision: the provider will need at least N+1 redundancy (meaning that they can lose one power unit without service being affected) or, if they're any good, N+2 – so for every kilowatt a client draws, they need to provide considerably more than a kilowatt of backup.
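
As a sketch of what that means for the provider's shopping list (the module size and client load here are assumed figures, purely for illustration): with a modular UPS, N+1 or N+2 means installing one or two whole modules over and above what the raw load needs.

import math

# Illustrative only: how much UPS capacity an N+1 or N+2 design forces the
# provider to install for a given client load. Both figures are assumptions.
module_kw = 100        # capacity of one UPS module
client_load_kw = 750   # what the clients actually draw

n = math.ceil(client_load_kw / module_kw)    # modules needed just to carry the load
for extra in (1, 2):                         # N+1 and N+2
    installed_kw = (n + extra) * module_kw
    print(f"N+{extra}: {n + extra} modules = {installed_kw} kW installed "
          f"for {client_load_kw} kW of load ({installed_kw / client_load_kw:.0%} of demand)")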

And the next step from the UPS is the generator in the basement: again that needs to be at least N+1 and should be able to keep the facility running – at least in the case of the top-end data centres – for 99.995 per cent of the time. And all of this works on the premise that if you've got room for the kit, you can cool it, and so on.
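
To put that availability figure into context, the arithmetic is simple enough – a quick sketch:

# What 99.995 per cent availability means in plain downtime terms.
availability = 0.99995
minutes_per_year = 365 * 24 * 60                      # 525,600 minutes
downtime_min = (1 - availability) * minutes_per_year
print(f"Permitted downtime: roughly {downtime_min:.0f} minutes per year")  # about 26 minutes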

When the power provision required for each new server is considerably more than the actual power that server's likely to draw, of course the data centre manager's going to count your watts carefully. Where I work, in the Channel Islands, the problem is exacerbated by the fact that the electricity supply from the mainland is an ancient, knackered piece of damp string. Thankfully that's not so much of a problem for UK users.

It's getting hot in here

Power is, however, only half of the problem: the remainder of the discussion relates to heat. As anyone who owns a computer knows, they kick out heat – relatively speaking, lots of it.

While it's true that devices using modern technology (particularly processors) are more heat-efficient than their ancestors, the gains in efficiency of new hardware are largely offset by people's desire to cram more of that technology into a single box.

Furthermore, the move to chassis-based servers crams more wattage into each inch of rack space. So while the data centre manager is certainly concerned with the amount of power you draw from his or her supply (after all, they have to maintain at least N+1 provision all the way down the multi-level power chain sending amps into your servers), they're also worried about how to deal with the heat coming out at the other end.
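
Practically every watt that goes into the cabinet comes back out as heat, so the cooling plant has to be sized against the electrical load. A quick sketch of the conversion, using the standard unit factors and an assumed 4 kW cabinet:

# Practically all of the electrical power drawn by IT kit ends up as heat,
# so a cabinet's power draw is also its cooling requirement.
cabinet_load_kw = 4.0                         # assumed cabinet load

btu_per_hour = cabinet_load_kw * 3412         # 1 kW is roughly 3,412 BTU/hr
tons_of_cooling = btu_per_hour / 12000        # 1 ton of refrigeration = 12,000 BTU/hr
print(f"{cabinet_load_kw} kW of kit needs about {btu_per_hour:,.0f} BTU/hr "
      f"({tons_of_cooling:.2f} tons) of cooling capacity")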

How to keep your cool in a data centre... not as simple as it sounds

Now, there's more to cooling a data centre than blowing some cold air around it. You have to blow the cold air in such a way that it's absorbed by the cooling intakes of the equipment being cooled, and that the hot efflux from that equipment is fed back into the cooling system to be chilled and recycled.

You know when you have a server with multiple hot-plug drive bays, and the manual tells you to leave blanking units in the bays you're not using? That's so that air flows over the internal boards in the way the designers intended, maximising the flow through the device. If you leave the blanking plates off, you'll simply end up with turbulent air burbling around at the front of the server instead of actually flowing over the circuitry.

Well, the same applies to data centre cabinets: if you're not using part of a rack, put blanking plates in the front so that air moves through the cabinet the way it's supposed to, rather than eddying around uselessly.

And have you ever noticed where the aircon outlets are in a data centre? Yes, they're in the floor, but have you noticed precisely where in the floor? Answer: in the least convenient place, in front of the rack, where you get a chilly draught up your trouser leg when you're stood at the keyboard.

The reason's pretty simple, though: the fans in the servers pull air in through the front grille and throw it out through the back. So the cooling system presents nice cold air to the front of the cabinet, lets the servers add some heat to it, and then recovers it from the back through the inlets (sometimes in the floor, often higher up as hot air rises).

Some data centres go the whole nine yards and use “cold aisle” technology. It's a funky concept but really all it's doing is adding to the control of the airflow. Instead of row upon row of cabinets, you instead have pairs of rows enclosed with semi-permanent partitions, with a door on each end, making each pair a self-contained unit.

The fronts of the servers face into the cold aisle, and the backs face to the outside for the warm efflux to be salvaged.

All of which is very well, of course, but then some bright spark says to himself: “If I mount my switches, routers and firewalls in the back of the cabinet, the LAN ports are adjacent to the backs of the servers, making the network plumbing easier” – thereby reversing the flow of air through that kit and chucking the hot stuff out of the front, straight into the cold aisle.

Suck it up, spit it out

In short, then, when your data centre provider tells you the power limit available to you, don't have a rant at them and tell them to stop being daft. Just bear in mind that the provision they have to make to accommodate your equipment is considerably greater than the power draw you think you'll be placing on the cabinets.

And try your hardest to suck the cold air in at the front and throw the hot air out at the back, and to use baffles both in your servers and in the cabinets. It'll enable the provider to be more lenient with you over power provision, because you're not making your kit battle with the aircon – and proper airflow means a longer life for your equipment too. ®

Dave Cartwright is a senior network and telecoms specialist who has spent 20 years working in academia, defence, publishing and intellectual property. He is the founding and technical editor of Network Week and Techworld and his specialities include design, construction and management of global telecoms networks, infrastructure and software architecture, development and testing, database design, implementation and optimisation. Dave and his family live in St Helier on the island paradise of Jersey.
