Take the heat from data centres’ PUE pitch
It’s hard to match new data centres' efficiency, but you can improve your own
Data centre openings have become a dime a dozen of late, nearly always featuring (here in Australia at least) a suit from the operator talking up the new facility’s power usage effectiveness (PUE) rating as a compelling reason to move your kit within its walls.
PUE measures the amount of energy that goes into the building and divides it by the amount consumed by actual working kit – servers, SANs and other computing devices. If one incoming kilowatt leaves 0.8kW to power kit, PUE will be 1.25. The lower the PUE, the better, as a low number means the data centre’s energy overheads are low and the prices you’ll be charged to lodge kit within its walls should be commensurately lower.
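For readers who like to see the arithmetic written down, here is a back-of-the-envelope sketch in Python. It is purely illustrative: the function name and figures are ours, not any operator's numbers.

    # Illustrative only: PUE is total facility power divided by IT equipment power
    def pue(total_facility_kw, it_equipment_kw):
        """Return power usage effectiveness from two power readings in kW."""
        if it_equipment_kw <= 0:
            raise ValueError("IT load must be positive")
        return total_facility_kw / it_equipment_kw

    print(pue(1.0, 0.8))  # 1.25, matching the example above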
The best greenfield data centres can achieve PUEs of around 1.2, and when they do their operators aren’t shy about letting us know. The recently opened HP Aurora and Macquarie Telecom data centres, for example, both made their 1.3 PUE score a centrepiece of their launch events, and mentioned the rating as a reason to abandon on-premises kit and instead move it into their hallowed data halls.
That’s just the kind of argument one’s CEO is likely to read in an airline magazine, before asking some not-especially-well-informed questions about Reg readers' data centres or server rooms and what they cost to operate.
Leaving aside the many reasons it might be impractical to send all your kit outside your office, the first point with which to address questions about PUE is that new data centres have the benefit of being entirely new. That does confer some advantages, given that newer technology and designs nearly always improve on their predecessors.
A second point is that just because a data centre offers a low PUE doesn’t mean it will be cheaper to run your computers within its walls.
“PUE is just looking at power in and power out, but does not look at the efficiency of computing,” says Per Grandjean-Thomsen, Engineering Manager, UPS, at Emerson Network Power. “A 1kW server delivering 1,000 IOPS does not compare to a 1kW server delivering 100 IOPS.”
That makes computers that crunch more data with the same, or less, electricity consumption a fine way to reduce the cost of computing on your premises. Or in a third-party facility.
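To put a rough number on “efficiency of computing”, one crude yardstick is work done per watt. The sketch below contrasts two hypothetical 1kW servers; the names and figures are invented for the sake of the example.

    # Invented figures: two servers with identical draw but very different throughput
    servers = {
        "old_box": {"power_w": 1000, "iops": 100},
        "new_box": {"power_w": 1000, "iops": 1000},
    }

    for name, spec in servers.items():
        print(f"{name}: {spec['iops'] / spec['power_w']:.2f} IOPS per watt")
    # PUE sees the two boxes as identical; one nevertheless does ten times the work per watt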
“It all starts with new IT equipment,” says Michael Mallia, Eaton Industries marketing manager for power quality. “The latest hardware is always more efficient.”
If the boss wants lower electricity bills, you can therefore start to think about new silicon. Virtualising those new, shiny servers is another piece of low-hanging fruit for improving on-premises efficiency, given the likelihood it will allow you to run fewer servers.
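How much virtualisation might save depends entirely on your estate, but a back-of-the-envelope sum shows the shape of the argument. Every figure below is assumed purely for illustration, not measured from anyone's server room.

    # Back-of-the-envelope sums with invented numbers, not a benchmark
    physical_servers = 20
    avg_draw_w = 350              # assumed average draw of a lightly loaded physical box
    hosts_after = 4
    host_draw_w = 600             # assumed draw of a bigger, busier virtualisation host

    before_kw = physical_servers * avg_draw_w / 1000
    after_kw = hosts_after * host_draw_w / 1000
    print(f"Before: {before_kw:.1f} kW, after: {after_kw:.1f} kW, saving {before_kw - after_kw:.1f} kW")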
A look at your UPS is also worthwhile: if yours is set up to supply more power than you really need, it will be consuming more electricity than it has to. A smaller, or newer and more efficient, model might just pay for itself, after careful consideration of the many factors contributing to your need for uptime.
The next thing to do to make your server room less thirsty for electrons is tidy it up.
“I've seen nests of cables that block exhaust heat,” says Grandjean-Thomsen. “That means the server temperature goes up, so the fans have to work harder.” Harder-working fans mean more heat. And more heat means your air conditioning needs to work harder, which means more electricity consumption.
Tidying up cabling and other impediments to airflow can therefore help servers to run cooler.
Looking at absences can also help, as there's no point in letting air cooled at considerable expense waft into spaces where it won't be useful. “Too often I walk around data centres and racks are not fully populated, with empty spaces not covered in blanking panel,” says Schneider IT's vice president for APAC, Paul Tyrer. “Plugging those gaps with blanking panels - a cheap plastic strip that goes across the front of a rack - stops cold air going into those spaces.”
Eaton's Mallia recommends a similar tactic, namely wedging pieces of foam into gaps between racks to stop cold air going to waste in those spaces.
Another useful tactic for reducing electricity consumption is running the data centre at a higher temperature. That may sound like madness, but Mallia says many data centres and server rooms are set to run at a temperature below the level at which the equipment they contain will happily operate. “Making small adjustments of even one or two degrees to your desired temperature takes huge load away from cooling systems,” he says.
Once you tick off the low-hanging fruit of efficient kit, tidiness, virtualisation and setting a realistic temperature, it's time to think about airflow.
Servers and other appliances generally push hot air out their rears, so it is important to make sure that hot air doesn't travel in the direction of something you want to keep cool. The most common response to this issue is to run a “hot aisle/cold aisle” regime whereby you make sure hot air travels in one direction only. That means placing racks so that their rears face one another, with the result being a nastily-warm aisle between the two.
Such an arrangement means all the hot air ends up in one place, from which it can be whisked away and treated, rather than spreading around the room.
“You are not optimising cooling if you mix hot and cold air,” says Emerson's Grandjean-Thomsen.
If your data centre or server room isn't set up in this way, the bad news is that you'll almost certainly need to stop operations in order to re-rack kit in a more cooling-friendly way. If that's not an option, the likes of Schneider IT offer hot aisle containment systems, drop-in pods that isolate hot aisles and bring cooling to where it can be most impactful.
Beyond these tactics lies a raft of techniques that fall under the label of 'data centre infrastructure management' (DCIM). Best suited to larger data centres because of its reliance on fine-grained monitoring of a facility, DCIM requires a dedicated application (which means an additional server to power and cool) but can inform power management and cooling practices at a very granular level. For example, DCIM makes it possible to identify servers that consistently run hot because they are being worked hard, or even to see which sockets are sucking the most energy. With that knowledge in hand, you can then distribute workloads among different boxen to prevent hot spots from forming.
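In very rough outline, the sort of analysis DCIM enables might look like the sketch below. It is a simplified illustration with invented readings and thresholds, not a rendering of any vendor's product.

    # Illustrative sketch only, not any vendor's DCIM tool; readings and threshold are invented
    from statistics import mean

    readings = {                  # per-rack samples of (temperature in C, power in kW)
        "rack-01": [(24, 3.1), (25, 3.3), (24, 3.2)],
        "rack-02": [(31, 5.8), (33, 6.1), (32, 6.0)],
    }

    TEMP_LIMIT_C = 30             # assumed threshold for flagging a 'hot' rack

    for rack, samples in readings.items():
        avg_temp = mean(t for t, _ in samples)
        avg_kw = mean(p for _, p in samples)
        if avg_temp > TEMP_LIMIT_C:
            print(f"{rack} is running hot: {avg_temp:.1f} C at {avg_kw:.1f} kW, a candidate to shed workload")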
It's also possible to fight back against power bills by making them someone else's problem. Electricity procurement is, according to Schneider IT's Tyrer, a fiendishly tricky business. Schneider's Australian arm has therefore acquired a specialist consultancy to crunch the numbers offered by electricity companies, and will happily offer you its services. Perhaps if your CIO asks you about PUE, pointing him or her towards such a consultancy in your part of the world will deflect the matter and demonstrate your own efficiency at the same time.