Intel says data centers much too cold

Frozen assets a waste of cash

Intel wants you to know that data centers are wasting energy - and money - by over-cooling their servers, burdened by warranties that may prevent them from aggressively raising their temperature.

During a wide-ranging discussion on enterprise-level cloud computing last week, Intel's director of platform technology initiatives in the company's server platform group, Dylan Larson, pointed reporters to a recent energy-efficiency study (PDF) conducted by representatives from Intel, IBM, HP, Liebert Precision Cooling (a division of Emerson Network Power), and the Lawrence Berkeley National Lab.

That study included results from a recent Liebert survey of members of the Data Center Users Group (DCUG), which showed that every single respondent is cooling their data center significantly more than needed.

The "than needed" figure is 27°C (80.6°F), as recommended by no less an authority than the leading US heating-and-cooling engineering body, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE).

The ASHRAE recommendations were published last year in the group's "Environmental Guidelines for Datacom Equipment," which raised ASHRAE's 2004 high-end recommendation for inflow temperature from 25°C (77°F) to 27°C (80.6°F).

Of the 98 respondents to the DCUG survey, however, none had a computer-room air handling (CRAH) inflow temperature higher than 74°F (23.3°C), and the majority chilled their air to 70°F (21.1°C) or below.

[Chart: data-center temperature-survey results - cooler heads say that chilling below 81°F wastes money; survey says everyone does it]

This info came up when Intel's Larson was discussing Advanced Cooling Environment (ACE) technology intended to provide a closed-loop monitoring and management system to allow servers to safely rise to that ASHRAE max. Doing so would save a tremendous amount of power.

As he explained it, "The [wasted] power comes from the fans in the CRAC units running overtime to push cool air into the system" - CRAC units being computer-room air-conditioning units, like CRAHs.

"If you can reduce how much the fan has to work by even a small percentage, you get a substantial improvement in power," he said, "and reducing the fan requirement by half reduces power consumption by something like 87 per cent."
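Larson's "something like 87 per cent" lines up with the fan affinity laws, under which fan power scales with the cube of fan speed: halving speed cuts fan power by 1 − 0.5³, or 87.5 per cent. A minimal sketch of that arithmetic (the function names are ours, not Intel's):

```python
# Fan affinity laws: power drawn scales with the cube of fan speed.

def fan_power_fraction(speed_fraction: float) -> float:
    """Fraction of full fan power drawn at a given fraction of full speed."""
    return speed_fraction ** 3

def fan_power_saving_pct(speed_fraction: float) -> float:
    """Per-cent fan power saved by slowing the fan to speed_fraction."""
    return (1.0 - fan_power_fraction(speed_fraction)) * 100.0

print(fan_power_saving_pct(0.5))  # halving fan speed
print(fan_power_saving_pct(0.9))  # even a 10 per cent slowdown
```

The cube law is why a modest reduction in fan duty pays off so disproportionately: slowing the fans by just 10 per cent saves roughly 27 per cent of fan power.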

Chilling the air, of course, also requires a significant power outlay, but getting the servers to tell the CRAC units exactly how much cooling they need and exactly when they need it isn't as straightforward as it might sound.

"This has been a pretty interesting ride," Larson explained. "The CRAC vendors run a totally different protocol than the server vendors, which run IPMI or WS-MAN. The CRAC vendors run Modbus and another protocol that are completely orthogonal."

Translation: Servers and cooling systems don't speak the same language.

However, says Larson, "Working together with that industry to map a communication of those protocols can help us make a pretty dramatic reduction in power consumption by reducing the level of cooling required."
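The mapping Larson describes can be pictured as a small gateway that polls server inlet temperatures on the management side and writes a cooling setpoint to the CRAC side. Everything in the sketch below is a hypothetical illustration, not any vendor's API: the register address, the 18°C base supply temperature, and the setpoint logic are all our assumptions.

```python
# Hypothetical gateway sketch: turn server inlet-temperature readings
# (as an IPMI/WS-MAN poll might report them) into a supply-air setpoint
# written to a CRAC unit's Modbus-style holding register.
# Register address, base setpoint, and all names are illustrative.

ASHRAE_MAX_INLET_C = 27.0  # ASHRAE's recommended inlet maximum

def cooling_setpoint_c(inlet_temps_c: list[float], margin_c: float = 1.0) -> float:
    """Pick a supply-air setpoint that keeps the hottest server
    just under the ASHRAE recommended inlet maximum."""
    hottest = max(inlet_temps_c)
    headroom = ASHRAE_MAX_INLET_C - hottest
    # Raise the supply temperature by any spare headroom, less a safety margin.
    return 18.0 + max(0.0, headroom - margin_c)

class FakeCracRegisters:
    """Stand-in for a Modbus holding-register map (illustrative only)."""
    SETPOINT_REGISTER = 40001  # hypothetical register address

    def __init__(self):
        self.registers: dict[int, int] = {}

    def write(self, register: int, value: int) -> None:
        self.registers[register] = value

# One pass of the gateway loop: poll, compute, write.
crac = FakeCracRegisters()
inlet_readings = [22.5, 24.0, 23.1]  # °C, as if polled from three servers
setpoint = cooling_setpoint_c(inlet_readings)
crac.write(FakeCracRegisters.SETPOINT_REGISTER, int(setpoint * 10))  # tenths of °C
print(setpoint)
```

The point of the closed loop is that the setpoint floats upward whenever the servers report headroom below the ASHRAE ceiling, rather than sitting at a fixed, conservatively cold value.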

Of course, CRAC-unit vendors aren't motivated to step up and say that overly cool data centers are wasting power and money, seeing as how some of that money goes to them for more-powerful cooling systems.

But Larson is optimistic. "Believe it or not, we've been able to work with CRAC-unit vendors like Liebert and Emerson to look at how we can actually manage to get to a point of addressing reductions in power by managing temperature more effectively."

Perhaps CRAC dealers with enlightened self-interest will find increased profits in retrofitting their installed bases with power-management intelligence.

But when asked about a specific data center that has been rumored to be trying some aggressively higher inflow temperatures - up to even 100°F (38°C) - Larson said: "I don't think they're breaking our requirements yet because we have a warranty association with that."

When asked about Intel's policy when it comes to spreading the higher-temperature gospel, Larson said: "I'll be honest. We are pretty conservative about quality at Intel. We do get people who say, 'Hey, give me some relief on my ability to get warranty for your products when I run them really, really hot.' We haven't really done that. But we do know that we can meet the requirements of ASHRAE in this context."

So take a look at your warranties. Although Intel's Larson says that he doesn't see a wave of warranty renegotiation in the near future, he does suggest that ASHRAE's guidelines are sound.

Just don't try to sell him on a 100°F data center. ®
