Google cranks the thermostat
In a story published today at Data Center Knowledge, Google recommends operating data centers at higher temperatures than the norm. "The guidance we give to data center operators is to raise the thermostat," Google energy program manager Erik Teetzel told Data Center Knowledge. "Many data centers operate at 70 [Fahrenheit] degrees or below. We’d recommend looking at going to 80 [Fahrenheit] degrees."
On August 1, the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) raised its recommended data center temperature range to between 68 and 77 degrees Fahrenheit (20 to 25 degrees Celsius). This recommendation is backed by 17 industry players, including IBM, Cisco, and Intel. "Most data centers tend to operate between 68 and 70 degrees [Fahrenheit]," says Fred Stack, an Emerson Network Power executive who heads ASHRAE's data center committee. "But ASHRAE is actively promoting the increase of that number, along with server OEMs."
According to Stack, it's well known that Google operates its data centers at high temperatures. "Google has gone beyond the 77 degree point where most data centers start speeding up their server fans," he says. "There's no question Google operates its servers in warmer environments than the general [data center] population. Google doesn't talk about this but there are enough rumors in the world that I'm quite sure of this..."
"You can get computers to operate in environments that are well above 77 degrees. The military does it all the time."
Sorting It Out
Stack would not be surprised if Google also had a pact with Intel that upped the temperature qualifications of its processors. Processor temperature qualifications deal with the temperature of the chip itself, as opposed to the ambient temperature of the data center. So, whereas the data center might be cooled to 25 degrees Celsius, the chip itself might run at 55 degrees Celsius.
"[Google uses] so many servers, they can command something special for their specific application," Stack says. "I would see no reason that Google couldn't get an Intel or AMD to commit to a special selection of components to meet a higher requirement. The military does it all the time."
In other words, he's speculating that Intel would use a special sort routine that would select chips more qualified than others to deal with high temperatures. "You could come up with a test routine that would test the ability of processors to withstand heat," he says.
AMD tells us that currently, it does not provide such a service to any of its customers. "Right now, our business just isn't set up to do that sort of thing," says Brent Kerby, product marketing manager for AMD's Opteron chips. But the company says that it would be able to do special sorting if a particular "business case" warranted it.
In selling chips to Google, Intel is fending off competition not just from AMD, but also the big-name server manufacturers. As recently as this week, Dell lamented its inability to land Google as a server customer. And privately, the big server OEMs have complained that Intel distributes chips directly to Mountain View. If Intel is providing Google with specialized chip qualifications, the server brigade must wonder why Mountain View gets perks they don't. ®
Update: This story has been updated to amend AMD statements on special processor sorts.
Shooting themselves in the foot
All Intel has to do is provide a spec spread where, towards the upper thresholds for any given clock frequency, those parts carry a higher voltage spec. Voila, a mere 5C higher is attainable even in the worst case. All Google has done is fail to see the science in what it takes to meet their spec and how it would ultimately affect the parts offered under this agreement (if it is true at all; frankly it seems foolish, because the CPU is not the most heat-vulnerable part in a server if your plan is to let the ambient temperature rise).
Unless they're overclocking, it is rather trivial to slap a stock heatsink on and keep things cool enough at 80F ambient. At stock speeds, most of Intel's products would stay cool enough even at 90F ambient, unless these servers were ill-designed with especially bad airflow.
Paris, because even if she doesn't know what the "C" in 5C stands for, she understands pushing the limits.
Frank: There's a difference for the local wildlife. Increasing the ambient temperature in the region by 1C could easily cause the local unique <x> to die out. Yes, a lot of the heat does end up in the sea, but not all within 300yds of the shore.
AC: Total power in = total power out, yes. However, the power shed through "normal means" rather than aircon is dependent on the temperature of the datacentre as much as anything else, so you can use "passive" cooling for a lot more of the load if the place is warmer. To put it bluntly, you may be able to get away with cooling fins and a water supply instead of needing aircon if you can run 20C hotter than ambient.
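The commenter's point can be sketched as back-of-envelope arithmetic: passive heat rejection through an envelope or cooling fins scales roughly with the temperature difference between the room and the outside (Newton's law of cooling, Q = h·A·ΔT). The coefficient and area below are invented illustrative values, not real data-centre figures.

```python
# Rough sketch: passive heat rejection is proportional to the
# room-to-ambient temperature difference (Q = h * A * dT).
# h_w_per_m2k and area_m2 are made-up numbers for illustration only.

def passive_rejection_kw(room_c, ambient_c, h_w_per_m2k=10.0, area_m2=500.0):
    """Heat shed passively through the building envelope, in kW."""
    dt = room_c - ambient_c
    return max(dt, 0.0) * h_w_per_m2k * area_m2 / 1000.0

# Same 15C outside air; a 25C room sheds twice what a 20C room does.
print(passive_rejection_kw(room_c=20.0, ambient_c=15.0))  # 25.0 kW (dT = 5C)
print(passive_rejection_kw(room_c=25.0, ambient_c=15.0))  # 50.0 kW (dT = 10C)
```

Doubling the temperature difference doubles the passively rejected heat, which is why running the room warmer lets "normal means" carry more of the load.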
AC2: Running them hotter decreases the MAXIMUM speed, not the "real" speed. Clock your new CPU to 1GHz and warm it up; it'll stay at 1GHz until it breaks.
This isn't rocket science people! (Despite the icon)
outside air mixing
All else being equal, it wouldn't matter whether you ran the servers at 20C or 25C; you'd still be getting rid of the same amount of heat!
If the building were sealed and perfectly insulated, so that no heat entered or left through its walls, floor and ceiling, then only the air-con removes the waste heat.
However, if the outside air temp is cooler than the inside, you don't need chillers: just suck cold outside air in (filtered to keep the dirt out) and blow the hot air out! You can do this in more northern latitudes where air temps are cooler, and so if Google can run their computer rooms at 25C instead of 20, they can make better use of "economisers".
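The economiser argument above boils down to counting hours: a higher room setpoint means the outside air is "cool enough" for free cooling a larger fraction of the time. A minimal sketch, using invented hourly temperatures rather than any real climate record:

```python
# Sketch of the economiser point: raising the room setpoint from 20C to 25C
# increases the number of hours where outside air alone can do the cooling.
# hourly_outside_c is invented sample data, not a real weather record.

hourly_outside_c = [12, 14, 17, 19, 21, 23, 24, 26, 24, 22, 18, 15]

def free_cooling_hours(setpoint_c, temps):
    """Hours where the outside air is below the room setpoint."""
    return sum(1 for t in temps if t < setpoint_c)

print(free_cooling_hours(20, hourly_outside_c))  # 6 of 12 hours
print(free_cooling_hours(25, hourly_outside_c))  # 11 of 12 hours
```

With these sample temperatures, the warmer 25C setpoint nearly doubles the hours in which the economiser can replace mechanical cooling.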