Data center efficiency - the good, the bad and the way too hot
Avoiding crisis in the rain-forest
Ah, data center efficiency. The big vendors have embraced this topic like an admin embraces a bag of Doritos.
Luckily, we're here to separate fact from fiction. In Episode 5 of Semi-Coherent Computing, Chris Hipp and I interview Rumsey Engineers founder Peter Rumsey. The folks at Rumsey Engineers know their stuff, having built data centers for the likes of Bank of America, Lawrence Berkeley National Labs and the San Diego Supercomputer Center at the University of California, San Diego.
Peter and Chris have been immersed in green computing, power consumption issues and novel cooling techniques for years. In this show, they share a bit of wisdom and set the record straight about which vendors, energy concerns and standards bodies have their acts together.
This is a must-listen episode for anyone in the data center realm concerned with power consumption or infrastructure design. So go ahead and tune in to Episode 5, code-named the Gelsinger Co-Efficient.
The curious will find Rumsey Engineers here.
As always, send any feedback to hardware @ theregister.com. Enjoy. ®
Interesting. For more info, go here
Good overview of the datacentre cooling issues we are facing.
The Datacentre Specialist Group of the British Computer Society has already been working on this issue, and has come up with some very interesting stuff here: http://dcsg.bcs.org/, particularly on energy efficiency.
Simple: use less power. Prevention, not cure.
Why do we make this so complicated, spending money on ever more energy-hungry cooling systems? Fix it at the source: use lower-power servers that are as fast as or faster than existing ones. Also, delete data on disks, or archive anything you do not access daily onto tape (a sketch of that idea follows below). All discussed here: http://blogs.sun.com/ValdisFilks/category/Environment
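To make the "archive what you don't access" half concrete, here is a minimal Python sketch that walks a directory tree and lists files untouched for a while, i.e. candidates for moving to tape. The /data path and 90-day cutoff are illustrative assumptions, and it assumes the filesystem actually records access times:

```python
# List files whose last access time is older than a cutoff -
# candidates for archiving to tape. Path and threshold are
# illustrative assumptions, not recommendations.

import os
import time

CUTOFF_DAYS = 90
ROOT = "/data"  # hypothetical mount point

cutoff = time.time() - CUTOFF_DAYS * 24 * 3600
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if os.stat(path).st_atime < cutoff:
                print(path)  # archive candidate
        except OSError:
            pass  # file vanished or unreadable; skip it
```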
In broadcasting, the switch from air cooling to liquid cooling has already been made, and the difference is amazing. Last year I visited a transmitter site that had turned its old equipment off and the new equipment on.
From their perspective the main advantages are:
1. It's a _lot_ quieter.
2. Equipment lasts way longer as it's way cooler.
3. It takes less space.
However, they also had some problems. Rohde & Schwarz, their equipment maker, hadn't done cooling before. So the amplifier in the transmitter has a lot of transistors bolted onto a copper block. That block has holes inside which are connected by small copper loops on the sides.
There's a valve that lets them shut off the supply of liquid, so when they pull the amplifier out of the rack it is also isolated. However, when they do that, the temperature rises and eventually the amplifier turns itself off. The warming, trapped coolant raises the pressure inside the copper block, effectively loosening the loops. Doing this a couple of dozen times gets them so loose that they actually fall out. :(
So when you buy a liquid cooling system, make sure it has something to compensate for pressure fluctuations.
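For a sense of why that matters: in a sealed, rigid loop, even a small temperature rise produces a large pressure rise. Here is a back-of-the-envelope sketch in Python, assuming water coolant and textbook room-temperature property values:

```python
# Pressure rise in a sealed, rigid liquid-cooling loop when the
# coolant warms with no expansion vessel to absorb it.
# Assumes water at ~20 C; property values are textbook approximations.

ALPHA_V = 2.1e-4   # volumetric thermal expansion coefficient, 1/K
KAPPA_T = 4.6e-10  # isothermal compressibility, 1/Pa

def pressure_rise_bar(delta_t_kelvin: float) -> float:
    """Approximate pressure rise (bar) for a given temperature rise:
    dP = (alpha_V / kappa_T) * dT, then convert Pa to bar."""
    return (ALPHA_V / KAPPA_T) * delta_t_kelvin / 1e5

for dt in (5, 10, 20):
    print(f"+{dt} K -> ~{pressure_rise_bar(dt):.0f} bar")
# +5 K  -> ~23 bar
# +10 K -> ~46 bar
# +20 K -> ~91 bar
```

At roughly 4.5 bar per kelvin, a loop isolated while still dissipating heat can reach fitting-deforming pressures well before the amplifier shuts itself down, which is exactly why closed loops normally include an expansion vessel or similar compensator.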
My 200 MIPS Acorn used to have a 1W StrongARM - no fan required. Good enough for web serving.
Now that we have modern dual-core CPUs, we pay more for electricity than for bandwidth.
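Rough numbers back this up. A quick sketch of the annual electricity bill for a box running 24/7; the tariff, the PUE overhead for cooling, and the 250 W server draw are assumptions for illustration:

```python
# Rough annual electricity cost for a server, to compare against a
# bandwidth bill. Tariff, PUE, and wattages are assumed figures.

def annual_cost_usd(draw_watts, usd_per_kwh=0.10, pue=2.0):
    """Energy cost for one year of 24/7 operation, with facility
    overhead (cooling etc.) captured by the PUE multiplier."""
    kwh = draw_watts / 1000 * 24 * 365 * pue
    return kwh * usd_per_kwh

print(f"1 W StrongARM:          ${annual_cost_usd(1):.2f}/yr")
print(f"250 W dual-core server: ${annual_cost_usd(250):.2f}/yr")
# ~ $1.75/yr vs ~ $438/yr
```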
Shouldn't Friday be capitalised?