Geeks fight the smelter with embedded processor-based box
The Climate Computer
In 2002, a couple of hardware geeks thrust a fresh, radical idea on the computing community. Chris Hipp, a co-founder of blade maker RLX, and Wu-chun Feng, then a researcher at Los Alamos National Laboratory, concocted a dense, capable cluster with servers based on laptop chips from Transmeta. The cluster failed to outperform comparable Xeon-based systems, but it could run in a desert warehouse with no special cooling and with fewer failures than other Los Alamos boxes protected by very expensive air conditioning systems.
A lot of people scoffed at the Green Destiny system at the time. "Laptop chips, you say? Please. I need a man's machine and a rocket inside of every box."
Flash forward to 2008, and green computing is all the rage. Vendors do all they can to show how concerned they are about "greening" data centers and to flog power-friendly kit. With performance per watt, energy costs and political correctness all rising in importance, the scoffing has stopped.
Sadly, relatively little work was done to further the concepts ushered in with the Green Destiny project. RLX shifted away from Transmeta toward Intel and then exited the hardware business altogether. Blade servers went mainstream, and are now filled with beefy Xeons, Opterons and RISC chips.
So, we're glad to hear that Horst Simon, a prominent computer scientist at Berkeley Lab, has renewed work on slotting low-power chips into supercomputer-class machines.
During a presentation this week at Lawrence Berkeley National Lab, Simon emphasized the rather profound challenges facing data center operators in the coming years.
We're used to thinking of factories as some of the largest energy consumers. You can picture the smokestacks puffing black clouds into the air and fueling the work taking place at car assembly lines or aluminum smelting plants. In the near future, data centers will consume just as much energy simply to process, move and store bits of information.
US companies and organizations spend about $16bn a year powering computers, consuming close to 200TWh (terawatt hours) of energy.
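Those two figures hang together: a quick back-of-envelope check (a sketch for illustration, not from the article) shows they imply an average electricity price of about eight cents per kilowatt-hour, in line with typical US commercial rates of the period.

```python
# Sanity check on the article's figures: $16bn a year for roughly 200TWh.
annual_spend_dollars = 16e9
annual_energy_twh = 200
annual_energy_kwh = annual_energy_twh * 1e9  # 1 TWh = 1e9 kWh

price_per_kwh = annual_spend_dollars / annual_energy_kwh
print(f"Implied average price: ${price_per_kwh:.2f}/kWh")  # → $0.08/kWh
```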
You've all likely encountered similar figures and heard about companies now spending more on managing and cooling data centers than on buying the actual gear to fill them. Even with improvements in the performance per watt of general purpose chips, computer buyers face growing energy consumption challenges.
Similarly, stories about Google, Microsoft, Yahoo! and others building data centers near cheap power have become commonplace. These giants can and must go to great lengths in an effort to afford the number of systems they need.
A utility computing-style model could help offset some of the problems faced by the average computer buyer over the long-term. But we're likely talking about a very lengthy transition where companies move away from managing their own data centers.
In the nearer-term, companies must deal with the power and space issues and could use some help.
Simon and other researchers at Berkeley have partnered with low power chip designer Tensilica and Rambus to create a new class of computer that could show dramatic performance per watt gains and aid end computer buyers.
Looking past even a laptop chip, they're studying how to make a very powerful machine around customized embedded processors.
This so-called Climate Computer could run on myriad multi-core 650MHz chips and consume just a fraction of the power of, say, an Opteron-based cluster or an IBM BlueGene machine, according to Simon.
"We can get the same computer power at one-twentieth of the power consumption," Simon said.
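To see what a one-twentieth power ratio buys, here is a hedged back-of-envelope sketch. The 3MW cluster size and the $0.08/kWh rate are our own illustrative assumptions, not figures from Simon's talk; only the one-twentieth ratio comes from the quote.

```python
# Illustrative math on Simon's "one-twentieth" claim.
# The wattage and electricity price below are assumptions, not from the talk.
conventional_cluster_watts = 3_000_000   # hypothetical Opteron-class cluster: 3 MW
claimed_ratio = 1 / 20                   # "same compute at one-twentieth the power"

climate_computer_watts = conventional_cluster_watts * claimed_ratio

hours_per_year = 24 * 365
kwh_saved = (conventional_cluster_watts - climate_computer_watts) / 1000 * hours_per_year
cost_per_kwh = 0.08                      # assumed average US rate, dollars

print(f"Climate Computer draw: {climate_computer_watts / 1000:.0f} kW")
print(f"Annual savings: ${kwh_saved * cost_per_kwh:,.0f}")
```

At those assumed numbers, a 3MW cluster shrinks to a 150kW machine and saves roughly $2m a year in electricity alone, before counting cooling.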
You can catch a proposed layout for a Climate Computer here.
Simon sees this type of experiment as being necessary for a world in which computer centers will consume between 20MW and 130MW. (All of Berkeley's labs today consume 20MW total.)
The researchers look to work on this project "for about a year or so and then go to the Department of Energy and say, 'We are ready to build the real prototype.'"
The highest-end computer users will balk at any performance trade-offs, but a many-cored system with such low power consumption may prove attractive in three to four years to a large set of less demanding customers, especially when you consider complementary trends.
All of the major chip makers will continue to release processors with more and more cores, forcing software makers to craft better multi-threaded code. Around these general purpose chips we're seeing a rise of accelerators from the graphics and FPGA realms, which also require improved coding techniques.
With tens and hundreds of cores floating about on each chip and accelerators, the processor almost becomes a modern day version of what the transistor was 30 to 40 years ago, Simon noted. As hardware and software vendors embrace this notion, even systems running on very low power chips stand to demonstrate remarkable performance at cranking through software threads.
Large service providers like Google and Microsoft will likely continue with their current plans.
"But that is not the wave of the future," Simon said. "(Those systems) will remain for the big players, but the mainstream will switch directions in the decade."
Utility computing bigots might argue that average customers will tap into centralized systems before something like a solid embedded processor-based machine makes it to market. Such criticism, however, relies on an awful lot of prognostication.
It's refreshing to find computer scientists returning to the extreme low-power idea as these larger shifts in computing take place because the Green Destiny concept once exhibited a great deal of potential.®
The presentation about the Climate Simulator pilot project can be found here in PDF, as can a detailed article from the International Journal of High Performance Computing Applications. Much of the original work was done by John Shalf, Lenny Oliker, and Michael Wehner, the three authors of that paper.