US boffins use Obama dough to study clouds
The computing kind, not the fluffy kind
The US Department of Energy, which runs the largest supercomputing centers in the country, is using some of the Obama stimulus money to take a gander at how parallel HPC applications might be deployed on cloud-style, virtualized infrastructure instead of on the less malleable parallel supers that the DOE's labs spend big bucks building, housing, and operating.
The project, which is known by the code-name Magellan, is being funded with a $32m splash that comes from the $787bn trough of cash that is the American Recovery and Reinvestment Act of 2009, signed into law by President Obama back in February.
ARRA is being used to prop up lots of IT-related scientific research around the country, notably a $62m DOE project, announced in August, to create 100 Gigabit Ethernet switching gear; this project is being headed up by Lawrence Berkeley National Laboratory, which runs the network linking the big DOE labs. (Sandia, Lawrence Livermore, Lawrence Berkeley, Oak Ridge, Los Alamos, Brookhaven, Argonne, Pacific Northwest, and Ames are the key DOE labs.)
The Magellan project is exploring how cloud tools for virtualizing and provisioning servers might be used by researchers wanting to run their applications on a cloud that is actually a subset of the capacity available through the DOE labs. The idea is that it may be more cost-effective to give people pieces of the DOE supercomputer centers that look and feel like a local HPC cluster than actually have them plunk down $50,000 to buy a baby cluster.
The problem is not the initial hardware and software support, according to Katherine Yelick, director of the National Energy Research Scientific Computing (NERSC) division at Lawrence Berkeley National Laboratory, which is one of the leaders on the Magellan project. "You can buy a small computer cluster for $50,000, but the cost of ownership often exceeds the cost of hardware when you factor in floor space, power demands, and staff support."
This is not the kind of talk that Cray, Silicon Graphics, Penguin Computing, and others who are peddling baby HPC clusters want to hear.
At first, the Lawrence Berkeley and Argonne National Laboratory centers are going to carve out around 100 teraflops of computing capacity and set it up as a cloud, allowing university and DOE researchers to schedule jobs on it. And the servers will be backed up with lots of storage and I/O capacity so large datasets can be thrown against this relatively modest amount of computing capacity. This, says Yelick, is where in-house baby HPC clusters often fall short, frustrating researchers who bought gear expecting better performance than they actually get when they run their workloads.
The exact software technologies that the Magellan project will deploy have not been finalized yet, and it is not clear if and how the underlying servers and storage will be virtualized. Ironically, most of the $32m will be spent on building new clusters based on Intel's Xeon 5500s for the cloud testbeds being installed at Lawrence Berkeley and Argonne. The latter lab will be monkeying around with the open source Eucalyptus framework for creating a clone of Amazon's EC2 compute cloud, and apparently the project also has some money left over to compare the Magellan internal cloud against various cloud services from Amazon, Microsoft, and Google. Some 3,000 researchers at Lawrence Berkeley are going to be given access to the Magellan cloud to kick the tires over the next several years, to see which of their codes work well on the cloud and which do not.
Incidentally, the two cloud setups at Lawrence Berkeley and Argonne will be linked using that 100 GE network also being funded by ARRA. ®