US boffins use Obama dough to study clouds

The computing kind, not the fluffy kind

The US Department of Energy, which runs the largest supercomputing centers in the country, is using some of the Obama stimulus money to take a gander at how parallel HPC applications might be deployed on cloud-style, virtualized infrastructure instead of on the less malleable parallel supers that the DOE's labs spend big bucks building, housing, and operating.

The project, which is known by the code-name Magellan, is being funded with a $32m splash that comes from the $787bn trough of cash that is the American Recovery and Reinvestment Act of 2009, signed into law by President Obama back in February.

ARRA is being used to prop up lots of IT-related scientific research around the country, notably a $62m project being funded by DOE to create 100 Gigabit Ethernet switching gear that was announced in August; this project is being headed up by Lawrence Berkeley National Laboratory, which runs the network linking the big DOE labs. (Sandia, Lawrence Livermore, Lawrence Berkeley, Oak Ridge, Los Alamos, Brookhaven, Argonne, Pacific Northwest, and Ames are the key DOE labs.)

The Magellan project is exploring how cloud tools for virtualizing and provisioning servers might be used by researchers wanting to run their applications on a cloud that is actually a subset of the capacity available through the DOE labs. The idea is that it may be more cost-effective to give people pieces of the DOE supercomputer centers that look and feel like a local HPC cluster than actually have them plunk down $50,000 to buy a baby cluster.

The problem is not the initial hardware and software support, according to Katherine Yelick, director of the National Energy Research Scientific Computing (NERSC) division at Lawrence Berkeley National Laboratory, which is one of the leaders on the Magellan project. "You can buy a small computer cluster for $50,000, but the cost of ownership often exceeds the cost of hardware when you factor in floor space, power demands, and staff support."

This is not the kind of talk that Cray, Silicon Graphics, Penguin Computing, and others who are peddling baby HPC clusters want to hear.

At first, the Lawrence Berkeley and Argonne National Laboratory centers are going to carve out around 100 teraflops of computing capacity and set it up as a cloud, allowing university and DOE researchers to schedule jobs on the cloud. And the servers will be backed up with lots of storage and I/O capacity so large datasets can be thrown against this relatively modest amount of computing capacity. This, says Yelick, is where in-house baby HPC clusters often fall short, frustrating researchers who bought gear believing they would get better performance than they see when they actually run their workloads.

The exact software technologies that the Magellan project will deploy have not been finalized yet, and it is not clear whether and how the underlying servers and storage will be virtualized. Ironically, most of the $32m will be spent building new clusters based on Intel's Xeon 5500s for the cloud testbeds being installed at Lawrence Berkeley and Argonne. The latter lab will be monkeying around with the open source Eucalyptus framework for creating a clone of Amazon's EC2 compute cloud, and apparently the project also has some money left over to do comparisons of the Magellan internal cloud with various cloud services from Amazon, Microsoft, and Google. Some 3,000 researchers at Lawrence Berkeley are going to be given access to the Magellan cloud to kick the tires over the next several years, to see which of their codes work well on the cloud and which do not.
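Eucalyptus's selling point is that it speaks Amazon's EC2 query API, so client code written against EC2 can target a private cloud simply by pointing at a different endpoint. As a rough illustration only (this is not DOE's actual setup; the host name and keys below are made up), here is a minimal sketch of the signature-version-2 request signing that the EC2-style query API used at the time:

```python
import base64
import hashlib
import hmac
import urllib.parse


def sign_ec2_query(host: str, params: dict, secret_key: str) -> str:
    """Build a signed EC2-style query string (signature version 2).

    An EC2-compatible private cloud such as a Eucalyptus install accepts
    the same request format; only the endpoint host changes.
    """
    # Canonical query string: parameters sorted by name, percent-encoded
    query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='-_.~')}"
        for k, v in sorted(params.items())
    )
    # String to sign covers the HTTP verb, host, path, and canonical query
    string_to_sign = f"GET\n{host}\n/\n{query}"
    digest = hmac.new(
        secret_key.encode(), string_to_sign.encode(), hashlib.sha256
    ).digest()
    signature = base64.b64encode(digest).decode()
    return query + "&Signature=" + urllib.parse.quote(signature, safe="")


# Same call shape whether the target is Amazon or a private cloud --
# only the host differs. Endpoint and credentials here are hypothetical.
request = sign_ec2_query(
    "cloud.example.gov",
    {"Action": "RunInstances", "ImageId": "emi-0abc1234",
     "SignatureMethod": "HmacSHA256", "SignatureVersion": "2"},
    "not-a-real-secret",
)
print(request)
```

The point of the sketch: because the signing scheme and parameter names are fixed by the EC2 API, researchers' tooling need not care whether it is talking to Amazon's public cloud or a Magellan-style internal one.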

Incidentally, the two cloud setups at Lawrence Berkeley and Argonne will be linked using that 100 GE network also being funded by ARRA. ®

