US boffins use Obama dough to study clouds

The computing kind, not the fluffy kind

The US Department of Energy, which runs the largest supercomputing centers in the country, is using some of the Obama stimulus money to take a gander at how parallel HPC applications might be deployed on cloud-style, virtualized infrastructure instead of on the less malleable parallel supers that the DOE's labs spend big bucks building, housing, and operating.

The project, which is known by the code-name Magellan, is being funded with a $32m splash that comes from the $787bn trough of cash that is the American Recovery and Reinvestment Act of 2009, signed into law by President Obama back in February.

ARRA is being used to prop up lots of IT-related scientific research around the country, notably a $62m project funded by the DOE to create 100 Gigabit Ethernet switching gear, announced in August; this project is being headed up by Lawrence Berkeley National Laboratory, which runs the network linking the big DOE labs. (Sandia, Lawrence Livermore, Lawrence Berkeley, Oak Ridge, Los Alamos, Brookhaven, Argonne, Pacific Northwest, and Ames are the key DOE labs.)

The Magellan project is exploring how cloud tools for virtualizing and provisioning servers might be used by researchers wanting to run their applications on a cloud that is actually a subset of the capacity available through the DOE labs. The idea is that it may be more cost-effective to give people pieces of the DOE supercomputer centers that look and feel like a local HPC cluster than to have them plunk down $50,000 to buy a baby cluster of their own.

The problem is not the initial hardware and software support, according to Katherine Yelick, director of the National Energy Research Scientific Computing (NERSC) division at Lawrence Berkeley National Laboratory, which is one of the leaders on the Magellan project. "You can buy a small computer cluster for $50,000, but the cost of ownership often exceeds the cost of hardware when you factor in floor space, power demands, and staff support."

This is not the kind of talk that Cray, Silicon Graphics, Penguin Computing, and others who are peddling baby HPC clusters want to hear.

At first, the Lawrence Berkeley and Argonne National Laboratory centers are going to carve out around 100 teraflops of computing capacity and set it up as a cloud, allowing university and DOE researchers to schedule jobs on the cloud. And the servers will be backed up with lots of storage and I/O capacity so large datasets can be thrown against this relatively modest amount of computing capacity. This, says Yelick, is where in-house baby HPC clusters often fall short, frustrating researchers who bought gear believing they would get better performance than they see when they actually run their workloads.

The exact software technologies that the Magellan project will deploy have not been finalized yet, and it is not clear if and how the underlying servers and storage will be virtualized. Ironically, most of the $32m will be spent on building new clusters based on Intel's Xeon 5500 processors for the cloud testbeds being installed at Lawrence Berkeley and Argonne. The latter lab will be monkeying around with the open source Eucalyptus framework for creating a clone of Amazon's EC2 compute cloud, and apparently the project also has some money left over to do comparisons of the Magellan internal cloud with various cloud services from Amazon, Microsoft, and Google. Some 3,000 researchers at Lawrence Berkeley are going to be given access to the Magellan cloud to kick the tires over the next several years, to see which of their codes work well on the cloud and which do not.
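The appeal of Eucalyptus is that it reimplements Amazon's EC2 query API, so tooling written against EC2 can, in principle, be pointed at an internal cloud unchanged. As a rough illustration only (the endpoint, port, image id, and credentials below are invented, and what is shown is EC2's SignatureVersion 2 query signing, which Eucalyptus mimicked), here is a minimal sketch of how a client signs a request before sending it to either cloud:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_ec2_request(host, path, params, secret_key):
    """Compute an EC2 SignatureVersion 2 query signature.

    Because Eucalyptus speaks the same query protocol, the identical
    signing code works whether the request goes to Amazon or to a
    private Eucalyptus front end.
    """
    # Canonical query string: parameters sorted by name, percent-encoded.
    query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    string_to_sign = "\n".join(["GET", host, path, query])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Hypothetical request against a private cloud's EC2-compatible endpoint.
params = {
    "Action": "RunInstances",
    "ImageId": "emi-12345678",        # Eucalyptus image ids; EC2 uses ami-...
    "AWSAccessKeyId": "DEMO-KEY",
    "SignatureMethod": "HmacSHA256",
    "SignatureVersion": "2",
}
sig = sign_ec2_request("cloud.example.gov:8773", "/services/Eucalyptus",
                       params, "DEMO-SECRET")
```

The point of the sketch: nothing in the client cares which cloud sits behind the hostname, which is exactly what makes like-for-like comparisons between Magellan and the commercial clouds feasible.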

Incidentally, the two cloud setups at Lawrence Berkeley and Argonne will be linked using that 100 GE network also being funded by ARRA. ®
