US boffins use Obama dough to study clouds

The computing kind, not the fluffy kind

The US Department of Energy, which runs the largest supercomputing centers in the country, is using some of the Obama stimulus money to take a gander at how parallel HPC applications might be deployed on cloud-style, virtualized infrastructure instead of on the less malleable parallel supers that the DOE's labs spend big bucks building, housing, and operating.

The project, which is known by the code-name Magellan, is being funded with a $32m splash that comes from the $787bn trough of cash that is the American Recovery and Reinvestment Act of 2009, signed into law by President Obama back in February.

ARRA is being used to prop up lots of IT-related scientific research around the country, notably a $62m project funded by the DOE to create 100 Gigabit Ethernet switching gear, announced in August; that project is being headed up by Lawrence Berkeley National Laboratory, which runs the network linking the big DOE labs. (Sandia, Lawrence Livermore, Lawrence Berkeley, Oak Ridge, Los Alamos, Brookhaven, Argonne, Pacific Northwest, and Ames are the key DOE labs.)

The Magellan project is exploring how cloud tools for virtualizing and provisioning servers might be used by researchers wanting to run their applications on a cloud that is actually a subset of the capacity available through the DOE labs. The idea is that it may be more cost-effective to give people pieces of the DOE supercomputer centers that look and feel like a local HPC cluster than to have them plunk down $50,000 to buy a baby cluster.

The problem is not the initial hardware and software support, according to Katherine Yelick, director of the National Energy Research Scientific Computing (NERSC) division at Lawrence Berkeley National Laboratory, which is one of the leaders on the Magellan project. "You can buy a small computer cluster for $50,000, but the cost of ownership often exceeds the cost of hardware when you factor in floor space, power demands, and staff support."

This is not the kind of talk that Cray, Silicon Graphics, Penguin Computing, and others who are peddling baby HPC clusters want to hear.

At first, the Lawrence Berkeley and Argonne National Laboratory centers are going to carve out around 100 teraflops of computing capacity and set it up as a cloud, allowing university and DOE researchers to schedule jobs on the cloud. And the servers will be backed up with lots of storage and I/O capacity so large datasets can be thrown against this relatively modest amount of computing capacity. This, says Yelick, is where in-house baby HPC clusters often fall short, frustrating researchers who bought gear believing they would get better performance than they see when they actually run their workloads.

The exact software technologies that the Magellan project will deploy have not been finalized, and it is not clear if and how the underlying servers and storage will be virtualized. Ironically, most of the $32m will be spent on building new clusters based on Intel's Xeon 5500s for the cloud testbeds being installed at Lawrence Berkeley and Argonne. The latter lab will be monkeying around with the open source Eucalyptus framework for creating a clone of Amazon's EC2 compute cloud, and apparently the project also has some money left over to compare the Magellan internal cloud with various cloud services from Amazon, Microsoft, and Google. Some 3,000 researchers at Lawrence Berkeley are going to be given access to the Magellan cloud to kick the tires over the next several years, to see which of their codes work well on the cloud and which do not.

Incidentally, the two cloud setups at Lawrence Berkeley and Argonne will be linked using that 100 GE network that is also being funded by ARRA. ®

