Uncle Sam shells out $62m for 100GbE

Obama stimulates fed networks

It would be tough to call the development of 100 Gigabit Ethernet switches and adapters a shovel-ready project, but the Obama administration's stimulus package is picking up the $62m tab to help get faster networks to market.

The $787bn American Recovery and Reinvestment Act, which was signed into law in February, allocates money for traditional infrastructure - roads, bridges, and the like - as well as for its electronic infrastructure - broadband Internet, healthcare systems, and other goodies aimed at IT vendors.

The Department of Energy's vast supercomputing programs are getting a piece of the ARRA pie to build out a faster network linking the nation's behemoth massively parallel supercomputers. Specifically, Lawrence Berkeley National Laboratory, which runs the DOE's Energy Sciences Network (ESnet) linking the HPC gear at the government labs (Sandia, Lawrence Livermore, Lawrence Berkeley, Oak Ridge, Los Alamos, Brookhaven, Argonne, Pacific Northwest, and Ames are the biggies), is getting the dough to pay more engineers at Berkeley Lab as well as to pick the hardware vendors who will help boost ESnet to 100 Gigabit Ethernet speeds.

DOE has its eyes on a much more ambitious network, however - one that will take many years and much more funding to realize.

"This network will serve as a pilot for a future network-wide deployment of 100 Gbps Ethernet in research and commercial networks and represents a major step toward DOE's vision of a 1-terabit - 1,000 times faster than 1 gigabit - network interconnecting DOE Office of Science supercomputer centers," explained Michael Strayer, the head of the DOE's Office of Advanced Scientific Computing Research, in a statement announcing the $62m contract.

DOE says that its supercomputers are already running simulations with datasets on the terabyte scale and that soon they will be chewing through datasets in the petabytes range. For instance, a climate model spanning past, present, and future at Lawrence Livermore National Lab currently weighs in at 35 terabytes and is being used by over 2,500 researchers worldwide. An updated (and presumably finer-grained) climate model is expected to have a dataset in the range of 650 terabytes, and the distributed archive of datasets related to this model is expected to be somewhere between 6 and 10 petabytes. Moving such datasets around the ESnet network requires a lot more bandwidth and better protocols than Gigabit Ethernet can offer.
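To get a feel for why, some rough back-of-the-envelope arithmetic (ours, not DOE's) on that hypothetical 650-terabyte dataset helps - this sketch assumes a single sustained transfer at full line rate and ignores protocol overhead, which real long-haul transfers never achieve:

```python
# Rough transfer-time arithmetic for a 650 TB dataset at various link
# speeds. Assumes decimal terabytes, full line rate, zero overhead -
# an idealized best case, purely for scale.

def transfer_days(dataset_tb, link_gbps):
    """Days to move dataset_tb terabytes over a link_gbps link."""
    bits = dataset_tb * 1e12 * 8          # terabytes -> bits
    seconds = bits / (link_gbps * 1e9)    # divide by link speed in bits/sec
    return seconds / 86400                # seconds -> days

for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gbps: {transfer_days(650, gbps):7.2f} days")
```

Even in this idealized case, Gigabit Ethernet needs roughly two months to ship 650 TB; at 100 Gbps the same job drops to well under a day.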

To that end, about $3m is being spent on some more network and software engineers. Another $8m to $9m will be spent on a testbed for new network gear and services from telcos, and the remaining $50m or so will go to actually buying 100 Gigabit Ethernet switches and services for ESnet to link the more than 40 computational centers in the United States that do supercomputing in conjunction with the DOE.

Let the cat fighting among the Ethernet switch makers begin....

By the way, ARRA is a big deal for these labs. As you can see from this tally of 20 projects that have been partially funded by ARRA at the Berkeley Lab, ARRA is covering $173.7m of the total $241.5m being shelled out. Without ARRA, many of these projects would not have happened at all. ®
