DARPA awards $76.6m supercomputer challenge

Small scale ExtremeScale

If you were thinking about entering the US Defense Advanced Research Projects Agency's ExtremeScale supercomputing challenge issued in March, you missed your chance. DARPA's awarded grants to four design teams, plus another that'll run benchmarks on the HPC prototypes.

Heavy hitters from the US HPC community and academia were awarded grants to design prototype machines for the Ubiquitous High Performance Computing (UHPC) ExtremeScale challenge.

There are a lot of different goals, as we detailed in March, but the upshot is that DARPA wants a petaflops supercomputer, including networking, storage, and compute elements as well as cooling, crammed into a space a little larger than a standard server rack - 24 inches wide by 78 inches high and 40 inches deep - while consuming no more than 57 kilowatts to power and cool the whole device.
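
As a back-of-the-envelope check on how tight that envelope is, here's a short Python sketch using the figures above; the comparison with an ordinary loaded rack is our own ballpark assumption, not a DARPA number.

    # DARPA's ExtremeScale cabinet envelope, per the spec above
    width_in, height_in, depth_in = 24.0, 78.0, 40.0
    power_kw = 57.0  # budget to power AND cool the whole machine

    CUBIC_INCHES_PER_M3 = 61023.7
    volume_m3 = (width_in * height_in * depth_in) / CUBIC_INCHES_PER_M3

    print(f"Cabinet volume: {volume_m3:.2f} cubic metres")          # ~1.23
    print(f"Power density: {power_kw / volume_m3:.0f} kW per m^3")  # ~46

    # For scale: a typical fully loaded server rack today draws on the
    # order of 5 to 15 kW in roughly the same footprint (our ballpark,
    # not DARPA's), so ExtremeScale wants several times that heat load,
    # plus a petaflops, in a single box.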

The machine has to deliver a peak petaflops of performance and 50 gigaflops per watt of sustained power efficiency while running the Linpack Fortran number-crunching test. The system has to be able to do single-precision and double-precision floating point math; 16-bit, 32-bit, and 64-bit integer math; and chew on a streaming sensor array like you might have scanning the skies for incoming missiles.
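
Those two numbers don't divide out neatly, which is worth spelling out. The arithmetic below is our own reading, not DARPA's published breakdown: a petaflops delivered at 50 gigaflops per watt accounts for only about 20 of the 57 kilowatts, which suggests the efficiency target applies to the compute elements, with the rest of the budget going to cooling, storage, and networking.

    # Performance and efficiency targets from the spec above
    peak_flops = 1.0e15     # one petaflops
    gflops_per_watt = 50.0  # sustained on Linpack
    budget_kw = 57.0        # power AND cooling for the whole cabinet

    # Power the compute would draw if it actually hits 50 GF/W
    compute_kw = peak_flops / (gflops_per_watt * 1e9) / 1e3
    print(f"Compute at 50 GF/W: {compute_kw:.0f} kW")                        # 20 kW
    print(f"Headroom in the 57 kW budget: {budget_kw - compute_kw:.0f} kW")  # 37 kW

    # Spreading 1 PF across the full 57 kW would give only ~17.5 GF/W,
    # well short of the stated 50 GF/W - hence our assumption that the
    # efficiency figure covers the compute elements alone.
    print(f"Whole-cabinet figure: {peak_flops / (budget_kw * 1e3) / 1e9:.1f} GF/W")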

DARPA has also asked for parallel programming to be made easier on these machines than it currently is on massively parallel CPU or CPU-GPU hybrids, for multiple ExtremeScale units to be linked together, and for the machines to run five specific workloads.

These include a massive streaming sensor data problem, a large dynamic graph-based informatics problem, a decision problem that includes search with hypothesis testing and planning, and two as-yet-unnamed applications culled from the IT bowels of the Department of Defense.

Nvidia and Intel are in

The IT community players that have been awarded ExtremeScale UHPC contracts by DARPA today include one team led by Intel and another led by Nvidia. Intel has not yet detailed who is on its team or what its award is, but Nvidia said it has tapped supercomputer maker Cray, the US Department of Energy's Oak Ridge National Laboratory, and six universities to be on its team.

Nvidia said it was awarded $25m to pursue the ExtremeScale impossible dream, and the company is still working out the final details on what those six universities will be doing.

Sources at Nvidia told us that the total ExtremeScale program is budgeted at $100m, but a DARPA spokesperson was still chasing down the numbers to confirm this when we went to press. On Wednesday morning, two days after the announcement, DARPA did the math and said the total award for the UHPC ExtremeScale contracts was $76.6m.

Oak Ridge and Nvidia are already working on hybrid CPU-GPU compute clusters, and last October announced the DOE had kicked in funds to study the use of Nvidia Tesla GPUs in x64 clusters.

Oak Ridge is also, of course, where the 1.76 petaflops Jaguar Cray Opteron-Linux cluster is housed. So, in essence, DARPA is paying for work that the Nvidia team was already pursuing piecemeal, but shaping it for a particular physical environment and code set. Anything worth selling once is worth selling twice, or maybe three times.

Intel had aspirations for its ill-fated Larrabee GPU, but something went terribly wrong with that project: Larrabee was killed off as a discrete graphics processor in May, and the core guts of the project were reinvented as the Knights GPU co-processor in June.
