DARPA asks you to cram petaflops super into single rack

To dream the ExtremeScale dream

Looking for a software miracle

As if designing such a piece of hardware did not push the limits of materials science and physics, DARPA wants miracles out of the software stack on the ExtremeScale machine too. The machine has to have a "self-aware" operating system that can learn from how it is used and adapt to the "changing goals, resources, models, operating conditions, attacks, and failures" that might happen in the field and "mitigate the effects of attacks and failures, closing exploited vulnerabilities."

The system has to make parallel programming easier, so your typical application domain expert can use it without mucking about with parallelism, and the system has to be able to change its parallelism - the number of nodes, cores, and threads - on the fly while an application is running, coping with changing conditions. DARPA understands that to make this all work, propeller heads will have to come up with an entirely new execution model for applications. So do that too.
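The "change its parallelism on the fly" requirement can be illustrated in miniature. The sketch below is purely hypothetical - `AdaptivePool`, its thresholds, and its API are invented for illustration, and it resizes a pool of threads in one process rather than nodes across a machine - but it shows the basic idea of a runtime growing and shrinking its own parallelism as conditions change, without the application asking:

```python
import queue
import threading

class AdaptivePool:
    """Toy worker pool that grows when work backs up and shrinks when
    idle - a loose, single-process illustration of runtime-managed
    parallelism. All names and thresholds here are invented."""

    def __init__(self, min_workers=1, max_workers=8):
        self.tasks = queue.Queue()
        self.min_workers = min_workers
        self.max_workers = max_workers
        self._lock = threading.Lock()
        self.n_workers = 0
        self._pending_pills = 0  # retirement requests not yet consumed
        for _ in range(min_workers):
            self._spawn()

    def _spawn(self):
        self.n_workers += 1
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            fn = self.tasks.get()
            if fn is None:  # poison pill: this worker retires
                with self._lock:
                    self.n_workers -= 1
                    self._pending_pills -= 1
                self.tasks.task_done()
                return
            fn()
            self.tasks.task_done()

    def submit(self, fn):
        self.tasks.put(fn)
        self._adapt()

    def _adapt(self):
        # Grow when the backlog exceeds the worker count; shrink when
        # the queue is idle and we are above the floor. Counting
        # pending pills keeps the effective pool from undershooting
        # min_workers when several shrink decisions race.
        with self._lock:
            backlog = self.tasks.qsize()
            if backlog > self.n_workers and self.n_workers < self.max_workers:
                self._spawn()
            elif (backlog == 0 and
                  self.n_workers - self._pending_pills > self.min_workers):
                self._pending_pills += 1
                self.tasks.put(None)  # ask one worker to exit
```

The application just calls `submit()`; the pool decides its own width. The real DARPA ask is this kind of decision-making pushed down into the OS and execution model, across nodes and cores rather than threads.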

Before you finish up the design on your black box, DARPA wants you to consider how multiple UHPC machines might be linked together for scalability and resiliency.

DARPA has five applications that it wants to run on the prototype UHPC ExtremeScale boxes: a massive streaming sensor data problem "resulting in actionable knowledge;" a large dynamic graph-based informatics problem; a decision problem that includes search, hypothesis testing, and planning; and two applications drawn from the Department of Defense stack, which will be selected after the UHPC program starts.

Like past DARPA HPC awards - which resulted in current systems being brought to market by IBM and Cray this year - the UHPC program has multiple phases, in this case four. Phase one lasts 24 months and will show the concepts behind the UHPC systems and their execution models, with phase two (also 24 months long) delivering a preliminary prototype. Phase three completes the system design and benchmarks, and phase four delivers a prototype system, compiler, and OS in a lab environment.

Interestingly, DARPA is not ponying up the hundreds of millions of dollars you might expect with the UHPC effort. In phase one and two, there are teams that will design UHPC systems and another set of teams that will design the benchmarks and data sets to test the machines. DARPA is allocating $3.25m for the first year of phase one and $5.25m for the second year for the developers; the UHPC testers get $1.75m per year. (Clearly, it is easier to come up with a test than come up with a system design.) In phase two, UHPC developers get $8.65m per year and testers get $2m per year.

DARPA has not divulged the budget for phases three and four of the UHPC ExtremeScale computing challenge. It expects to have five teams in phase one (composed of industry and university experts) and three teams in phase two. Three teams are expected to make the cut to phases three and four, with IT vendors taking the lead.

One last thing: DARPA would also like a little red wagon, a sailboat, and a pony. ®
