Powerful supercomp grid for US boffins

13.6 Teraflop system will feature McKinley processors

Four US research centres are to be linked to create an interconnected series of Linux clusters capable of processing 13.6 trillion calculations a second.

The computer 'grid' system - known as the Distributed Terascale Facility (DTF) or TeraGrid supercomputer - will enable American boffins to share computing resources over the world's fastest research network.

The plan is that researchers will be able to draw on the resources of the computing grid in much the same way that consumers draw electricity from a power grid.

Funded by the National Science Foundation to the tune of $53 million, its backers hope the system will lead to breakthroughs in life sciences, climate modelling and other critical academic disciplines. Building and deploying the DTF will take place over three years.

IBM Global Services will deploy clusters of IBM Linux systems at the four DTF sites beginning in the third quarter of 2002. The servers will contain the next generation of Intel's Itanium microprocessor, McKinley.

These will build upon two existing clusters of 1,300-plus Itanium and IA-32 processors already deployed at the National Center for Supercomputing Applications (NCSA), one of the four hubs of the network.

IBM supercomputing software will handle cluster and file management tasks, but there is a commitment to the use of open protocols within the project.

The system will have a storage capacity of more than 600 terabytes of data, or the equivalent of 146 million full-length novels.

The Linux clusters will be connected to each other via a dedicated 40Gbps network, supplied by Qwest. This will link to Abilene, the high-performance network that connects more than 180 research institutions across the States.

The four hubs of the supercomputer will be the National Center for Supercomputing Applications, the San Diego Supercomputing Center, Argonne National Laboratory and the California Institute of Technology. ®

External links

The world's most powerful computational infrastructure (press release by the National Center for Supercomputing Applications)

Related stories

Life, the universe and supercomputers
NASA's new supercomp sits on a desktop
AMD cluster sneaks in Supercomputer top 500 list
Sun's Oz super computer goes horribly pear shaped
$10m super'puter to crunch genetic code

