Bull, SGI tag team on Japanese nuke petaflopper

A whole lot of Sandy Bridge bullx

French supercomputer maker Bull, working in conjunction with sometime partner and sometime rival Silicon Graphics, has landed a contract to supply Japanese fusion researchers with a supercomputer rated at nearly 1.3 petaflops of number-crunching oomph and based on a future "Sandy Bridge" Xeon processor from Intel.

The supercomputer will be installed at the International Fusion Energy Research Center (IFERC) in Rokkasho, Japan, and will be used by the Japan Atomic Energy Agency to conduct research related to fusion reactions. Specifically, the super – which does not yet have a nickname – will be based on Bull's homegrown bullx server nodes. (Yes, that is their proper name, so don't say the French don't have a sense of humor.) The cluster of bullx boxes will be used to simulate plasmas and controlled-fusion equipment.

The super will not just be used by Japanese researchers; it will also be shared with European researchers under a partnership called the Broader Approach agreement, inked between JAEA and its counterpart in the European Union, Fusion for Energy (F4E).

That partnership has seen the EU kick in €340m of investment into IFERC, which is a pittance compared to the €16bn, 30-year investment that Europe and its partners – China, India, Japan, Korea, Russia, and the United States – expect to pour into ITER (formerly the International Thermonuclear Experimental Reactor).

The French Commissariat à l'Energie Atomique (CEA) has created and installed a bullx cluster for the ITER project, and IFERC wants a similar machine, presumably to ease the porting of code and sharing of computing resources between the two fusion research facilities.

IFERC's fusion research facility under construction in Rokkasho, Japan

Bull is building the supercomputer for the Japanese nuke agency, and it is being assisted in the installation of the machine at the Rokkasho facility by its services partner, SGI Japan, which was reabsorbed into parent Silicon Graphics just last month.

The exact feeds and speeds of the Japanese bullx super have not been revealed, but Bull said that the main compute portion of the system would comprise 4,410 two-socket Series B blade servers packing a total of 70,560 cores using a future Sandy Bridge Xeon processor. The way the math works out, that's an eight-core processor, which means it could be either a Sandy Bridge-EN for Socket-B2 server nodes or a Sandy Bridge-EP that plugs into Intel's Socket-R and offers more QuickPath Interconnect links and more memory scalability, if the rumors are right. With only 64GB of memory per server node – the announcement said more than 280TB of aggregate memory across the nodes – I would guess JAEA is asking Bull to use the cheaper nodes.
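That back-of-the-envelope math can be checked in a few lines. Note that the node count and core count come from Bull's announcement, but the 64GB-per-node memory figure is this article's inference, not a published spec:

```python
# Sanity check of the bullx system figures quoted by Bull.
blades = 4410              # two-socket bullx Series B blade servers
sockets_per_blade = 2
total_cores = 70_560       # aggregate core count quoted by Bull

cores_per_socket = total_cores // (blades * sockets_per_blade)
print(cores_per_socket)    # -> 8, i.e. an eight-core Sandy Bridge Xeon

mem_per_node_gb = 64       # assumed per-node memory, not an announced spec
total_mem_tb = blades * mem_per_node_gb / 1000
print(total_mem_tb)        # -> 282.24, consistent with "more than 280TB"
```

Since 70,560 cores divide evenly across 8,820 sockets, an eight-core part is the only configuration that fits Bull's numbers.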

Bull is also putting 36 of its Series S fat Xeon SMP server nodes (the current ones are based on the Xeon 7500 and Xeon E7 chips) and 38 of its Series R rack-mounted servers together to administer the cluster nodes and the Lustre file system that stores the data to be chewed on by the bullx super, and to control user access. The cluster will have 5.7PB of its own storage, as well as an external storage array weighing in at 50PB. The whole shebang will be connected using an InfiniBand network, and it's a reasonable guess that it will be based on the current QDR (40Gb/sec) or future FDR (56Gb/sec) technology. The system will run Linux, of course.

The JAEA has also asked Bull to install 32 of its bullx Series R rack servers with GPU coprocessors for preprocessing and post-processing of data, and to graphically render the results of the simulations run on the machine.

Bull says that it expects installation to start in June of this year, and that it will design the electrical and cooling systems in the Rokkasho facility, and be responsible for the installation of the petaflopper as well as its maintenance and operation for a five-year period. Exactly how SGI Japan is assisting is unclear, but in its own statement SGI said it was doing the installation and maintenance.

The Japanese machine is expected to be operational in January 2012. Bull built the Tera-100 super-node cluster for the CEA using its Series S nodes; that machine has a peak performance of 1.25 petaflops across 138,368 cores based on Intel's Xeon 7500 processors. The Japanese machine will be the third petaflops-class machine that Bull has built. ®
