Fujitsu busts K super through 10 petaflops

When will this monster machine go commercial?

The massive Sparc64-based K supercomputer built by Fujitsu for the Japanese government has been fully deployed and has, as hoped, broken through 10 petaflops of sustained performance, the first such machine to do so.

Fujitsu's time at the top of the HPC charts may be short-lived, however, with IBM and Cray firing up 20 petafloppers for the US government's Department of Energy labs next year.

IBM is building the "Sequoia" BlueGene/Q massively parallel Power A2 machine for Lawrence Livermore National Lab, and Argonne National Lab has picked up a 10 petaflops version of the BlueGene/Q machine. And Cray has just inked a deal with Oak Ridge National Laboratory to upgrade its Opteron-based "Jaguar" XT5 system to the "Titan" hybrid XK6 machine, which will mate Opteron 6200 processors from Advanced Micro Devices with Tesla GPU coprocessors from Nvidia to reach between 10 and 20 petaflops of performance. (The scuttlebutt is that Oak Ridge will reach 20 petaflops, but the lab doesn't want to make any promises yet.)

The K supercomputer at Riken: We need Sparcs, lots of Sparcs

The K supercomputer was formerly known as Project Keisoku and was commissioned by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT). The original plan called for indigenous server makers NEC, Hitachi, and Fujitsu to share in the development and manufacturing of a 10 petaflops massively parallel supercomputer, which was dubbed the Next Generation Supercomputer – and which was supposed to have a mix of vector processors from NEC and Hitachi and scalar processors from Fujitsu.

When the Great Recession hit, NEC and Hitachi, which had helped create the 6D mesh/torus interconnect, called Tofu, for the K super and had done initial work on the vector machines, backed out of the deal, leaving Fujitsu to try to save the project with its eight-core, 2GHz "Venus" Sparc64-VIIIfx processors. Project Keisoku was originally projected to cost $1.2bn; it is unclear what the Japanese government actually paid.

The K super is installed at the Rikagaku Kenkyusho (Riken) research lab in Kobe, Japan. Fully loaded, K spans a stunning 864 server racks packed with 22,032 four-socket blade servers, each blade carrying water cooling blocks on its processors and main memory.

That gives the machine a whopping 705,024 cores, which are running Linux, not Solaris, and which cannot run Crysis unless you put it in a parallel version of the WINE Windows runtime for Linux. On a Linpack Fortran parallel benchmark test run done in early October, the machine delivered 10.51 petaflops of sustained number-crunching performance; that was against a peak theoretical performance of 11.28 petaflops, thus yielding a 93.2 per cent execution efficiency on the machine – at least as far as Linpack is concerned. This is very good efficiency and rivals anything any supercomputer has ever done anywhere at any time.
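
Those figures hang together on the back of an envelope. Here is a minimal sketch in Python, assuming the Sparc64-VIIIfx retires eight double-precision flops per core per clock – an assumed rate, not one given in this story:

```python
# Back-of-the-envelope check of K's core count, peak rating, and Linpack efficiency.
# Assumption: each Sparc64-VIIIfx core peaks at 8 double-precision flops per clock.
blades = 22_032            # four-socket blades in the fully loaded machine
sockets_per_blade = 4      # Sparc64-VIIIfx chips per blade
cores_per_socket = 8       # eight-core "Venus" parts
clock_hz = 2.0e9           # 2GHz clock
flops_per_core_clock = 8   # assumed double-precision peak per core per cycle

cores = blades * sockets_per_blade * cores_per_socket
peak_pflops = cores * clock_hz * flops_per_core_clock / 1e15
linpack_pflops = 10.51     # sustained Linpack result from the October run

print(f"cores: {cores:,}")                                         # 705,024
print(f"peak: {peak_pflops:.2f} petaflops")                        # ~11.28
print(f"Linpack efficiency: {linpack_pflops / peak_pflops:.1%}")   # ~93.2%
```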

Back in June, a mostly finished K machine posted 8.16 petaflops of sustained performance with only 17,136 nodes and 548,352 cores, and came out on top of the Top 500 list issued at the International Supercomputing Conference in Hamburg, Germany. The 10 petaflops rating should keep it on top of the November ranking that comes out at the SC11 conference in Seattle in two weeks.

The wonder is that Fujitsu has not started peddling baby K supers to customers other than the Japanese government. Fujitsu says that it is still working to develop and tune the Linux operating system running on the machine before K gets its final tune-up in June 2012 and goes into full production in November 2012. Perhaps then Fujitsu will start selling K machines commercially. ®
