Top 500 supers – The Dawning of the GPUs

Coiled for the 10 petaflops spring

The politics of the Top 500

The Top 500 list of supercomputers is put together twice a year to pit the fastest 500 supercomputers in the world against each other regardless of processor, interconnect technology, operating system, or whatever. Erich Strohmaier and Horst Simon, computer scientists at Lawrence Berkeley National Laboratory, Jack Dongarra of the University of Tennessee, and Hans Meuer of the University of Mannheim make the list, which is based on the Linpack Fortran benchmark test created by Dongarra and colleagues Jim Bunch, Cleve Moler, and Pete Stewart back in the 1970s to gauge the relative number-crunching performance of computers. The official Top 500 list came out in 1993, and this June 2010 compilation is the 35th edition of the list.
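At its core, the Linpack test boils down to solving a dense system of linear equations and dividing a conventional floating-point operation count by the wall time. As a rough illustration only (the real HPL benchmark is a heavily tuned parallel code; the `toy_linpack` name and the naive single-threaded solver here are made up for this sketch), the idea looks something like this:

```python
import random
import time

def solve_dense(a, b):
    """Gaussian elimination with partial pivoting on an n x n system Ax = b."""
    n = len(b)
    a = [row[:] for row in a]  # work on copies, leave the caller's data alone
    b = b[:]
    for k in range(n):
        # Pivot: swap in the row with the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # Back-substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

def toy_linpack(n, seed=42):
    """Estimate a flops rate for one random n x n solve, using the
    conventional Linpack operation count of 2/3*n^3 + 2*n^2."""
    rng = random.Random(seed)
    a = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    start = time.perf_counter()
    solve_dense(a, b)
    elapsed = time.perf_counter() - start
    return ((2.0 / 3.0) * n ** 3 + 2.0 * n ** 2) / elapsed

print(f"~{toy_linpack(200) / 1e6:.1f} megaflops on a 200 x 200 solve")
```

A pure-Python loop like this manages megaflops where the machines on the list manage petaflops, which is the whole point: the benchmark holds the problem constant and lets the hardware (and the tuning) do the talking.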

The Top 500 is not a particularly good gauge of what is going on in the totality of the HPC market, but it most definitely is a good indicator of technology shifts among the cutting-edge buyers of supercomputers that will in many cases trickle their way into the mainstream of HPC computing.

Supercomputing is often as much about politics as it is about actually doing simulations, and it is hard to miss how fast China is becoming a force in the petascale era. That stands to reason, given the strength of the Chinese economy and its desire to excel in the sciences and bend science to industry, just as its peers in the more established economies have done for decades. There are now 24 Chinese systems on the Top 500 list, and the two mentioned above - Nebulae and Tianhe-1 - are not only Top 10 machines, they have enough Linpack performance to catapult China ahead of every country except the United States in terms of sustained performance installed. China has as many supers on the list as Germany, and it now ranks second in terms of aggregate computing power, with 9.2 per cent of the 32.4 petaflops of total power accounted for on the list.

The United States is still the biggest investor in Top 500-class machines, with 282 of the 500 machines (56.4 per cent) and just under 18 petaflops of floating point oomph in those boxes (55.4 per cent of the total math power on the list). The United Kingdom has 38 machines with a total computing power of 1.7 petaflops, giving it a little more than half the flops share that China has. France has 29 machines with a total of 1.76 petaflops (5.4 per cent of the total power), while Germany's 24 boxes have 2.25 petaflops (6.9 per cent).

Japan, once a high flyer in the HPC realm, has backed off from its formerly aggressive stance, mainly because it does not have the billions of dollars or the political will to sustain a supercomputing program that can compete with the US and now China. There are 18 Japanese supers on the June 2010 Top 500 list, which have a total of 1.25 petaflops of aggregate performance.

In terms of architecture, 74 of the machines on the June 2010 list are massively parallel boxes with some kind of sophisticated interconnect, while two are constellation configurations and 424 are more generic clusters using InfiniBand or Ethernet. There are 242 machines that use plain old Gigabit Ethernet, and only two using 10 Gigabit Ethernet. There are 205 machines that use one speed or another of InfiniBand, with the rest being a mix of custom and proprietary interconnects such as IBM's Federation, Cray's SeaStar, and SGI's NUMAlink. There's only one vector machine still on the list: the 122.4 teraflops parallel NEC SX-9 super known as the Earth Simulator, which topped the list earlier in the decade and is ranked 37th today. All of the remaining machines use scalar processors, although more and more of them are being augmented with co-processors.

As is typical on the Top 500 list, old gear doesn't stay afloat for long. Of the 500 boxes on the list, 62 were installed in 2008, 229 in 2009, and 183 in 2010. There are only 26 machines older than that on the list. But the turnover on the list has slowed, thanks to the economic slowdown and despite plenty of stimulus money being shelled out by governments in the United States, Europe, and China.

To get on the list this time around, your machine had to demonstrate at least 52.8 teraflops of punch on the Linpack test, up from 47.7 teraflops only six months ago. The aggregate computing power on the list continues to swell, too, to 32.4 petaflops, up from 27.6 petaflops six months ago and 22.6 petaflops in the June 2009 list.

Intel's processors continue to dominate the list, with 408 machines using either Xeon (403 systems) or Itanium (five systems) processors. AMD's Opterons are used in 47 machines, IBM's various Power chips are used in 42 machines, with the remainder comprising two Sparc boxes and that one NEC Earth Simulator behemoth. For Intel-based supers, there are still 182 machines using the old "Harpertown" quad-core Xeon 5400s in their L, E, and X variants, as well as a bunch of older Xeon 5100 and 5300 processors in 30 other machines. None of these machines can last for very long on the list, given the energy efficiency of new servers based on six-core or twelve-core x64 processors.

The "Nehalem-EP" quad-core Xeon 5500 processors are in 184 machines, and there are already seven boxes using the new "Westmere-EP" six-core Xeon 5600s. There are already two boxes on the list using the high-end, eight-core "Nehalem-EX" Xeon 7500 processors, but they are relatively tiny. There are 31 boxes using quad-core Opterons, five using six-core Opterons, and five with the twelve-core Opterons. There are ten boxes using IBM's PowerPC chips in BlueGene boxes and another 18 using Power6 or Power6+ chips. There's a smattering of Power5, Sparc, and Itanium in there, too.

By manufacturer, IBM is once again at the top of the list in terms of system count and aggregate flops installed. IBM has 198 machines on the list (39.6 per cent of the total) and the IBM label is associated with 10.9 petaflops of performance (33.6 per cent of the total). Hewlett-Packard, which hasn't had a Top 10 system in a long time, still sells lots of clusters of modest size, and has 185 machines on the list (37 per cent of machines) for a total of 6.62 petaflops (20.4 per cent of the flops pie).

Cray has 21 systems on the list, with a total of 4.8 petaflops (4.2 per cent of machines, but 14.8 per cent of capacity), with Silicon Graphics having 17 boxes (3.4 per cent of machines and 6.6 per cent of capacity). Sun (now Oracle) has a dozen machines on the list, but it is hard to imagine that Oracle will be interested in pursuing HPC for the sake of being on the Top 500 list. If Oracle has plans for HPC beyond data analytics, it sure hasn't communicated this to the IT community. ®
