King K super: does it refute hybrid HPC model?

GPUs are still GPU-riffic

ISC'11 It's been an eventful International Supercomputing Conference (ISC'11) in Hamburg. The Japanese sprang their K Computer on an unsuspecting HPC world, throwing down 8.162 Pflops on the table and raising the high-water performance mark by a factor of three.

Just as surprising was the fact that they did it the old-fashioned way: with semi-proprietary processors, a custom interconnect, and no fancy accelerators.

Was it really only six months ago that the Chinese, with heavy use of GPU accelerators in their 2.56 Pflops Tianhe-1A system, appeared to have locked down the top spot for a year or more?

This led many pundits (myself included) to say that the age of hybrid HPC was upon us, and that we probably wouldn’t see another non-hybrid system topping the chart anytime soon.

So is the K computer a signpost pointing to the resurgence of traditional CPU plus custom interconnect HPC? Or is it an aberration on the road to our hybrid future?

King K is hard to argue against. It turns in 93 per cent computational efficiency (Rmax/Rpeak), far better than the rest of the top 10 systems, whose numbers range from the low 40s to the low 80s.

Although it takes a staggering amount of power to run, almost 10 megawatts, it delivers 824 Mflops per watt, making it a close second in the top ten for power efficiency; it's barely edged out by the fifth-ranked system, the NEC/HP machine at the Tokyo Institute of Technology.
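For the number-curious, here's a quick back-of-envelope sketch that works backwards from the figures quoted above; the implied peak and power numbers are rough, and the official TOP500 entries may differ slightly in the decimals.

```python
# Back-of-envelope numbers derived only from the figures quoted in the article.
rmax_pflops = 8.162        # measured Linpack performance (Rmax)
efficiency = 0.93          # quoted computational efficiency (Rmax/Rpeak)
mflops_per_watt = 824      # quoted power efficiency

# Implied theoretical peak: Rpeak = Rmax / efficiency
rpeak_pflops = rmax_pflops / efficiency

# Implied power draw: 1 Pflops = 1e9 Mflops, so watts = Mflops / (Mflops per watt)
power_megawatts = rmax_pflops * 1e9 / mflops_per_watt / 1e6

print(f"implied Rpeak: {rpeak_pflops:.2f} Pflops")   # ~8.78 Pflops
print(f"implied power: {power_megawatts:.1f} MW")    # ~9.9 MW - 'almost 10 megawatts'
```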

And, as others have pointed out, this system isn't done growing: there's room for roughly another 200 racks. But I tend to think that the K computer is a superbly executed aberration.

I think the biggest of the big supercomputers will ultimately be hybrid CPU + accelerator systems. In the final analysis, performance comes down to parallelism, cores, and core density.

Specialized cores, like those in GPUs, don't need all the trappings of general-purpose CPUs and can thus be crammed closer together. The NVIDIA Fermi GPU sports 512 cores running at 1.3 GHz, while Intel's Westmere has 6 cores running at 3.4 GHz.

Factor in the clock speeds and that works out to roughly a 32x advantage in favor of the Fermi.
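To make the arithmetic explicit, here's the same rough cores-times-clock comparison as a sketch; it deliberately ignores per-core width, SIMD, memory bandwidth, and everything else that matters in real code, so treat the ratio as illustrative rather than a performance claim.

```python
# Crude throughput ratio: cores x clock, using the figures quoted above.
fermi_cores, fermi_ghz = 512, 1.3          # NVIDIA Fermi GPU
westmere_cores, westmere_ghz = 6, 3.4      # Intel Westmere CPU

ratio = (fermi_cores * fermi_ghz) / (westmere_cores * westmere_ghz)
print(f"cores x clock advantage: {ratio:.1f}x")   # ~32.6x - the 'roughly 32x' above
```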

One of the better CPU vs GPU discussions is Top 10 Objections to GPU Computing Reconsidered, authored by Dr Vincent Natoli, a computational physicist who has spent 20 years working in HPC. In the article, he lays out the major arguments against GPU computing and responds to them with clear explanations and convincing logic.

It also serves as a primer on the value proposition behind the move toward hybrid HPC and is well worth a few minutes' reading time. ®
