Top 500 supers - rise of the Linux quad-cores

Jaguar munches Roadrunner

I see your petaflops - and I raise you 10

Petaflops had become boring by the June 2009 list, and all eyes in the HPC community are on how to push up to 10 petaflops and beyond - and how to get the funding to build such monstrous machines. While only two machines on the list have broken through the petaflops barrier, everybody knows it can be done. It is just a matter of doing what others have done, or mixing it up a little.

Getting to 10 petaflops is no more trivial now than breaking 1 teraflops was in 1996 or 1 petaflops was in 2008. It takes a lot of changes in technology to make such big leaps. The teraflops barrier was broken with massive parallelism and fast interconnects, and the petaflops barrier was initially broken by a hybrid architecture pairing x64 processors and co-processors to boost their math performance.

The fact that the current top-end Jaguar machine does not use GPU or FPGA co-processors to get to over 2.3 petaflops of peak performance does not mean 10 petaflops will be attained with CPUs alone. Some HPC codes work well on CPU-only setups, and others will do better on hybrid CPU-GPU architectures. What HPC vendors need to do is get GPUs into the server nodes and more tightly coupled to the CPUs they serve.

If you draw the projections (as the techies behind the Top 500 list have done), then sometime in late 2011 or early 2012, the fastest machine in the Top 500 list should be able to hit 10 petaflops and the aggregate performance on the list will be well above 100 petaflops. By sometime in 2015, a supercomputer will have to be rated at 1 petaflops or so just to make it on the list, if projections stay linear as they have since 1993, when the Top 500 list started.
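Those projections amount to simple log-linear extrapolation. As a rough sketch - assuming, as the article's own milestones suggest, a 1,000x gain between the teraflops barrier in 1996 and the petaflops barrier in 2008 - the arithmetic looks like this (the growth rate and mid-2009 starting points are illustrative, not official Top 500 figures):

```python
import math

# Illustrative assumption: ~1,000x performance growth over the roughly
# 12 years between the 1 teraflops (1996) and 1 petaflops (2008) barriers.
doubling_years = 12 / math.log2(1000)   # about 1.2 years per doubling

def years_to_reach(target_flops, current_flops):
    """Years until a machine growing at this historical rate climbs
    from current_flops to target_flops (any consistent units)."""
    return math.log2(target_flops / current_flops) * doubling_years

# From Jaguar's 2.3 petaflops peak in mid-2009 to 10 petaflops:
print(2009.5 + years_to_reach(10, 2.3))    # lands around 2012

# From the 20-teraflops list entry point in 2009 to a 1-petaflops floor:
print(2009.5 + years_to_reach(1000, 20))   # lands mid-decade
```

The same doubling-time trick reproduces the article's "late 2011 or early 2012" estimate for the first 10-petaflops machine, and puts a petaflops-class entry bar in the middle of the decade.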

On the current list, it takes 20 teraflops just to rank at all, which shows how quickly Moore's Law and a lot of clever networking push HPC technology forward. Provided supercomputing centers can shift their codes to hybrid architectures, the price/performance of multicore x64 processors and their related GPUs is probably the horse to bet on. Exotic machines may have seen their heyday already. ®
