Top 500 supers - rise of the Linux quad-cores

Jaguar munches Roadrunner

I see your petaflops - and I raise you 10

Petaflops have become boring on the June 2009 list, and all eyes in the HPC community are on how to push up to 10 petaflops and beyond - and how to secure the funding to build such monstrous machines. While only two machines on the list have broken through the petaflops barrier, everybody knows it can be done. It is just a matter of doing what others have done, or mixing it up a little.

Getting to 10 petaflops is no more trivial now than breaking 1 teraflops was in 1996 or 1 petaflops was in 2008. It takes a lot of changes in technology to make such big leaps. The teraflops barrier was broken with massive parallelism and fast interconnects, and the petaflops barrier was initially broken by a hybrid architecture pairing x64 processors and co-processors to boost their math performance.

The fact that the current top-end Jaguar machine does not use GPU or FPGA co-processors to get to over 2.3 petaflops of peak performance does not mean 10 petaflops will be attained with CPUs alone. Some HPC codes work well on CPU-only setups, while others will do better on hybrid CPU-GPU architectures. What HPC vendors need to do is get GPUs into the server nodes and more tightly coupled to the CPUs they serve.

If you draw the projections (as the techies behind the Top 500 list have done), then sometime in late 2011 or early 2012, the fastest machine in the Top 500 list should be able to hit 10 petaflops and the aggregate performance on the list will be well above 100 petaflops. By sometime in 2015, a supercomputer will have to be rated at 1 petaflops or so just to make it on the list, if projections stay linear as they have since 1993, when the Top 500 list started.
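The back-of-the-envelope arithmetic behind those projections can be sketched as a simple exponential extrapolation. The figures below are illustrative assumptions, not the Top 500 project's official data: Jaguar's roughly 2,300 teraflops peak in mid-2009, and a doubling period of about 1.1 years, in line with the list's long-run trend.

```python
def project_tflops(start_tflops, start_year, target_year, doubling_years):
    """Project peak performance forward, assuming a fixed doubling period
    (i.e. growth that is linear on a log scale, as the Top 500 trend
    has roughly been since 1993)."""
    doublings = (target_year - start_year) / doubling_years
    return start_tflops * 2 ** doublings

# Illustrative inputs: ~2,300 TFLOPS peak in mid-2009, doubling every
# ~1.1 years. These are assumptions for the sketch, not official figures.
projected = project_tflops(2300, 2009.5, 2012.0, 1.1)
print(f"Projected top system in early 2012: {projected / 1000:.1f} PFLOPS")
```

Under those assumptions the projection lands a little above 10 petaflops by early 2012, which is consistent with the timeline the Top 500 techies draw from their own curves.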

On the current list, it takes 20 teraflops just to rank at all, which shows how quickly Moore's Law and a lot of clever networking push HPC technology forward. Provided supercomputing centers can shift their codes to hybrid architectures, the price/performance of multicore x64 processors and their related GPUs is probably the horse to bet on. Exotic machines may have seen their heyday already. ®
