Facing up to parallelism

Multicore means today's HPC is tomorrow's general purpose

It is, perhaps, one of those forgotten facts that computing is still a relatively young technology, made all the more poignant by the realisation that many of the people driving the High Performance Computing (HPC) business, like Burton Smith, Microsoft's technical fellow in charge of advanced strategies and policies for the company, have not only been round the track several times, but are very much still at the bleeding edge of the technology.

Smith's track record includes many years as chief scientist at Cray, but his views now are hardly stuck in the past, for he believes that the parallel processing technologies that have been developed round HPC are where the mainstream of computing technology now has to head.

"We are now at the point where we are breaking the Von Neumann Assumption that there is only one program counter that allows the proper ordering and scheduling of variables," he said. "Parallel programming makes this hazardous, but we are also now at the point where serial programs are becoming slow programs."

Driving this is the arrival of multicore processor chips into the mainstream of computing. As the only way to get more performance out of a single-threaded processor is to increase its clock speed, and the only way to do that is via increased power consumption and all the costs associated with it, multicore chips offer a different, inherently parallel, alternative for boosting performance, and performance has always been the chief characteristic of HPC systems. So the lessons learned there can now start to be applied in general purpose computing.

"Computing must be reinvented, but many of those who invented computing are still alive. We did it once and we can do it again," he said.

"Reinvention" is, however, a potentially scary word, and Smith is aware of the dangers. This is particularly the case where there is such a long-standing installed base of applications, process, and operating methods as found in the mainstream business computing arena.

"Reinvention" could make all of that obsolete almost over night. It is not a route that he favours, however. "One option with the move to parallelisation is to simply wipe the slate clean and start again with something new," he said. "This is what I call the Apple approach, where a great new technology is introduced with not much thought given to the pain it might cause users of an earlier technology. But we have to take existing users with us."

Smith used his keynote presentation at the recent International Supercomputing Conference in Dresden to take a look at what is happening to computing as a whole. The fundamental issue, he suggested, is that uniprocessor performance is levelling off: instruction-level parallelism, power consumption and cache limitations are all "walls" that are now being hit. The fact that we now have multicore processors doesn't change this if the architecture hasn't changed; it simply means they become difficult to program.

The Instruction Level Wall is constructed from the limits of the uniprocessor instruction architecture, which are now being reached. Issues such as control-dependent computation and data-dependent memory addressing restrict the level of concurrency possible in a system, and they collectively limit such architectures to a few instructions per clock cycle.
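
Neither hazard is exotic. A loop as plain as the C fragment below (a sketch of our own, not anything taken from Smith's talk) exhibits both at once, which is why a superscalar core struggles to overlap many of its iterations:

    /* Hypothetical illustration: the branch makes each iteration
       control-dependent, and the indirect index makes each load
       data-dependent, so the hardware cannot safely issue many
       iterations at the same time. */
    double sum_selected(const double *values, const long *index,
                        const int *keep, long n)
    {
        double total = 0.0;
        for (long i = 0; i < n; i++) {
            if (keep[i])                    /* control-dependent computation */
                total += values[index[i]];  /* data-dependent memory addressing */
        }
        return total;
    }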

The Power Wall is now coming into play more significantly. As an example, he noted that scaling the amount of hardware by a factor of Sigma scales the power by Sigma as well. Scaling the clock frequency by Sigma is worse, for it scales the dynamic power by Sigma cubed.
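
The arithmetic behind those figures follows from the usual CMOS dynamic-power relation rather than anything Smith spelled out on stage, so treat what follows as one reading of the claim. Dynamic power is roughly proportional to switched capacitance times supply voltage squared times clock frequency:

    P_dynamic ∝ C · V² · f

Replicating hardware Sigma times multiplies the capacitance by Sigma while leaving voltage and frequency alone, so power grows by Sigma. Raising the clock by Sigma, however, forces the supply voltage up by roughly Sigma too, so power grows by Sigma × Sigma² = Sigma cubed.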

The Memory Wall demands not just bigger caches but caches whose miss-rate can be cut in half, and how much bigger the cache has to grow to do that depends on the type of data being fetched and stored. The more complex the access pattern, the greater the growth required. For example, if the data is feeding a dense matrix-matrix multiply, the cache needs to be four times bigger to halve the miss-rate; if it is feeding a Fast Fourier Transform, it has to grow to the square of its original size. So the issues here lie not only in increasing cache size, but also in increasing the bandwidth and reducing the latency of the channel serving the cache.
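
Those two figures are consistent with the standard miss-rate models for the two kernels, which is presumably the reasoning behind them; the models, not Smith's slides, are what is sketched here. For blocked dense matrix-matrix multiply the miss-rate falls roughly as one over the square root of the cache capacity S, while for an FFT it falls roughly as one over the logarithm of S:

    matrix multiply:  miss-rate ∝ 1/√S      →  halving it needs S → 4S
    FFT:              miss-rate ∝ 1/log S   →  halving it needs S → S²

The FFT case is harsher because extra capacity buys additional data reuse only logarithmically.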

HPC technologies have, over the years, developed solutions to these problems. But they have also suffered from being caught in something of a self-serving spiral. As Smith put it: "HPC systems have been the ones that run HPC applications, while HPC applications are the ones that run on HPC systems."

So it might have remained, had it not been for the spread of dual-core, and now multicore, processors across the board. The same fundamental techniques of parallel processing now apply as much to mainstream business applications as to the most complex weather forecasting systems.
