
Facing up to parallelism

Multicore means today's HPC is tomorrow's general purpose computing

It is, perhaps, one of those forgotten facts that computing is still a relatively young technology. The point is made all the more poignant by the realisation that many of the people driving the High Performance Computing (HPC) business, like Burton Smith, Microsoft's technical fellow in charge of advanced strategies and policies for the company, have not only been round the track several times, but are still very much at the bleeding edge of the technology.

Smith's track record includes many years as chief scientist at Cray, but his views now are hardly stuck in the past, for he believes that the parallel processing technologies that have been developed round HPC are where the mainstream of computing technology now has to head.

"We are now at the point where we are breaking the Von Neumann Assumption that there is only one program counter that allows the proper ordering and scheduling of variables," he said. "Parallel programming makes this hazardous, but we are also now at the point where serial programs are becoming slow programs."

Driving this is the arrival of multicore processor chips in the computing mainstream. The only way to get more performance out of a single-threaded processor is to increase its clock speed, and the only way to do that is to accept higher power consumption and all the costs associated with it. Multicore chips offer a different, inherently parallel, route to boosting performance, and performance has always been the chief characteristic of HPC systems. So the lessons learned there can now start to be applied to general purpose computing.
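
To make the contrast concrete, here is a minimal sketch in C with POSIX threads of the kind of restructuring multicore asks for: a summation carved up across a handful of worker threads rather than run on a single core. The thread count, array size and helper names are illustrative assumptions of ours, not anything from Smith's talk.

    /* Minimal sketch: splitting a summation across worker threads.
       Thread count and array size are arbitrary illustrative choices. */
    #include <pthread.h>
    #include <stddef.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N (1 << 22)

    static double data[N];

    struct slice { size_t begin, end; double partial; };

    static void *sum_slice(void *arg)
    {
        struct slice *s = arg;
        double acc = 0.0;
        for (size_t i = s->begin; i < s->end; i++)
            acc += data[i];
        s->partial = acc;            /* each thread owns its own result slot */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        struct slice slices[NTHREADS];

        for (size_t i = 0; i < N; i++)
            data[i] = (double)i;

        size_t chunk = N / NTHREADS;
        for (int t = 0; t < NTHREADS; t++) {
            slices[t].begin = t * chunk;
            slices[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
            pthread_create(&tid[t], NULL, sum_slice, &slices[t]);
        }

        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += slices[t].partial;   /* the one remaining serial step */
        }

        printf("sum = %.0f\n", total);
        return 0;
    }

The point is not the arithmetic but the shape of the program: the work has to be divided into independent pieces, and the only serial step left is combining the partial results.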

"Computing must be reinvented, but many of those who invented computing are still alive. We did it once and we can do it again," he said.

"Reinvention" is, however, a potentially scary word, and Smith is aware of the dangers. This is particularly the case where there is such a long-standing installed base of applications, process, and operating methods as found in the mainstream business computing arena.

"Reinvention" could make all of that obsolete almost over night. It is not a route that he favours, however. "One option with the move to parallelisation is to simply wipe the slate clean and start again with something new," he said. "This is what I call the Apple approach, where a great new technology is introduced with not much thought given to the pain it might cause users of an earlier technology. But we have to take existing users with us."

Smith used his keynote presentation at the recent International Supercomputing Conference in Dresden to take a look at what is happening to computing as a whole. The fundamental issue, he suggested, is that uniprocessor performance is levelling off: instruction-level parallelism, power consumption and cache limitations are all "walls" that are now being hit. The fact that we now have multicore processors doesn't change this if the architecture hasn't changed; it just means they become difficult to program.

The Instruction Level Wall comes from the limits of the uniprocessor instruction architecture, which are now being reached. Issues such as control-dependent computation and data-dependent memory addressing restrict the concurrency available in a program, and collectively they limit such architectures to a few instructions per clock cycle.
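
A hedged illustration of those two dependences, in C (our example, not Smith's): the first loop's loads are independent of one another, so a wide core can issue several per cycle; the second chases pointers, so the address of each load depends on the value returned by the previous one, and the chain cannot be overlapped however wide the core is.

    /* Illustrative sketch of the two dependences named above. */
    #include <stddef.h>

    struct node { struct node *next; long value; };

    /* Independent iterations: plenty of instruction-level parallelism. */
    long sum_array(const long *a, size_t n)
    {
        long acc = 0;
        for (size_t i = 0; i < n; i++)
            acc += a[i];             /* loads are independent of each other */
        return acc;
    }

    /* Data-dependent memory addressing: each load address comes from the
       previous load, so the dependent chain caps throughput. */
    long sum_list(const struct node *p)
    {
        long acc = 0;
        while (p) {
            acc += p->value;
            p = p->next;             /* next address depends on this load */
        }
        return acc;
    }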

The Power Wall is now coming into play more significantly. As an example, he noted that scaling the amount of hardware by a factor sigma scales the power by sigma as well, but scaling the clock frequency by sigma is worse, for it scales the dynamic power by sigma cubed.
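
One way to reconstruct that arithmetic, assuming the standard CMOS dynamic-power relation rather than anything stated explicitly in the keynote, is sketched in LaTeX below: replicating hardware scales the switched capacitance linearly, while raising the clock also drags the supply voltage up roughly in proportion, which is where the cube comes from.

    % assumed CMOS dynamic-power model (our reconstruction, not from the talk)
    P_{\mathrm{dyn}} = \alpha\, C\, V^{2} f
    % more hardware: switched capacitance grows linearly
    C \to \sigma C \quad\Rightarrow\quad P_{\mathrm{dyn}} \to \sigma\, P_{\mathrm{dyn}}
    % faster clock: supply voltage must track frequency, V \propto f
    f \to \sigma f,\; V \to \sigma V \quad\Rightarrow\quad P_{\mathrm{dyn}} \to \sigma^{3} P_{\mathrm{dyn}}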

The Memory Wall demands not just bigger caches but the ability to cut the cache miss rate in half, and how much bigger the cache has to grow depends on the kind of computation being fed from it. The more complex the access pattern, the greater the cache needs to be. If the data is intended for dense matrix-matrix multiply routines, the cache needs to be four times bigger to halve the miss rate; if it is for a Fast Fourier Transform, it has to be the square of the original cache size. So the issue is not only increasing cache size, but also increasing the bandwidth and reducing the latency of the channel serving the cache.
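
The factor of four and the squaring fall out if one assumes the usual miss-rate models for blocked kernels, an assumption on our part rather than something spelled out in the talk: for blocked dense matrix-matrix multiply the miss rate falls as one over the square root of the cache size, while for an FFT it falls only as one over the logarithm of the cache size.

    % assumed miss-rate models for a cache of capacity C (our reconstruction)
    m_{\mathrm{MMM}}(C) \propto \frac{1}{\sqrt{C}}
        \quad\Rightarrow\quad m_{\mathrm{MMM}}(4C) = \tfrac{1}{2}\, m_{\mathrm{MMM}}(C)
    m_{\mathrm{FFT}}(C) \propto \frac{1}{\log C}
        \quad\Rightarrow\quad m_{\mathrm{FFT}}(C^{2}) = \tfrac{1}{2}\, m_{\mathrm{FFT}}(C)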

HPC technologies have, over the years, developed solutions to these problems. But they have also suffered from being caught in something of a self-serving spiral. As Smith put it: "HPC systems have been the ones that run HPC applications, while HPC applications are the ones that run on HPC systems."

So it might have remained, had it not been for the arrival of dual-core, and now multicore, processors across the board. The same fundamental techniques of parallel processing now apply as much to mainstream business applications as to the most complex weather forecasting system.
