Facing up to parallelism

Multicore means today's HPC is tomorrow's general purpose

It is, perhaps, one of those forgotten facts that computing is still a relatively young technology. That is made all the more poignant by the realisation that many of the people driving the High Performance Computing (HPC) business, like Burton Smith, Microsoft's technical fellow in charge of advanced strategies and policies for the company, have not only been round the track several times but are still very much at the bleeding edge of the technology.

Smith's track record includes many years as chief scientist at Cray, but his views now are hardly stuck in the past, for he believes that the parallel processing technologies that have been developed round HPC are where the mainstream of computing technology now has to head.

"We are now at the point where we are breaking the Von Neumann Assumption that there is only one program counter that allows the proper ordering and scheduling of variables," he said. "Parallel programming makes this hazardous, but we are also now at the point where serial programs are becoming slow programs."

Driving this is the arrival of multicore processor chips in the mainstream of computing. The only way to get more performance out of a single-threaded processor is to increase its clock speed, and the only way to do that is through increased power consumption and all the costs that come with it. Multicore chips offer a different, inherently parallel route to boosting performance, and performance has always been the chief characteristic of HPC systems. So the lessons learned there can now start to be applied in general purpose computing.
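By way of a sketch (again, not an example from the talk), this is what the multicore route looks like in C with OpenMP, the idiom HPC codes have long relied on: the clock speed stays put and the gain comes from splitting the loop across cores.

/* sum.c -- build with: cc -O2 -fopenmp sum.c */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long n = 10000000;
    double *a = malloc(n * sizeof *a);
    if (a == NULL) return 1;
    for (long i = 0; i < n; i++)
        a[i] = 1.0;

    double sum = 0.0;

    /* Each core sums its own chunk of the array and the partial
       sums are combined; on a four-core chip this runs close to
       four times faster with no change in clock frequency. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += a[i];

    printf("sum = %.0f\n", sum);    /* prints 10000000 */
    free(a);
    return 0;
}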

"Computing must be reinvented, but many of those who invented computing are still alive. We did it once and we can do it again," he said.

"Reinvention" is, however, a potentially scary word, and Smith is aware of the dangers. This is particularly the case where there is such a long-standing installed base of applications, process, and operating methods as found in the mainstream business computing arena.

"Reinvention" could make all of that obsolete almost over night. It is not a route that he favours, however. "One option with the move to parallelisation is to simply wipe the slate clean and start again with something new," he said. "This is what I call the Apple approach, where a great new technology is introduced with not much thought given to the pain it might cause users of an earlier technology. But we have to take existing users with us."

Smith used his keynote presentation at the recent International Supercomputing Conference in Dresden to take a look at what is happening to computing as a whole. The fundamental problem, he suggested, is that uniprocessor performance is levelling off: instruction-level parallelism, power consumption and cache capacity are all "walls" now being hit. The arrival of multicore processors does not change this if the underlying architecture stays the same; it just makes the chips difficult to program.

The Instruction Level Wall comes from the limits of the uniprocessor instruction architecture, which are now being reached. Issues such as control-dependent computation and data-dependent memory addressing restrict the concurrency that can be found in a serial instruction stream, and collectively they limit such architectures to a few instructions per clock cycle.
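Data-dependent memory addressing is the easiest of these to picture. In the C sketch below (illustrative, not from the talk), the pointer-chasing loop cannot issue its next load until the previous one has returned, so extra issue width is wasted, while the array scan's addresses are all known up front and a wide core can keep many loads in flight:

/* ilp.c -- why some loops defeat instruction-level parallelism */
#include <stdio.h>

struct node { struct node *next; long value; };

/* Data-dependent addressing: the address of each load is the
   result of the previous load, forcing one-at-a-time issue. */
static long chase(const struct node *p)
{
    long sum = 0;
    for (; p != NULL; p = p->next)
        sum += p->value;
    return sum;
}

/* Independent addressing: every a[i] is computable in advance,
   so loads can overlap and several instructions issue per cycle. */
static long scan(const long *a, long n)
{
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

int main(void)
{
    struct node nodes[4];
    long a[4] = {1, 2, 3, 4};
    for (int i = 0; i < 4; i++) {
        nodes[i].value = i + 1;
        nodes[i].next = (i < 3) ? &nodes[i + 1] : NULL;
    }
    printf("%ld %ld\n", chase(nodes), scan(a, 4));
    return 0;
}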

The Power Wall is now coming into play more significantly. As an example, he noted that it is possible to scale the amount of hardware by a factor sigma, with power scaling by the same sigma. Scaling the clock frequency by sigma is worse, for it scales the dynamic power by sigma cubed.
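The cube comes out of the standard CMOS dynamic power relation (a textbook derivation, assumed here rather than taken from the talk): switching power is roughly proportional to the capacitance being switched, the square of the supply voltage, and the clock frequency, and reaching a higher frequency requires the supply voltage to rise roughly in proportion.

\[ P_{\mathrm{dyn}} \approx C V^{2} f \]

Replicating the hardware \(\sigma\)-fold multiplies the total switched capacitance, so \(P \to \sigma P\). Raising the clock means \(f \to \sigma f\) and \(V \to \sigma V\), so

\[ P \to C\,(\sigma V)^{2}\,(\sigma f) = \sigma^{3}\,C V^{2} f . \]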

The Memory Wall demands not just a bigger cache, but a cache big enough to cut the miss rate in half, and how much bigger depends on the access pattern of the workload: the more complex the pattern, the greater the cache needs to be. For dense matrix-matrix multiplication, halving the miss rate takes a cache four times bigger; for a Fast Fourier Transform, it takes the square of the original cache size. So the issue is not only increasing cache size, but also increasing the bandwidth and reducing the latency of the channel serving the cache.
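Those two figures follow from standard cache-behaviour models (an assumption about the analysis behind the numbers, not spelled out in the talk): the miss rate of a blocked matrix multiply falls with the square root of the cache size \(Z\), while an FFT's falls only with its logarithm.

\[ m_{\mathrm{matmul}}(Z) \propto \frac{1}{\sqrt{Z}} \;\Rightarrow\; m(4Z) = \tfrac{1}{2}\,m(Z) \]

\[ m_{\mathrm{FFT}}(Z) \propto \frac{1}{\log Z} \;\Rightarrow\; m\!\left(Z^{2}\right) = \tfrac{1}{2}\,m(Z) \]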

HPC technologies have, over the years, developed solutions to these problems. But they have also suffered from being caught in something of a self-serving spiral. As Smith put it: "HPC systems have been the ones that run HPC applications, while HPC applications are the ones that run on HPC systems."

So it might have remained, had it not been for the arrival of dual-core, and now multicore, processors across the board. The same fundamental techniques of parallel processing now apply as much to mainstream business applications as to the most complex weather forecasting system.
