Original URL: https://www.theregister.com/2007/07/02/smith_parallel_coming/

Facing up to parallelism

Multicore means today's HPC is tomorrow's general purpose

By Martin Banks

Posted in Software, 2nd July 2007 15:38 GMT

It is, perhaps, one of those forgotten facts that computing is still a relatively young technology. That is made all the more poignant by the realisation that many of the people driving the High Performance Computing (HPC) business, like Burton Smith, Microsoft's Technical Fellow in charge of advanced strategies and policies for the company, have not only been round the track several times but are still very much at the bleeding edge of the technology.

Smith's track record includes many years as chief scientist at Cray, but his views now are hardly stuck in the past, for he believes that the parallel processing technologies that have been developed round HPC are where the mainstream of computing technology now has to head.

"We are now at the point where we are breaking the Von Neumann Assumption that there is only one program counter that allows the proper ordering and scheduling of variables," he said. "Parallel programming makes this hazardous, but we are also now at the point where serial programs are becoming slow programs."

Driving this is the arrival of multicore processor chips in the mainstream of computing. The only way to get more performance out of a single-threaded processor is to increase its clock speed, and the only way to do that is through increased power consumption and all the costs that come with it. Multicore chips offer a different, inherently parallel route to higher performance, and performance has always been the chief characteristic of HPC systems. So the lessons learned there can now start to be applied in general purpose computing.

"Computing must be reinvented, but many of those who invented computing are still alive. We did it once and we can do it again," he said.

"Reinvention" is, however, a potentially scary word, and Smith is aware of the dangers. This is particularly the case where there is such a long-standing installed base of applications, process, and operating methods as found in the mainstream business computing arena.

"Reinvention" could make all of that obsolete almost over night. It is not a route that he favours, however. "One option with the move to parallelisation is to simply wipe the slate clean and start again with something new," he said. "This is what I call the Apple approach, where a great new technology is introduced with not much thought given to the pain it might cause users of an earlier technology. But we have to take existing users with us."

Smith used his keynote presentation at the recent International Supercomputing Conference in Dresden to take a look at what is happening to computing as a whole. The fundamental issue, he suggested, is that uniprocessor performance is levelling off: instruction-level parallelism, power consumption and cache limitations are all "walls" that are now being hit. The fact that we now have multicore processors does not change this if the architecture has not changed; it just means they become difficult to program.

The Instruction Level Wall is built from the limits of the uniprocessor instruction architecture, which are now being reached. Issues such as control-dependent computation and data-dependent memory addressing restrict the level of concurrency possible in a system, and collectively they limit such architectures to a few instructions per clock cycle.
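Data-dependent memory addressing is the clearest case. Purely by way of illustration, and in Haskell rather than any language Smith named, here is a pointer-chasing loop in which each load address is the result of the previous load, so the loads form a serial chain that no width of instruction issue can overlap (the lookup table is invented for the example):

import qualified Data.Vector.Unboxed as V

-- Each lookup's address is the result of the previous lookup, so the loads
-- form a serial dependence chain that instruction-level parallelism cannot
-- overlap, however wide the processor's issue logic is.
chase :: V.Vector Int -> Int -> Int -> Int
chase table start steps = go start steps
  where
    go i 0 = i
    go i k = go (table V.! i) (k - 1)

main :: IO ()
main = do
  -- A small permutation table; real pointer-chasing workloads look the same.
  let table = V.fromList [3, 0, 4, 1, 2]
  print (chase table 0 10)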

The Power Wall is now coming into play more significantly. As an example, he noted that it is possible to scale the amount of hardware by a factor sigma, but the power will scale by sigma as well. Scaling the clock frequency by sigma is worse, for it scales the dynamic power by sigma cubed.
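The cube comes from the standard CMOS dynamic power relation, which the report does not spell out, so take this as a reconstruction rather than Smith's own slide:

P_{\mathrm{dyn}} \approx C V^{2} f, \qquad V \propto f \;\Rightarrow\; P_{\mathrm{dyn}} \propto f^{3}

Supply voltage has to rise roughly in step with clock frequency, so scaling the clock by sigma drives dynamic power up by roughly sigma cubed, while simply replicating hardware at a fixed clock scales power only linearly.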

The Memory Wall is not just a matter of bigger caches, but of how much bigger a cache has to grow to cut the miss rate in half, and that depends on the kind of data being fetched and stored: the more complex the access pattern, the greater the cache needs to be. For dense matrix-matrix multiplication, halving the miss rate means a cache four times bigger; for a Fast Fourier Transform, the cache has to be the square of the original. So the issue is not only increasing cache size, but also increasing the bandwidth and reducing the latency of the channel serving the cache.
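Those figures line up with the standard cache-miss models for the two kernels; the models themselves are not in the report, so the following is a reconstruction. For dense matrix-matrix multiply the miss rate falls as the inverse square root of cache capacity C, while for an FFT it falls only as the inverse of its logarithm:

m_{\mathrm{MM}} \propto \frac{1}{\sqrt{C}}, \qquad m_{\mathrm{FFT}} \propto \frac{1}{\log C}

Halving the first therefore needs a cache four times larger; halving the second needs the capacity squared (in suitable units).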

HPC technologies have, over the years, developed solutions to these problems. But they have also suffered from being caught in something of a self-serving spiral. As Smith put it: "HPC systems have been the ones that run HPC applications, while HPC applications are the ones that run on HPC systems."

So it might have remained, had it not been for the arrival of dual-core, and now multicore, processors across the board. The same fundamental techniques of parallel processing now apply as much to mainstream business applications as to the most complex weather forecasting system.

There is, of course, a great deal known about parallel programming, and there are already two promising programming approaches that Smith is pursuing: functional programming and atomic memory transactions. Neither is a complete answer in itself. Functional programming, for example, does not allow mutable state, while atomic memory transactions implement dependence awkwardly. The use of such technologies in mainstream computing is new ground, and he acknowledged that atomic memory transaction technology already has critics claiming it is doomed to be too slow. He also pointed out that this has yet to be shown to be a permanent condition.

He did highlight two functional programming languages, Sisal and NESL, for specific mention, however. "Critics say that functional languages are inefficient, but these two are excellent counter-examples. On Cray systems they could run as fast as Fortran."
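Neither Sisal nor NESL is reproduced here, but the flavour of the functional style can be sketched in Haskell (a stand-in choice, not one of Smith's examples): a dot product written with no mutable state at all, which is precisely the property that leaves a compiler free to evaluate the independent multiplications in any order, or in parallel:

-- Purely functional: no variable is ever updated in place, so the
-- element-wise products are independent of one another and may be
-- computed in any order the compiler or runtime chooses.
dot :: [Double] -> [Double] -> Double
dot xs ys = sum (zipWith (*) xs ys)

main :: IO ()
main = print (dot [1, 2, 3] [4, 5, 6])   -- prints 32.0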

One of the issues that confronts programmers moving into the parallel processing world is the role of transactions in the management of invariants.

"Invariants are a program's conservation laws," he observed. "And there are rules of data structure, or state, integrity that need to be observed." These rules were developed in the paper Verifying properties of parallel programs: An axiomatic approach by Susan Owicki and David Gries, which sets out the following law:

If statements p and q preserve the invariant I and they do not "interfere", their parallel composition {p || q} also preserves I.

Transactions then play their part as set out by Leslie Lamport and Fred Schneider in their paper The Hoare Logic of CSP, And All That:

If p and q are performed atomically, i.e. as transactions, then they will not interfere.

As Smith observed: "Although operations seldom commute with respect to state, transactions give us commutativity with respect to the invariant, and it would be nice if the invariants were available to the compiler if programmers can provide them readily."
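To make the two laws concrete, here is a minimal sketch using Haskell's software transactional memory, an existing embodiment of atomic memory transactions rather than anything Smith or Microsoft has announced. The invariant I is that the two balances always sum to 100; each transfer preserves it, and because each transfer runs as a transaction the two cannot interfere, so their parallel composition preserves I whatever the interleaving:

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.STM

-- The invariant I: the two balances always sum to 100.
-- Each transfer preserves I, and because it runs atomically it cannot
-- interfere with the other, so running both in parallel preserves I too.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 60
  b <- newTVarIO 40
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  _ <- forkIO (atomically (transfer a b 10) >> putMVar done1 ())
  _ <- forkIO (atomically (transfer b a 25) >> putMVar done2 ())
  takeMVar done1
  takeMVar done2
  total <- atomically ((+) <$> readTVar a <*> readTVar b)
  print total   -- always 100, whatever the interleaving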

For Microsoft programmers he pointed to the new C# and Visual Basic enhancements found in the LINQ (Language Integrated Query) project, a set of extensions to the .NET Framework that adds language-integrated query, set, and transform operations and can operate on data in memory or in an external database.
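LINQ itself belongs to C# and Visual Basic; purely as an analogy, and sticking with Haskell, the same query-style idea over in-memory data looks like the comprehension below (the Order type and its fields are invented for the example):

-- A query-style comprehension over in-memory data, analogous in spirit to a
-- language-integrated query: filter, transform and collect in one expression.
data Order = Order { customer :: String, total :: Double }

bigSpenders :: [Order] -> [String]
bigSpenders orders = [ customer o | o <- orders, total o > 100 ]

main :: IO ()
main = print (bigSpenders [Order "Ada" 250, Order "Bob" 50])   -- ["Ada"]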

His view is that, for the immediate future, all the major styles of parallel programming should be supported. These include both functional and transactional styles, data parallel and task parallel, message passing and shared memory, declarative and imperative, and implicit and explicit styles.

"To cover all these will require more than one language, but then we use multiple languages today as it is. What is important is that the parallelism and locality are exposed to the compiler, so that the compiler can adapt them for the target system."

Here, he suggested that the ability to work with heterogeneous processors in any infrastructure would be an important capability. "We will need independence from the idiosyncrasies of the machine."

He also pointed delegates to the language interoperability available with .NET as a help towards automatic parallelisation, which he said many have already suggested is a demonstrated failure. "What failed is parallelism discovery, particularly in-the-large. There is now a need to package parallelism, which means not worrying about how many cores are available at any one time or about whether you need to recompile the application for a different number of cores."
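What packaged parallelism might look like can be sketched, again in Haskell rather than .NET, so this is an analogy and not Smith's proposal: the program states only that the list elements may be evaluated in parallel, and the runtime decides how many cores to use when the binary is launched, with no recompilation:

import Control.Parallel.Strategies (parMap, rdeepseq)

-- The parallelism is "packaged": the code declares that the elements are
-- independent, and the runtime maps that onto however many cores it is
-- given at launch time (e.g. a binary built with -threaded and run with
-- +RTS -N2 or -N8), with no recompilation for different core counts.
expensive :: Int -> Int
expensive n = sum [1 .. n]

main :: IO ()
main = print (sum (parMap rdeepseq expensive [100000, 200000, 300000]))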

When it comes to debugging parallel applications, Smith suggested that setting conditional data breakpoints and ad-hoc data perusal are likely to be two important techniques for developers to learn. The former is a technique which stops the application if an invariant fails to be true, while the latter is a form of data mining application. Application tuning will also be important, particularly in identifying where there is insufficient parallelism available.
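The first of those can be mimicked in ordinary code, which may make the intent clearer; the checkInvariant helper below is hypothetical rather than a feature of any particular debugger, and it simply stops the program the moment the stated invariant ceases to hold:

-- Hypothetical helper: abort with a message as soon as an invariant fails,
-- roughly what a conditional data breakpoint does inside a debugger.
checkInvariant :: String -> Bool -> a -> a
checkInvariant _    True  x = x
checkInvariant name False _ = error ("invariant violated: " ++ name)

main :: IO ()
main = do
  let balanceA = 75 :: Int
      balanceB = 25
      total    = balanceA + balanceB
  print (checkInvariant "balances sum to 100" (total == 100) total)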

As a call-to-arms in facing up to what he sees as inevitable, he told delegates: "We have to rethink the basics of computing, but thanks to HPC we have a good starting point. It does mean, however, that many applications will have to be re-modelled and re-engineered from the strategy downwards." ®