Facing up to parallelism

Multicore means today's HPC is tomorrow's general purpose

There is, of course, a great deal known about parallel programming, and there are already two promising programming approaches that Smith is pursuing. One is functional programming and the other is atomic memory transactions. Neither is a complete answer in itself, of course. Functional programming, for example, does not allow mutable state, while atomic memory transactions handle dependences awkwardly. The use of such technologies in mainstream computing is new ground, and he acknowledged that atomic memory transactions already have critics claiming the technology is doomed to be too slow, though he pointed out that this has yet to be shown to be a permanent condition.
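
By way of illustration (ours, not Smith's), the appeal of the functional style is easy to see in a language like Haskell: a pure function has no mutable state for parallel threads to fight over, so independent evaluations cannot interfere. A minimal sketch:

-- A pure function: its result depends only on its arguments, so
-- there is no shared mutable state for parallel evaluations to
-- corrupt, and calls on separate cores cannot interfere.
sumOfSquares :: [Double] -> Double
sumOfSquares xs = sum [ x * x | x <- xs ]

main :: IO ()
main = print (sumOfSquares [1 .. 1000])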

He did, however, single out two functional programming languages, Sisal and NESL, for specific mention. "Critics say that functional languages are inefficient, but these two are excellent counter-examples. On Cray systems they could run as fast as Fortran."

One of the issues that confronts programmers moving into the parallel processing world is the role of transactions in the management of invariants.

"Invariants are a program's conservation laws," he observed. "And there are rules of data structure, or state, integrity that need to be observed." These rules were developed in the paper Verifying properties of parallel programs: An axiomatic approach by Susan Owicki and David Gries, which sets out the following law:

If statements p and q preserve the invariant I and they do not "interfere", their parallel composition {p || q} also preserves I.
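
To make the interference condition concrete, here is a hypothetical Haskell sketch (ours, not from the talk; run with GHC's threaded runtime). The invariant I is that the two counters stay equal. Each call to bump preserves I in isolation, but two threads running it concurrently interfere, and I can be broken:

import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (replicateM_)
import Data.IORef

-- Invariant I: the two counters are always equal. Each call to
-- bump preserves I when run alone, but two threads running it
-- concurrently interfere: their reads and writes interleave,
-- updates are lost, and I can be broken.
bump :: IORef Int -> IORef Int -> IO ()
bump x y = do
  vx <- readIORef x
  writeIORef x (vx + 1)
  vy <- readIORef y
  writeIORef y (vy + 1)

main :: IO ()
main = do
  x <- newIORef 0
  y <- newIORef 0
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  _ <- forkIO (replicateM_ 100000 (bump x y) >> putMVar done1 ())
  _ <- forkIO (replicateM_ 100000 (bump x y) >> putMVar done2 ())
  takeMVar done1
  takeMVar done2
  vx <- readIORef x
  vy <- readIORef y
  putStrLn ("x = " ++ show vx ++ ", y = " ++ show vy)  -- may differ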

Transactions then play their part as set out by Leslie Lamport and Fred Schneider in their paper The Hoare Logic of CSP, And All That:

If p and q are performed atomically, i.e. as transactions, then they will not interfere.
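
In a language with atomic memory transactions the fix is mechanical. A hedged sketch using Haskell's stm library (one well-known implementation of the idea, not the one Smith demonstrated): wrapping each statement in atomically turns it into a transaction, so the two threads cannot interleave inside it and the invariant survives their parallel composition:

import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.STM
import Control.Monad (replicateM_)

-- The same update as before, but as a transaction: the whole
-- read-modify-write of both variables happens atomically, so the
-- invariant x == y is preserved by the parallel composition.
bump :: TVar Int -> TVar Int -> STM ()
bump x y = do
  modifyTVar' x (+ 1)
  modifyTVar' y (+ 1)

main :: IO ()
main = do
  x <- newTVarIO 0
  y <- newTVarIO 0
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  _ <- forkIO (replicateM_ 100000 (atomically (bump x y)) >> putMVar done1 ())
  _ <- forkIO (replicateM_ 100000 (atomically (bump x y)) >> putMVar done2 ())
  takeMVar done1
  takeMVar done2
  (vx, vy) <- atomically ((,) <$> readTVar x <*> readTVar y)
  putStrLn ("x = " ++ show vx ++ ", y = " ++ show vy)  -- always equal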

As Smith observed: "Although operations seldom commute with respect to state, transactions give us commutativity with respect to the invariant, and it would be nice if the invariants were available to the compiler if programmers can provide them readily."
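
A hypothetical illustration of that distinction (again ours, in Haskell): the two transactions below do not commute with respect to state, since running them in either order leaves different balances, yet both orders preserve the invariant that the balance never goes negative.

import Control.Concurrent.STM

-- Invariant I: the balance is never negative. deposit and applyFee
-- do not commute with respect to state, because the fee is capped
-- by the current balance, but either order preserves I.
deposit :: TVar Int -> Int -> STM ()
deposit acct n = modifyTVar' acct (+ n)

applyFee :: TVar Int -> Int -> STM ()
applyFee acct fee = do
  b <- readTVar acct
  writeTVar acct (max 0 (b - fee))

main :: IO ()
main = do
  acctA <- newTVarIO 5
  atomically (applyFee acctA 10 >> deposit acctA 10)  -- 5 -> 0 -> 10
  a <- readTVarIO acctA
  acctB <- newTVarIO 5
  atomically (deposit acctB 10 >> applyFee acctB 10)  -- 5 -> 15 -> 5
  b <- readTVarIO acctB
  -- Different final states (10 vs 5), but I holds in both.
  putStrLn ("order A: " ++ show a ++ ", order B: " ++ show b)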

For Microsoft programmers he pointed to the new C# and Visual Basic enhancements to be found in the LINQ (Language Integrated Query) project, a set of extensions to the .NET Framework that add such capabilities as language-integrated query, set, and transform operations, and can operate on data in memory or in an external database.
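
LINQ's query syntax has close relatives in functional languages. As a rough analogue rather than LINQ itself (the data type and query here are hypothetical), a Haskell list comprehension filters and transforms an in-memory collection much as a language-integrated query would:

-- A hypothetical in-memory "table" and a declarative query over it,
-- loosely analogous to a LINQ where/select over a collection.
data Order = Order { customer :: String, total :: Double }

bigSpenders :: [Order] -> [String]
bigSpenders orders = [ customer o | o <- orders, total o > 100 ]

main :: IO ()
main = mapM_ putStrLn
  (bigSpenders [Order "alice" 250, Order "bob" 40, Order "carol" 120])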

His view is that, for the immediate future, all the major styles of parallel programming should be supported: functional and transactional, data parallel and task parallel, message passing and shared memory, declarative and imperative, implicit and explicit.

"To cover all these will require more than one language, but then we use multiple languages today as it is. What is important is that the parallelism and locality are exposed to the compiler, so that the compiler can adapt them for the target system."

Here, he suggested that the ability to work with heterogeneous processors in any infrastructure would be an important capability. "We will need independence from the idiosyncrasies of the machine."

He also pointed delegates to the language interoperability available with .NET as a help here. This would assist automatic parallelisation, which he said many have already dismissed as a demonstrated failure. "What failed is parallelism discovery, particularly in-the-large. There is now a need to package parallelism, which means not worrying about how many cores are available at any one time or about whether you need to recompile the application for a different number of cores."
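
One concrete reading of "packaged" parallelism (an illustration in Haskell, not Smith's own example): the code below exposes the parallel structure with parMap and says nothing about core counts. Built with -threaded, the same binary spreads the work over however many cores it is given at launch, with no recompilation:

import Control.Parallel.Strategies (parMap, rdeepseq)

-- The parallelism is declared, not managed: parMap exposes that the
-- elements may be evaluated independently, and the runtime maps the
-- work onto however many cores it is given (e.g. ./demo +RTS -N4).
expensive :: Int -> Int
expensive n = length (filter odd [1 .. n * 1000])

main :: IO ()
main = print (sum (parMap rdeepseq expensive [1 .. 200]))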

When it comes to debugging parallel applications, Smith suggested that setting conditional data breakpoints and ad-hoc data perusal are likely to be two important techniques for developers to learn. The former stops the application the moment an invariant fails to hold; the latter is essentially a form of data mining. Application tuning will also be important, particularly in identifying where there is insufficient parallelism available.
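
A conditional data breakpoint of that kind can be approximated in ordinary code. A hypothetical sketch, with an assumed helper we have called withInvariant: each update is followed by a check, and the program stops (here, by throwing) at the first update that leaves the invariant false:

import Control.Monad (unless)
import Data.IORef

-- Hypothetical helper: run an update, then halt the program the
-- moment the stated invariant no longer holds - a poor man's
-- conditional data breakpoint.
withInvariant :: String -> IO Bool -> IO () -> IO ()
withInvariant name inv act = do
  act
  ok <- inv
  unless ok (error ("invariant violated: " ++ name))

main :: IO ()
main = do
  x <- newIORef (0 :: Int)
  let evenX = even <$> readIORef x
  withInvariant "x is even" evenX (modifyIORef x (+ 2))  -- passes
  withInvariant "x is even" evenX (modifyIORef x (+ 1))  -- stops here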

As a call-to-arms in facing up to what he sees as inevitable, he told delegates: "We have to rethink the basics of computing, but thanks to HPC we have a good starting point. It does mean, however, that many applications will have to be re-modelled and re-engineered from the strategy downwards." ®
