
A postcard from Intel in Lisbon

Intel says 'parallel or perish'


Photo: James Reinders, Intel Evangelist

James Reinders says that, soon, a programmer who doesn't "think parallel" won't be a programmer.

Here are Reinders' rules for parallel programming, in my words and as I see them:

  1. Think parallel first. Don't even contemplate bolting on parallel processing capabilities afterwards.
  2. Code to express the parallel nature of the problem rather than writing thread management code by hand – the difference is like that between writing in C# or Java and writing in Assembler.
  3. Don't tie threads to particular processors. You don't want to write programs that only run properly on a particular number of cores.
  4. Plan to scale through increased workload. Amdahl's Law limits the performance gain you can get from applying parallel processing to a fixed-size workload, because there is usually some significant serial part of the process which can't easily be parallelised; but Gustafson observed that if you increase the workload, the serial part often stays roughly fixed, so parallel processing lets you get through a much bigger workload in similar time (there's a worked sketch of the difference just after this list).
  5. Only create programs which can arbitrarily add tasks to the workload, so that if more processors become available, the workload can take advantage of them.
  6. Only write programs that can also run serially – mainly because, even assuming all new PCs will be multicore, a serial run is much easier to debug. For the time being your programs will also be expected to run OK on legacy single-processor machines; remember, though, that a program optimised for multiprocessors will usually run more slowly on a uniprocessor, so be aware of the trade-off and don't rush headlong into coding for multicore architectures.
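
To put rough numbers on rule 4 – this is my own back-of-the-envelope sketch, not anything Reinders showed, using the standard textbook formulas for the two laws – a job that is 10 per cent serial tops out quickly under Amdahl's fixed-workload view but keeps scaling under Gustafson's grow-the-workload view:

    // My illustration of rule 4. Amdahl's speedup for a fixed workload is
    // 1 / (s + (1 - s) / n); Gustafson's scaled speedup is s + n * (1 - s),
    // where s is the serial fraction of the work and n the processor count.
    #include <cstdio>
    #include <initializer_list>

    double amdahl(double s, int n)    { return 1.0 / (s + (1.0 - s) / n); }
    double gustafson(double s, int n) { return s + n * (1.0 - s); }

    int main() {
        const double s = 0.1;   // ten per cent of the job is stubbornly serial
        for (int n : {2, 8, 64}) {
            std::printf("%2d processors: Amdahl %.1fx, Gustafson %.1fx\n",
                        n, amdahl(s, n), gustafson(s, n));
        }
        return 0;
    }

With those numbers the fixed workload can never beat a 10x speedup however many processors you throw at it, while the scaled workload reaches roughly 57x on 64 processors – which is Gustafson's point.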

And some of the tools which Intel thinks will help you follow these rules are:

  • The OpenMP standard, which bolts efficient parallelising onto C++ and Fortran compilers using compiler hints – what Sutter calls "industrial strength duct tape", but it works (there's a minimal example of such a hint just after this list).
  • Threading Building Blocks – C++ algorithms for scalable threading (Reinders seems very confident in this tool).
  • Thread Profiler – which highlights potential performance bottlenecks.
  • Thread Checker – which detects latent race conditions and potential deadlocks.
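
For a flavour of what "compiler hints" means in practice, here is a minimal sketch of my own (not taken from the conference material): a single OpenMP pragma marks a loop as parallel, the runtime decides how many threads to use and where they run (rules 2 and 3), and deleting the pragma leaves a correct serial program (rule 6).

    // A minimal OpenMP sketch, mine, for illustration only. The pragma is the
    // only parallel-specific line; build with, for example, g++ -fopenmp.
    #include <cstdio>
    #include <vector>

    int main() {
        const long n = 10000000;
        std::vector<double> x(n, 1.0), y(n, 2.0);
        const double a = 3.0;

        // Hint that the iterations are independent; the runtime picks the
        // thread count and the OS picks the cores.
        #pragma omp parallel for
        for (long i = 0; i < n; ++i) {
            y[i] = a * x[i] + y[i];
        }

        std::printf("y[0] = %f\n", y[0]);
        return 0;
    }

Threading Building Blocks expresses the same idea as a library rather than a language extension – a tbb::parallel_for over a range instead of a pragma – which is why it needs no special compiler support.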

But now a note of caution. Parallel processing has always been a holy grail of computing (although Intel came to it late, perhaps). Many of the issues talked about at this conference I've met before – on multiprocessor mainframes (in practice, the most efficient way to achieve parallel processing may still be the mainframe job scheduler).

I've told good programmers to think about the consequences of running on multiprocessors, only to be told that "the compiler will look after it" (in general, it can't). And I've seen the results of programmers forgetting that their code can run on several processors, so that in production things sometimes run in the wrong order. This seldom shows up in testing because, even if several processors are available to the test system, the chances are you don't push through enough data to hit the latent race conditions, which tend to appear when the system is overloaded.
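
A minimal sketch of the kind of latent race I mean (my own example, not one from the conference): two threads bump a shared counter with no synchronisation. On a lightly loaded test box the lost updates are rare enough to miss; under production load they are not.

    // Two threads increment a shared counter without synchronisation.
    // ++counter is a read-modify-write, so concurrent increments get lost;
    // the fix is a std::mutex or a std::atomic<int>.
    #include <cstdio>
    #include <thread>

    int counter = 0;   // shared and unprotected -- the bug

    void bump(int times) {
        for (int i = 0; i < times; ++i) {
            ++counter;
        }
    }

    int main() {
        std::thread a(bump, 1000000);
        std::thread b(bump, 1000000);
        a.join();
        b.join();
        // Should print 2000000; on a multicore machine it usually prints less.
        std::printf("counter = %d\n", counter);
        return 0;
    }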

I've had to deal with the consequences of programmers deciding that they can do locks better than IBM and coding them for themselves (the application I'm thinking of was very fast – for a while, until the consequences of never releasing locks became apparent).
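
The shape of that failure is easy to sketch (this is my reconstruction of the pattern, not the application in question): a hand-rolled lock relies on every code path remembering to release it, while the library equivalent cannot leak.

    // Hand-rolled locking: the early return skips the release, and every
    // later caller spins forever. The RAII version below releases the lock
    // on every exit path automatically.
    #include <atomic>
    #include <mutex>

    std::atomic_flag hand_rolled = ATOMIC_FLAG_INIT;

    bool risky_update(bool nothing_to_do) {
        while (hand_rolled.test_and_set(std::memory_order_acquire)) { /* spin */ }
        if (nothing_to_do) {
            return false;                      // oops: lock never released
        }
        // ... do the update ...
        hand_rolled.clear(std::memory_order_release);
        return true;
    }

    std::mutex m;

    bool safe_update(bool nothing_to_do) {
        std::lock_guard<std::mutex> lock(m);   // released however we leave
        if (nothing_to_do) {
            return false;
        }
        // ... do the update ...
        return true;
    }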

This stuff seems to be hard, so we're going to need very good tools and more training. And, probably, much better adherence to good development process.

Do I think that parallel processing of this sort is the way of the future? Yes, emphatically: if you run on Intel or similar architectures, it's the only way (it seems to me) to scale computer processor power effectively. Although whether we need to scale computer processor power that way, or whether lots of specialised small computers – another kind of parallel processing – will work better, might be another question.

Reinders tried to make the point that parallelism was intuitive. His example was the queue – it's really quite intuitive that if you have a long queue, you just need more people on the desks servicing it. Simple. But this can hide a lot of complexity – if you have more desks and shorter queues checking in at Heathrow, things go faster. But you don't expect to get past check-in and find several people are assigned to one seat.

This is a trivial example, but step back a bit and airlines have gone bust because their booking systems couldn't cope with the essentially parallel activity of selling seats on an aeroplane through travel agents across the country. Planes flying three-quarters full, with spare capacity held back to cover "collisions" over seats – or upgrading overbooked passengers onto the next flight – can get expensive.
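
To connect that back to code – a toy sketch of my own, nothing to do with any real booking system – if each seat is claimed with an atomic compare-and-swap, two agents racing for seat 42 cannot both win: one books it, the other is told to pick again.

    // Each seat is claimed with compare-and-swap: of two concurrent buyers,
    // exactly one flips the flag from free to taken and the other fails.
    #include <array>
    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::array<std::atomic<bool>, 200> seat_taken{};   // false = seat free

    // Returns true if this caller got the seat.
    bool book(int seat) {
        bool expected = false;
        return seat_taken[seat].compare_exchange_strong(expected, true);
    }

    int main() {
        bool a = false, b = false;
        std::thread agent_a([&] { a = book(42); });
        std::thread agent_b([&] { b = book(42); });
        agent_a.join();
        agent_b.join();
        std::printf("agent A: %s, agent B: %s\n",
                    a ? "booked 42" : "lost the race",
                    b ? "booked 42" : "lost the race");
        return 0;
    }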

Do I think that parallelism is intuitive? "Only up to a point, Lord Copper". The consensus among the speakers at the conference was that this would be a revolution in thinking comparable with the OO revolution or structured programming. And (rather like OO) it will probably only become routine once the "old guard" dies off and a new generation of graduates that knows no other way of thinking takes over. ®
