Intel rallies rivals on parallel programming education
Sequential is so over
Intel has enlisted chip rivals to push for making parallel programming a higher priority on computer science courses.
Intel will kick off its campaign at Supercomputing 08 in Austin, Texas, next week, during a Monday session called "There Is No More Sequential Programming. Why Are We Still Teaching It?"
Representatives from AMD, Nvidia, and Sun Microsystems will join Intel, along with individuals from academia and the open source movement, to discuss how the industry can get universities to break their attachment to traditional sequential programming.
The panel will also be used to set up a working group that "will develop and recommend a practical means for creating an undergraduate curriculum with parallelism at its core", Intel said.
Intel said a shift to parallel programming is essential given that "all major manufacturers have moved to a many core architecture and current generation CPU, GPU or ASIC designs cannot be efficiently programmed without knowledge of parallel programming".
No one, of course, disputes the need for strategies to deal with programming multiprocessor chip architectures. There is, however, some controversy over how this should be achieved.
Next Monday's discussion looks set to be interesting. Intel has posted a set of questions submitted by some unidentified participants, and disagreement is already evident.
Some think this is "too hard" a subject to teach, while others believe sequential programming is a prerequisite for parallel programming. Those who cannot make it to Austin can register here for a webinar later in the week. ®
The concept of a global clock with double buffering just doesn't cut it. Global clocks are slow. Why should one part run slow if I can run other parts faster? Then you've got register/cache/memory speed issues. If we adopt your solution we end up running at the speed of the slowest *possible* bottleneck instead of the slowest bottleneck.
On the hardware front I reckon we'll end up with a bunch of non-homogeneous cores with homogeneous instruction sets running on a fast IO interconnect.
On the software front we'll end up with some form of multi-threading/multi-process using either NUMA shared memory or Message Passing. Developers will just have to get used to the fact that programming is hard and that the things you learnt in your Computer Science degree are actually useful.
BTW, the sure sign of a kook is when they say algorithms are dead and then present another algorithm.
The answer is in the functional programming languages; the Lisp strain is about to get its day.
The thing to realise about threads is that a process is wrapped around a thread.
You cannot really get rid of threads; they are the basis of how a program executes.
A process wraps the thread and acts as a shield through which the thread runs. Now, when you add extra threads to a process you create problems; it is that simple. The problems are things like race conditions and non-deterministic results if the architecture changes, and so on. They are quite fundamental problems, and they exist at the design level.
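To make that concrete, here is a rough Python sketch of the classic race being described; the thread and iteration counts are arbitrary, chosen only to make lost updates likely, and how often they actually show up depends on the interpreter and scheduling:

# Two threads do an unsynchronised read-modify-write on a shared counter.
# The increment is not atomic, so updates can be lost and the final total
# can differ from run to run.
import threading

counter = 0

def bump(times):
    global counter
    for _ in range(times):
        counter += 1  # load, add, store: another thread can interleave here

threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # expected 200000; a lower, varying number means lost updates

Run it a few times and the total can wobble, which is exactly the non-determinism being complained about, and nothing in the code tells you it is broken.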
Concurrency via lightweight processes, state machines, functional style and interprocess communication is probably going to be the winner here; it is Erlang and Haskell that should emerge as the next-gen languages. Python has just released a multiprocessing module too, but it will come down to style: you will have to code for concurrency, not expect the compiler or environment to work it out.
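For what it's worth, here is a minimal sketch of that message-passing style using the new multiprocessing module; the worker function and the square-the-number payload are made up purely for illustration:

# A worker process communicates only through queues, so there is no shared
# mutable state to race on; work goes in on one queue, results come back on
# another, and a None sentinel tells the worker to stop.
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    while True:
        msg = inbox.get()
        if msg is None:
            break
        outbox.put(msg * msg)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()

    for n in range(5):
        inbox.put(n)
    inbox.put(None)  # sentinel: no more work

    results = [outbox.get() for _ in range(5)]
    p.join()
    print(results)  # [0, 1, 4, 9, 16]

Nothing is shared between the two processes except the queues, so you code for concurrency explicitly rather than hoping the runtime sorts it out.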
"This approach is ideal for graphical programming and the use of plug-compatible components. Just drag them and drop them, and they connect themselves automatically. This will open up programming to a huge number of people that were heretofore excluded."
This is such an old sales pitch. How many times have we all heard this one, seriously?
Even Java was going to do this... Like, 10 years ago. And now rejigging the way parallelism works is going to do it?
Nobody who spouts such obvious, insane, stupid crap should be trusted.