Boffins automate code optimisation for multi-core CPUs
Don't get too worried, devs, this is only for some dynamic optimisation algos ... for now
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stony Brook University reckon they've found a way to automatically adapt dynamic optimisation code for multi-core CPUs, potentially removing the need for hand-coding of some tricky problems.
As explained by CSAIL and in the paper Deriving Divide-and-Conquer Dynamic Programming Algorithms using Solver-Aided Transformations (PDF), dynamic optimisation, or dynamic programming, is a coding technique that breaks a big problem down into lots of little sub-problems, then stores the result of each sub-problem so it only needs to be solved once. Many of those sub-problems can be solved in parallel, a trick to which multi-core CPUs are very well suited.
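To make that concrete, here's a minimal sketch of the idea (not code from the paper): edit distance between two strings, where each sub-problem is solved once and cached so overlapping sub-problems are never recomputed.

```python
from functools import lru_cache

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic programming: minimum edits to turn a into b."""
    @lru_cache(maxsize=None)          # memoise: each (i, j) solved once
    def d(i: int, j: int) -> int:
        if i == 0:                    # a exhausted: insert the rest of b
            return j
        if j == 0:                    # b exhausted: delete the rest of a
            return i
        if a[i - 1] == b[j - 1]:      # characters match, no edit needed
            return d(i - 1, j - 1)
        return 1 + min(d(i - 1, j),      # delete from a
                       d(i, j - 1),      # insert into a
                       d(i - 1, j - 1))  # substitute
    return d(len(a), len(b))
```

Without the cache the recursion would re-solve the same `(i, j)` sub-problems exponentially many times; with it, each is solved exactly once.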
But the kind of people who design dynamic programming algorithms tend to be domain experts – MIT mentions biologists and economists – who possess some coding skill but probably have not been trained in the nuances of coding for multi-core CPUs. Or they might simply not have the time: MIT says a “hand-optimized, parallel version of a dynamic-programming algorithm is typically 10 times as long as the single-core version, and the individual lines of code are more complex, to boot.”
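A taste of why the parallel version balloons, as a hypothetical sketch (not Bellmania's output): the same edit-distance table can be filled by “wavefront”, because cells on one anti-diagonal depend only on earlier diagonals and so can be computed concurrently. Even this toy version needs explicit table setup, dependency reasoning, and a worker pool.

```python
from concurrent.futures import ThreadPoolExecutor

def edit_distance_parallel(a: str, b: str, workers: int = 4) -> int:
    """Fill the DP table one anti-diagonal at a time, cells in parallel.
    (Threads only illustrate the structure; real speedups on CPython
    would need processes or native code because of the GIL.)"""
    n, m = len(a), len(b)
    tab = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):            # base cases: deletions
        tab[i][0] = i
    for j in range(m + 1):            # base cases: insertions
        tab[0][j] = j

    def fill(cell):
        i, j = cell                   # depends only on earlier diagonals
        if a[i - 1] == b[j - 1]:
            tab[i][j] = tab[i - 1][j - 1]
        else:
            tab[i][j] = 1 + min(tab[i - 1][j], tab[i][j - 1],
                                tab[i - 1][j - 1])

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for d in range(2, n + m + 1):     # anti-diagonal index d = i + j
            cells = [(i, d - i)
                     for i in range(max(1, d - m), min(n, d - 1) + 1)]
            list(pool.map(fill, cells))   # all cells on this diagonal
    return tab[n][m]
```

Compare this with the six-line memoised recursion: the parallel variant is longer, the indexing is fiddlier, and a single off-by-one in the diagonal bounds silently corrupts the answer – exactly the burden Bellmania aims to lift.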
The technique described in the paper, dubbed “Bellmania”, addresses that complexity by automating the process of optimising dynamic programming algorithms for parallel execution on multi-core CPUs.
The paper says that tests of the technique produce “performance … comparable to that of the best manually optimized code.”
Which is good for those who use dynamic programming, but perhaps less so for developers who currently make a crust helping them to make code efficient.
The paper is more optimistic, with the authors saying their results offer “... hope that end-users with some mathematical background will be able to use the system without the steep learning curve that is usually associated with proof assistants. This can be a valuable tool for algorithms research.” ®