Original URL: https://www.theregister.com/2007/08/15/hpc_programming_tools/

High-level tools offer key to HPC

Programming the multiverse

By Phil Manchester

Posted in Software, 15th August 2007 09:55 GMT

Last summer at the University of Southampton, Microsoft sponsored an engineering summer school for senior school students to demonstrate just how easy it is to harness the power of high performance computing (HPC). In a couple of days they created a supercomputer which they used to design and simulate an aircraft.

"They had no knowledge of computing or aeronautics. They started with a pile of boxes and some circuit boards, designed and built a 24-node multi-processor supercomputer, loaded up the software, and modelled an aircraft," says Dr Mike Newberry, product manager for HPC at Microsoft UK.

"This gives you a sense of where the future of supercomputing is going. Anyone with a bright idea and a basic knowledge of computing can create something."

Microsoft started to take HPC seriously about 18 months ago and has been working furiously to stake a claim in this fast-growing market. Newberry, for example, describes .NET as "HPC ready", and even humble old Excel is being re-engineered to exploit multicore architectures. Microsoft also began a big push on its Windows Compute Cluster Server 2003 earlier this year.

This will come as no surprise to those who follow the HPC market. IDC says the HPC server market is growing faster than any other sector, with sales exceeding $10bn in 2006 and forecast to hit $14bn by 2010.

Two good reasons for the fast growth are the falling cost of HPC hardware and, perhaps more important, improvements in the high-level software needed to exploit the power of multi-processor architectures.

As a result, Newberry predicts an expansion in HPC use beyond its historical roots in areas such as weather forecasting and advanced science. "I think we will see a great leap forward and HPC will become a part of everyday computing. Areas such as medicine and risk calculation in financial services are exploring the potential."

In addition to historically high hardware costs, HPC has also suffered from a lack of software tools for developing applications that gain real benefit from multi-processor architectures and parallel processing. New programming languages - or variations on existing ones - have emerged in the last couple of years, and the drive to push computing power beyond single-processor architectures has stimulated the current efforts to re-tool.

"In the past people did not bother to parallelise applications because advances in microprocessor power have followed Moore's Law. But those days are over and now developers have to think about how they build applications to get the best from multi-core and multiple processor systems," says Jean-Marc Denis, business manager of Bull's HPC unit.

He goes on to say that the key is to build tools which can insulate application developers from the complexities of multi-processor architectures: "We need tools - not at the developer level but at the level of the cluster with a good knowledge of the cluster architecture. The resource manager needs to know details such as how many processors and what memory requirements there are."
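As a toy illustration of that cluster-level view - the Job and Node shapes below are invented for this article, not any real scheduler's API - even a first-fit placer needs exactly the details Denis lists: declared processor and memory requirements, plus knowledge of the cluster:

```python
# Toy resource manager: place a job using only its declared CPU and
# memory needs and the current state of the cluster's nodes.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpus: int
    free_mem_gb: int

@dataclass
class Job:
    name: str
    cpus: int
    mem_gb: int

def place(job, nodes):
    """First-fit placement: the developer states requirements;
    the resource manager knows the architecture."""
    for node in nodes:
        if node.free_cpus >= job.cpus and node.free_mem_gb >= job.mem_gb:
            node.free_cpus -= job.cpus
            node.free_mem_gb -= job.mem_gb
            return node.name
    return None  # no capacity: the job queues until resources free up

nodes = [Node("node01", 8, 32), Node("node02", 4, 16)]
print(place(Job("airframe-model", 6, 24), nodes))  # -> node01
```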

Graham Twaddle, CEO and chief architect at Corporate Modelling, agrees that high-level tools will provide the key to HPC. He is working with financial services sector clients to apply advanced business modelling techniques to HPC architectures.

Twaddle worked with Michael Jackson on modelling techniques in the 1990s and recognised early on the difficulties of re-casting applications for parallel architectures. "We were looking to build traditional batch applications to run under Unix on machines such as Sequent and Bull. But we found it is much harder to fragment an application to run on parallel architectures."

Twaddle's approach is to adapt traditional workflow modelling techniques to help build applications for a parallel processing environment. "We thought about how we could deploy applications to a Grid and saw that we could model batch processes in the same way we would model a business process - but without any human interaction. This way we can generate application code, the workflow, and the message flow for a Grid architecture."
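A rough sketch of the shape of that idea - the step names and the queue standing in for message flow are illustrative assumptions, not Corporate Modelling's generated output - might model a batch run as an ordered workflow of steps connected by messages:

```python
# Model a batch process as a workflow: an ordered list of steps
# connected by messages, with no human interaction points.
from queue import Queue

def extract(msg):
    return {"records": list(range(5))}

def validate(msg):
    return {"records": [r for r in msg["records"] if r >= 0]}

def price(msg):
    return {"priced": [r * 1.1 for r in msg["records"]]}

# The workflow itself is pure data, as a business-process model
# would describe it - ripe for dispatch across Grid nodes.
workflow = [extract, validate, price]

def run(workflow):
    messages = Queue()
    messages.put({})  # initial trigger message
    for step in workflow:
        messages.put(step(messages.get()))  # message flow between steps
    return messages.get()

print(run(workflow))
```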

He says Microsoft's Windows Compute Cluster does not yet have all the technology he would like to see - but hints that it is in the pipeline. "We are still waiting for some workflow features to be available - but they will be here soon."

Twaddle sees the combination of low costs and high-level application generation tools pushing HPC firmly into the mainstream, and notes that the power available from multi-processor architectures verges on the unbelievable.

"A 25-node Grid gives throughput of something like 1.6 teraFLOPS. By comparison a 1982 Cray X-MP delivered a theoretical 200 megaFLOPS from each of its two processors - which makes the 25-node grid equivalent to more than 4000 X-MPs. X-MPs apparently used to cost $15 million - not including disks. Around the same time frame, a 4MHz Sinclair Z80 was capable of around 0.001 megaFLOPS, so we could alternatively think in terms of 1.6 billion Sinclair ZX81s." ®