Massed x86 ranks 'blowing away' supercomputer monoliths

Dell pitches modular parallel processors

Dell has claimed it is simplifying supercomputing by replacing complex, monolithic, proprietary Cray-like machines with modular ones built from racks of industry-standard components.

In fact, in one way it's complicating supercomputing, because writing parallelised code is so hard. But the massed ranks of x86 processors are blowing the Crays away, and cloud-based HPC supply is on the horizon.

This message came across at a Dell-hosted supercomputer conference in London with UK and mainland Europe academic supercomputer users. Obviously the pitch was that simplified supercomputing with Dell provides better performance and better value. Presenters gave snapshots of their supercomputing experience, covering the search for planets in space as well as the analysis of health statistics against genome data for inherited disease tendencies.

"Most issues in science today are computational in nature," was the claim made by Josh Claman, Dell's EMEA public sector business head. Many scientific problems, if not most, need modelling and analysis carried out on a computer to check the theory. Supercomputing, or high-performance computing (HPC) is becoming a broad-based activity as a result. If it can be lowered in cost and made more available, then it will help science move forward.

The academic presenters, all involved with Dell-based HPC datacentres, agreed with that sentiment, being compute-hungry service providers with budget problems.

There was much comparison of then and now to show how aggregate performance has rocketed in a kind of accelerated Moore's Law way. We heard of a leading 235-gigaflop supercomputer in 1998 contrasted with a 10-petaflop one being built now in Japan*, a roughly 42,500-fold increase. This, Claman claimed, was half the compute power of the human brain.

We are now in the fourth phase of supercomputer design, with dense compute power in many, many clustered nodes built from commodity hardware components. A typical supercomputer today in European academia is a cluster built from racks of 30 1U multi-core Intel servers connected by InfiniBand or 10GbE, running Linux with a file system such as Lustre, and using 200TB or more of SATA disk storage. Data is striped across the drives, using lots of spindles at once to get the bandwidth needed.
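For the curious, the spindle-count trick looks something like this in miniature: a toy round-robin striper that splits a file across several disks so that reads and writes hit all of them at once. The mount points, stripe size and file layout here are invented for the example; Lustre does this transparently, and far more cleverly, underneath.

```python
# Toy sketch of striping: chunk i of the file lands on disk i % N, so a
# large sequential read or write is spread across every spindle at once.
# Assumes the (made-up) mount points below exist and are writable.
import os

STRIPE_SIZE = 1 << 20  # 1 MiB stripe, a common Lustre default
DISKS = ["/mnt/disk0", "/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]

def stripe_file(src_path: str) -> None:
    """Split src_path into STRIPE_SIZE chunks, chunk i going to disk i % N."""
    with open(src_path, "rb") as src:
        index = 0
        while chunk := src.read(STRIPE_SIZE):
            disk = DISKS[index % len(DISKS)]
            with open(os.path.join(disk, f"chunk_{index:06d}"), "wb") as out:
                out.write(chunk)  # each disk sees only 1/N of the traffic
            index += 1
```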

Claman said these enabling technologies are driving the broadening of supercomputing accessibility. Dell has recently been selling a cluster a day for an average price of £99,000 with an average performance of 1.4Tflops.

Where does it start?

A supercomputer starts when a multi-core scientific workstation is not enough. GPUs (graphics processing units) can be good for HPC because they are built to run many, many operations in parallel.

It means there are two types of supercomputer: the single box containing lots and lots of CPU and/or graphics processing cores, and the clustered multi-node setup, with each node having SMP (symmetric multi-processing) processors. Some HPC applications are best suited to one architecture, some to the other.
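A rough illustration of why data-parallel hardware helps, with NumPy standing in for the GPU: the loop below works one element at a time, the way a single core does, while the vectorised call applies one operation across the whole array at once, which is the model GPUs are built around. NumPy here is just a stand-in for the idea, not a GPU.

```python
import numpy as np

N = 1_000_000
a = np.random.rand(N)
b = np.random.rand(N)

# Serial style: one multiplication at a time, one core doing the work.
out_serial = np.empty_like(a)
for i in range(N):
    out_serial[i] = a[i] * b[i]

# Data-parallel style: one operation over every element in a single call.
# On a GPU, each element would map to its own lightweight thread.
out_parallel = a * b

assert np.allclose(out_serial, out_parallel)
```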

Dell people see hybrid clusters developing, with nodes equipped with multiple GPU cores as well as SMP cores. The programming task is characterised by the need to use many, many cores in parallel. This is getting beyond the resources of research scientists whose job is research, not writing code. An IBM supercomputer could have 1,000 cores, with many applications using only a subset. The software people have to get better at writing code that uses all these cores.
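In its simplest form, the pattern those software people are chasing looks like the sketch below: farm independent units of work out to a pool of workers sized to the machine. The worker function and task list are placeholders invented for the example; real cluster codes use MPI or similar to scale across nodes, but the principle is the same.

```python
# Minimal sketch of "use all the cores": independent tasks fanned out to a
# process pool with one worker per core. simulate() is a made-up stand-in
# for one unit of scientific work.
from multiprocessing import Pool, cpu_count

def simulate(step: int) -> float:
    """Placeholder for one independent unit of computation."""
    return sum(i * i for i in range(step * 1000))

if __name__ == "__main__":
    with Pool(processes=cpu_count()) as pool:  # one worker per core
        results = pool.map(simulate, range(64))
    print(f"{len(results)} tasks run across {cpu_count()} cores")
```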

One user said his lab replaced a five-year-old, €1m Cray with a four-socket Dell machine costing €60,000 and didn't tell its users. They asked him what had happened to the computer, as their jobs were running faster. He said the black art has been taken out of running these systems, and the lifecycle costs of power, cooling and so forth have been radically reduced.

Paul Calleja, the director of Cambridge University's HPC lab, said he runs his supercomputer facility as a chargeable service to its users, based on costed core hours. "All public sector managers know the dark days are coming, ones with zero-growth budgets." He and his colleagues have to produce large efficiency gains and invest the savings in new resources. There will be no other sources of funds to buy new kit.

He bought a Dell HPC box in 2006 on a value-for-money basis. It has 2,300 cores in 600 Dell servers with an InfiniBand connection fabric. It replaced a Sun system which was ten times slower and cost three times as much to run. His Dell set-up cost £2m, weighs 20 tonnes, needs 1,000 amps of power and delivers 20Tflops. At one time it was the fastest academic machine in the UK.
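To see what costed core hours might look like, here's a back-of-envelope sketch using the article's figures for core count and purchase price; the opex, lifetime and utilisation numbers are assumptions invented for the example, not Cambridge's actual accounting.

```python
# Rough core-hour pricing: amortise the capital cost, add running costs,
# divide by the core-hours actually sold. Figures below the first two lines
# are assumed for illustration only.
CORES = 2_300              # from the article
CAPEX = 2_000_000          # £2m purchase price, from the article
LIFETIME_YEARS = 5         # assumed machine life
ANNUAL_OPEX = 300_000      # assumed power, cooling and staff costs
UTILISATION = 0.8          # assumed fraction of core-hours actually sold

yearly_cost = CAPEX / LIFETIME_YEARS + ANNUAL_OPEX
sold_core_hours = CORES * 365 * 24 * UTILISATION
print(f"£{yearly_cost / sold_core_hours:.3f} per core-hour")
```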

Racks are laid out in a hot aisle/cold aisle arrangement.
