
Massed x86 ranks 'blowing away' supercomputer monoliths

Dell pitches modular parallel processors

Supercomputing weather forecast: it's going to become cloudy

Calleja is also thinking of cloud computing. He makes a clear distinction between the cloud, with computing delivered as a service, and grid computing, with applications split across computing grids spanning geographically separate clusters. He's not keen on the grid approach because of the need for massive data set transfers, among other things.

There is an 8,000-core Dell HPC system in Holland which is idle at night, and he could, in theory, rent some of that capacity and supply it to his users. They already use what is, in effect, a cloud HPC service from his data centre in Cambridge, with datasets stored in the cloud. Switching to, or adding, another source of HPC cores, accessed over links from outside the firewall, would essentially make no difference to them.

The only change they would notice would be that their research budgets go further, since the core hours they buy would be cheaper. This assumes that needed data sets could be sent to the Dutch HPC centre somehow.
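To see why moving data sets between sites is the sticking point, a back-of-envelope calculation helps. The sketch below is illustrative only; the dataset size, link speeds and efficiency figure are assumptions for the sake of the arithmetic, not numbers from Calleja, Dell or the Dutch centre.

```python
# Rough transfer-time arithmetic for shipping a research dataset to a
# remote HPC centre. All figures here are illustrative assumptions.

def transfer_hours(dataset_gb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Approximate hours to move a dataset over a network link.

    dataset_gb  -- dataset size in gigabytes
    link_gbps   -- nominal link speed in gigabits per second
    efficiency  -- fraction of nominal bandwidth actually achieved
    """
    dataset_gigabits = dataset_gb * 8          # bytes to bits
    effective_gbps = link_gbps * efficiency    # usable throughput
    return dataset_gigabits / effective_gbps / 3600

# Example: a hypothetical 10 TB dataset over 1 Gbit/s and 10 Gbit/s links.
for gbps in (1, 10):
    print(f"{gbps} Gbit/s link: {transfer_hours(10_000, gbps):.1f} hours")
```

On those assumed figures a 10 TB dataset takes well over a day on a 1 Gbit/s link but a few hours at 10 Gbit/s, which is why the commercial fibre feeds mentioned below matter.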

Calleja is also thinking of offering HPC services to users outside Cambridge University, both to other academic institutions and to small and medium businesses needing an HPC resource for financial modelling, risk modelling, automotive and pharmaceutical applications. He is looking at putting commercial multi-gigabit fibre feeds in place, outside the academic networks, to support this.

If he can sell core hours to more clients, his largely fixed running costs are spread across more usage, and his core/hour prices go down. A couple of other universities are already looking into the idea of using Cambridge HPC resources in this way. Calleja also gets three to four enquiries a month from SMEs about his data centre's HPC facilities.
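The pricing argument is plain amortisation: the more of the machine's core hours that are sold, the less each one has to cost to break even. The sketch below shows that arithmetic with made-up figures; the annual cost, core count and utilisation rates are placeholders, not Cambridge's actual numbers.

```python
# Illustrative core-hour economics. The cost, core count and utilisation
# figures are hypothetical; the article gives no actual numbers.

def price_per_core_hour(fixed_annual_cost: float,
                        cores: int,
                        utilisation: float,
                        margin: float = 0.0) -> float:
    """Price per core hour needed to cover fixed running costs.

    fixed_annual_cost -- yearly cost of power, staff, depreciation, etc.
    cores             -- number of cores in the system
    utilisation       -- fraction of available core hours actually sold
    margin            -- fractional profit margin added on top of cost
    """
    sold_core_hours = cores * 24 * 365 * utilisation
    return fixed_annual_cost * (1 + margin) / sold_core_hours

# Selling more of the same capacity pushes the break-even price down.
for util in (0.4, 0.6, 0.8):
    price = price_per_core_hour(2_000_000, 8000, util)
    print(f"utilisation {util:.0%}: £{price:.3f} per core hour")
```

On those assumed figures, lifting utilisation from 40 to 80 per cent halves the break-even price per core hour, which is the whole commercial case for finding more clients.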

He is not alone here. The academic JANET network is looking into a shared service model of operation.

If Calleja made a profit from supplying cloud HPC services, he could afford more kit. He reckons there is a sweet spot between university HPC data centres and larger regional HPC sites, and that his Cambridge data centre could grow to fill it.

The logic here is to build ever larger supercomputers with more and more powerful cores, perhaps backed up with GPUs. These would be operated at a high utilisation rate by delivering highly efficient parallelised code resources to users, who are billed by the core hours they use. With data sets kept inside the HPC lab, this is, ironically, another example of a re-invented mainframe approach: an HPC glass house. ®

* A gigaflop is one thousand million floating point operations a second. A petaflop is one million billion such operations a second.
