Calleja is also thinking about cloud computing. He draws a clear distinction between the cloud, where computing is delivered as a service, and grid computing, where applications are split across computing grids, spanning geo-clusters for example. He's not keen on the grid approach because of, among other things, the need for massive data set transfers.

There is an 8,000-core Dell HPC system in Holland which sits idle at night, and he could, in theory, rent some of that capacity and supply it to his users. They already use what is, in effect, a cloud HPC service from his data centre in Cambridge, with data sets stored in the cloud. Switching to or adding another source of HPC cores, accessed over links from outside the firewall, would make essentially no difference to them.

The only change they would notice would be that their research budgets go further, since the core hours they buy would be cheaper. This assumes that the necessary data sets could somehow be shipped to the Dutch HPC centre.
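How big an assumption that is comes down to simple arithmetic: data set size divided by usable link bandwidth. The back-of-the-envelope sketch below uses an illustrative 10TB data set and link speeds that are assumptions for the example, not figures from Calleja, but it shows why shipping large data sets off-site is the sticking point.

```python
# Rough transfer-time estimate for moving an HPC data set off-site.
# The 10 TB data set size, link speeds and 80% efficiency figure are
# illustrative assumptions, not numbers quoted by Calleja.

def transfer_hours(dataset_tb: float, link_gbits: float, efficiency: float = 0.8) -> float:
    """Hours needed to move dataset_tb terabytes over a link_gbits Gbit/s link."""
    bits = dataset_tb * 1e12 * 8             # terabytes -> bits
    usable = link_gbits * 1e9 * efficiency   # usable bits per second
    return bits / usable / 3600              # seconds -> hours

print(f"{transfer_hours(10, 1):.1f} hours")   # ~27.8 hours for 10 TB at 1 Gbit/s
print(f"{transfer_hours(10, 10):.1f} hours")  # ~2.8 hours at 10 Gbit/s
```

On those assumptions a 10TB data set takes over a day to move at 1Gbit/s, but under three hours at 10Gbit/s, which is why the multi-gigabit fibre feeds mentioned below matter.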

Calleja is also thinking of offering HPC services to users outside Cambridge University, both to other academic institutions and to small and medium businesses needing an HPC resource for financial modelling, risk modelling, automotive and pharmaceutical applications. He is looking at putting commercial multi-gigabit fibre feeds in place, outside the academic networks, to support this.

If he can sell core hours to more clients, his fixed running costs are spread across more users and his price per core hour comes down. A couple of other universities are already looking into the idea of using Cambridge HPC resources in this way. Calleja also gets three to four enquiries a month from SMEs about his data centre's HPC facilities.
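The economics are straightforward fixed-cost arithmetic: the data centre's overheads are divided over however many core hours it manages to sell. A minimal sketch, with entirely invented cost figures rather than anything from the Cambridge facility, shows how the price per core hour falls as sales rise.

```python
# Illustrative core-hour pricing: fixed running costs spread over sold core hours.
# The fixed cost, variable cost and volumes below are invented for the example.

def price_per_core_hour(fixed_costs: float, variable_cost: float, core_hours_sold: float) -> float:
    """Break-even price per core hour given annual fixed costs and a per-hour variable cost."""
    return fixed_costs / core_hours_sold + variable_cost

fixed = 2_000_000      # hypothetical annual fixed costs (staff, housing, depreciation)
variable = 0.02        # hypothetical power/cooling cost per core hour

for sold in (10_000_000, 20_000_000, 40_000_000):
    print(f"{sold:>11,} core hours -> {price_per_core_hour(fixed, variable, sold):.3f} per core hour")
```

Doubling the core hours sold roughly halves the fixed-cost component of each hour, which is the incentive to pull in outside academic and SME customers.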

He is not alone here. The academic JANET network is looking into a shared service model of operation.

If Calleja made a profit from supplying cloud HPC services, he could afford more kit. He reckons there is a sweet spot between individual university HPC data centres and the larger regional HPC sites, and that his Cambridge data centre could grow to fill it.

The logic here is to build ever larger supercomputers with more and more powerful cores, perhaps backed up with GPUs. These would be operated at a high utilisation rate by delivering highly efficient parallelised code resources to users, who are billed for the core hours they use. By keeping the data sets inside the HPC lab, the model is, ironically, becoming another example of a reinvented mainframe approach: an HPC glass house. ®

* A gigaflop is one thousand million floating point operations a second. A petaflop is one million billion such operations a second.
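For scale, the footnote's two definitions differ by six orders of magnitude, as the trivial check below shows.

```python
# The footnote's definitions, expressed in powers of ten.
GIGAFLOP = 1e9    # one thousand million floating point operations per second
PETAFLOP = 1e15   # one million billion floating point operations per second

print(int(PETAFLOP / GIGAFLOP))  # 1000000 -> a petaflop is a million gigaflops
```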
