Bright Computing bursts HPC to EC2 clouds

A veritable – and virtual – cluster

SC11 If you want to do cloud bursting in an HPC environment, the last thing you want to do is try to manage the movement of running workloads from your own cluster out to a compute cloud like Amazon's EC2 by hand. Bright Computing, maker of Bright Cluster Manager, would go so far as to say that its HPC cluster control freak is the only thing that should be trusted to do such work.

Ahead of the SC11 supercomputing conference in Seattle next week, the company is previewing Bright Cluster Manager 6.0, an upcoming release that will not only be able to manage a virtual cluster running on the Amazon EC2 cloud, but will also be able to augment the performance of an internal HPC cluster with EC2 capacity when and if customers decide they need to radically speed up the running of a particular design, simulation, or calculation.

Matthijs van Leeuwen, CEO at Bright Computing, tells El Reg that companies in the life sciences that have no desire to invest in large compute clusters, but want to run their simulations faster, are the obvious first customers for bursting from a baby cluster in the office out to EC2. In some life sciences research organizations, the only local machine is a laptop or a workstation, and the only cluster they will ever use is one on a public cloud like EC2.

Engineering and design firms that typically only need compute capacity sporadically – and usually lots of it for short periods of time when they are running up against a product deadline – are also a perfect fit for cloud bursting out to the EC2 public cloud.

Whether bursting makes sense really depends on the data set and how quickly it can be moved out to the public cloud. For the Monte Carlo simulations at the heart of risk analysis for financial transactions, the data sets are small and can be uploaded to EC2 in ten minutes or so over a fast internet link. Data sets for product design can be mailed to Amazon on tape or disk, as can those of oil and gas companies, which have very large data sets but sometimes run out of computing capacity for the big jobs.
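The back-of-the-envelope arithmetic behind that ten-minute claim is simple enough to sketch. The figures below – a 6GB input deck and a 100Mbit/s office uplink at 80 per cent effective utilisation – are illustrative assumptions, not numbers from Bright Computing:

```python
# Rough transfer-time estimate for pushing a data set up to a public cloud.
# All figures here are illustrative assumptions, not vendor numbers.

def transfer_minutes(data_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Minutes to move data_gb gigabytes over a link_mbps link at the
    given effective utilisation."""
    bits = data_gb * 8 * 1000**3                    # decimal GB -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)  # bits / effective bits-per-second
    return seconds / 60

# A small Monte Carlo input deck (~6GB) on a 100Mbit/s uplink:
print(round(transfer_minutes(6, 100)))  # prints 10 (minutes)
```

A multi-terabyte seismic data set on the same link would take days, which is why the tape-and-disk option exists at all.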

But raw data movement is not the only issue. "It's not just getting the data across, but getting it across before the job needs to run that matters," says Van Leeuwen.

With Bright Cluster Manager 6.0, the head node you have on your own internal HPC cluster is rigged to manage the virtual compute nodes running out on the EC2 cloud just as if they were running on your own cluster. Out on EC2, the virtual nodes are hooked up to a cloud director node, which is a proxy for the head node and which gets a single image of the compute node software stack that is deployed to EC2 VMs as needed by the head node. Transferring this software image is not instantaneous; it can take as much as an hour. But once it is uploaded to the EC2 public cloud, it is a matter of minutes to fire up a node running the stack. Then Bright Cluster Manager just sees it as another node and uploads data for the nodes to use ahead of them being called upon to perform calculations.

The external nodes are part of an encrypted virtual private network, the same one that is used on local nodes, and they share a single DNS namespace that allows for the fastest link to be found between any two nodes – physical or virtual. The workload manager can be told to run certain jobs only on local nodes, other jobs only on cloudy nodes, or mixed jobs that can span either local or cloud nodes depending on the overall load on the hybrid cluster.
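The placement policy described above – local-only jobs, cloud-only jobs, and jobs that can go either way depending on load – can be sketched as a simple dispatch function. The tag names and load model here are illustrative assumptions, not the workload manager's actual interface:

```python
# Sketch of the job placement policy described above. Tags and the load
# comparison are illustrative assumptions, not Bright Cluster Manager's API.

def place_job(tag: str, local_load: float, cloud_load: float) -> str:
    """Decide where a job runs on the hybrid cluster.

    tag: "local" (must stay on physical nodes), "cloud" (must run on EC2
    nodes), or "any" (run wherever current load is lowest)."""
    if tag == "local":
        return "local"
    if tag == "cloud":
        return "cloud"
    # "any": pick the less loaded side of the hybrid cluster
    return "local" if local_load <= cloud_load else "cloud"

print(place_job("local", 0.9, 0.1))  # local, regardless of load
print(place_job("any", 0.9, 0.1))    # cloud - the local side is busier
```

Because all nodes sit in one VPN and one DNS namespace, the scheduler can treat the two pools as one cluster and make this decision per job.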

Bright Computing is going to ship Bright Cluster Manager 6.0 in January 2012. Van Leeuwen says that the company is working on linking into other public clouds, but declined to name names.

Pricing for the cloudy side of the software has not been set yet, but the plan is to charge a small incremental fee atop the EC2 slice price. ®
