Bright Computing bursts HPC to EC2 clouds
A veritable – and virtual – cluster
SC11 If you want to do cloud bursting in an HPC environment, the last thing you want to do is try to manage the movement of running workloads from your own cluster out to a public cloud like Amazon's EC2 by hand. Bright Computing, the maker of Bright Cluster Manager, would go so far as to say that its HPC cluster control freak is the only thing that should be trusted with such work.
Ahead of the SC11 supercomputing conference in Seattle next week, the company is previewing Bright Cluster Manager 6.0, an upcoming release that will not only be able to manage a virtual cluster running on the Amazon EC2 cloud, but will also be able to augment the performance of an internal HPC cluster with EC2 capacity when and if customers decide they need to radically speed up the running of a particular design, simulation, or calculation.
Matthijs van Leeuwen, CEO at Bright Computing, tells El Reg that companies in the life sciences that have no desire to invest in large compute clusters, but want to run their simulations faster, are the obvious first customers for bursting from a baby cluster in the office out to EC2. In some life sciences research organizations, the only local machine is a laptop or a workstation, and the only cluster they will ever use is one on a public cloud like EC2.
Engineering and design firms that typically only need compute capacity sporadically – and usually lots of it for short periods of time when they are running up against a product deadline – are also a perfect fit for cloud bursting out to the EC2 public cloud.
Whether bursting makes sense really depends on the data set and how quickly it can be moved out to the public cloud. For the Monte Carlo simulations at the heart of risk analysis for financial transactions, the data sets are small and can be uploaded to EC2 in ten minutes or so over a fast internet link. Data sets for product design can be mailed to Amazon on tape or disk, and ditto for oil and gas companies, which have very large data sets but sometimes run out of computing capacity for the big jobs.
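The arithmetic behind those upload times is simple enough to sketch. The data-set sizes and link speed below are illustrative assumptions, not figures from Bright or Amazon:

```python
# Back-of-envelope upload-time estimate (illustrative numbers, not from Bright).
def upload_minutes(dataset_gb, link_mbps):
    """Minutes to push a data set over a link, ignoring protocol overhead."""
    bits = dataset_gb * 8 * 1000**3          # decimal gigabytes to bits
    seconds = bits / (link_mbps * 1000**2)   # megabits per second to bits per second
    return seconds / 60

# A small Monte Carlo input set over a fast 100Mbit/sec office link:
print(round(upload_minutes(7.5, 100)))      # ~10 minutes
# A multi-terabyte seismic survey over the same link:
print(round(upload_minutes(5000, 100)))     # ~6667 minutes -- ship disks instead
```

At roughly four and a half days for the big set, the tape-or-disk option starts to look sensible.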
But data movement on its own is not the issue. "It's not just getting the data across, but getting it across before the job needs to run that matters," says Van Leeuwen.
With Bright Cluster Manager 6.0, the head node on your own internal HPC cluster is rigged to manage virtual compute nodes running out on the EC2 cloud just as if they were running on your own iron. Out on EC2, the virtual nodes are hooked up to a cloud director node, which acts as a proxy for the head node and holds a single image of the compute node software stack, deployed to EC2 VMs as the head node requires. Transferring that software image is not instantaneous; it can take as much as an hour. But once it is up on the EC2 public cloud, firing up a node running the stack is a matter of minutes. From then on, Bright Cluster Manager sees it as just another node and uploads data for the nodes to use before they are called upon to perform calculations.
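The pattern described above is deploy-once, boot-many: pay the slow image upload a single time, then clone nodes from the cached image quickly. A minimal sketch, assuming hypothetical names (`CloudDirector`, `burst`) rather than Bright's actual API:

```python
# Hypothetical sketch of the deploy-once, boot-many bursting pattern.
# Class and function names are illustrative, not Bright Cluster Manager's API.

class CloudDirector:
    """Stands in for the proxy node on EC2 that caches the node image."""
    def __init__(self):
        self.image_cached = None
        self.nodes = []

    def upload_image(self, image):
        # Slow step: the compute-node software stack goes up once (~an hour).
        self.image_cached = image

    def boot_node(self):
        # Fast step: clone the cached image into a fresh EC2 VM (minutes).
        assert self.image_cached is not None, "upload the node image first"
        name = "cloud-node-%02d" % len(self.nodes)
        self.nodes.append(name)
        return name

def burst(director, image, count):
    """Head-node side: cache the image if needed, then spin up `count` nodes."""
    if director.image_cached is None:
        director.upload_image(image)     # one-off cost
    return [director.boot_node() for _ in range(count)]

director = CloudDirector()
print(burst(director, "node-stack.img", 3))  # first burst pays the upload
print(burst(director, "node-stack.img", 2))  # later bursts skip straight to booting
```

The point of the design is that the hour-long upload is amortized: every burst after the first only pays the minutes-long boot cost.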
The external nodes are part of an encrypted virtual private network, the same one that is used on local nodes, and they share a single DNS namespace that allows for the fastest link to be found between any two nodes – physical or virtual. The workload manager can be told to run certain jobs only on local nodes, other jobs only on cloudy nodes, or mixed jobs that can span either local or cloud nodes depending on the overall load on the hybrid cluster.
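That three-way placement policy – local-only, cloud-only, or span both – can be sketched as a simple routing function. The tags and logic here are assumptions for illustration, not Bright's actual workload-manager syntax:

```python
# Illustrative job-placement policy for a hybrid local/EC2 cluster.
# Tag names and the spill-over rule are assumptions, not Bright's actual syntax.

def place_job(job_tag, local_free, cloud_free, needed):
    """Decide which pool(s) a job lands on, given free node counts."""
    if job_tag == "local-only":      # e.g. data that must not leave the site
        return "local" if local_free >= needed else "queue"
    if job_tag == "cloud-only":      # e.g. overflow batch work
        return "cloud" if cloud_free >= needed else "queue"
    # "anywhere": prefer local nodes, spill the remainder into EC2.
    if local_free >= needed:
        return "local"
    if local_free + cloud_free >= needed:
        return "span"                # job spans physical and virtual nodes
    return "queue"

# A 20-node job with only 8 local nodes free spills across both pools:
print(place_job("anywhere", local_free=8, cloud_free=16, needed=20))  # span
```

Under a policy like this, the cloudy nodes only come into play when the local cluster cannot satisfy the job on its own.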
Bright Computing is going to ship Bright Cluster Manager 6.0 in January 2012. Van Leeuwen says that the company is working on linking into other public clouds, but declined to name names.
Pricing for the cloudy side of the software has not been set yet, but the plan is to charge a small incremental fee atop the EC2 slice price. ®