Bright Computing bursts HPC to EC2 clouds

A veritable – and virtual – cluster

SC11 If you want to do cloud bursting in an HPC environment, the last thing you want to do is manage the movement of running workloads from your own cluster out to a public compute cloud like Amazon's EC2 by hand. Bright Computing, maker of Bright Cluster Manager, would go so far as to say that its HPC cluster control freak is the only thing that should be trusted with such work.

Ahead of the SC11 supercomputing conference in Seattle next week, the company is previewing Bright Cluster Manager 6.0, an upcoming release that will not only be able to manage a virtual cluster running on the Amazon EC2 cloud, but will also be able to augment the performance of an internal HPC cluster with EC2 capacity when and if customers decide they need to radically speed up the running of a particular design, simulation, or calculation.

Matthijs van Leeuwen, CEO at Bright Computing, tells El Reg that companies in the life sciences who have no desire to invest in large compute clusters but who want to run their simulations faster are the obvious first customers looking to be able to burst from a baby cluster they have in their office out to EC2. In some life sciences research organizations, the only local machine is a laptop or a workstation and the only cluster that will be used is one on a public cloud like EC2.

Engineering and design firms that typically only need compute capacity sporadically – and usually lots of it for short periods of time when they are running up against a product deadline – are also a perfect fit for cloud bursting out to the EC2 public cloud.

How well bursting works really depends on the data set and how quickly it can be moved out to the public cloud. For the Monte Carlo simulations at the heart of risk analysis for financial transactions, data sets are small and can be uploaded to EC2 in ten minutes or so over a fast internet link. Data sets for product design can be mailed to Amazon on tape or disk, and the same goes for oil and gas companies, which have very large data sets but sometimes run out of computing capacity for the big jobs.
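The trade-off the article describes is simple back-of-envelope arithmetic: data-set size divided by link speed. A minimal sketch, with the 5GB data set and 100Mbit/s link as assumed figures (the article only says "small" data sets and "a fast internet link"):

```python
def transfer_minutes(dataset_gb, link_mbps):
    """Estimate upload time in minutes for a data set over a given link."""
    bits = dataset_gb * 8 * 1000**3   # decimal gigabytes to bits
    seconds = bits / (link_mbps * 1e6)
    return seconds / 60

# An assumed 5GB Monte Carlo input over an assumed 100Mbit/s link:
print(round(transfer_minutes(5, 100), 1))  # 6.7
```

At those assumed figures the upload lands comfortably inside the "matter of 10 minutes" window quoted above, while a multi-terabyte seismic data set at the same link speed would take days, which is why tape or disk in the post wins for oil and gas.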

But raw data movement is not the whole issue. "It's not just getting the data across, but getting it across before the job needs to run that matters," says Van Leeuwen.

With Bright Cluster Manager 6.0, the head node you have on your own internal HPC cluster is rigged to manage the virtual compute nodes running out on the EC2 cloud just as if they were running on your own cluster. Out on EC2, the virtual nodes are hooked up to a cloud director node, which is a proxy for the head node and which gets a single image of the compute node software stack that is deployed to EC2 VMs as needed by the head node. Transferring this software image is not instantaneous; it can take as much as an hour. But once it is uploaded to the EC2 public cloud, it is a matter of minutes to fire up a node running the stack. Then Bright Cluster Manager just sees it as another node and uploads data for the nodes to use ahead of them being called upon to perform calculations.
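The provisioning sequence described above has two distinct phases with very different costs: a one-time, slow image upload to the cloud director, and then fast per-node boots from that cached image. The sketch below models that flow; the class and method names are invented for illustration and are not Bright's actual API.

```python
class CloudDirector:
    """Proxy node on EC2 that caches the compute-node software image."""
    def __init__(self):
        self.image_uploaded = False

    def upload_image(self):
        # One-time transfer of the node software stack -- the slow step,
        # which the article says can take up to an hour.
        self.image_uploaded = True

    def start_node(self, name):
        # Once the image is cached, booting a node takes only minutes.
        if not self.image_uploaded:
            raise RuntimeError("software image must be uploaded first")
        return name


class HeadNode:
    """Local head node that treats EC2 VMs as ordinary compute nodes."""
    def __init__(self, director):
        self.director = director
        self.nodes = []

    def burst(self, count):
        # Upload the image once, then fire up as many nodes as needed.
        if not self.director.image_uploaded:
            self.director.upload_image()
        for i in range(count):
            self.nodes.append(self.director.start_node(f"cnode{i:03d}"))
        return self.nodes


head = HeadNode(CloudDirector())
print(head.burst(3))  # ['cnode000', 'cnode001', 'cnode002']
```

The design point the article makes falls out of the split: paying the hour-long image transfer once, at the director, means each subsequent burst only pays the minutes-long node boot.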

The external nodes are part of an encrypted virtual private network, the same one that is used on local nodes, and they share a single DNS namespace that allows for the fastest link to be found between any two nodes – physical or virtual. The workload manager can be told to run certain jobs only on local nodes, other jobs only on cloudy nodes, or mixed jobs that can span either local or cloud nodes depending on the overall load on the hybrid cluster.
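The three job classes described above (local-only, cloud-only, or either) amount to a simple placement policy in the workload manager. A toy version, with invented names rather than the actual configuration syntax of Bright or its bundled schedulers:

```python
def place(job_class, local_free, cloud_free):
    """Return 'local' or 'cloud' for a job, or None if nothing fits."""
    if job_class == "local-only":
        return "local" if local_free else None
    if job_class == "cloud-only":
        return "cloud" if cloud_free else None
    # 'any' jobs prefer local nodes and spill out to EC2 under load
    if local_free:
        return "local"
    return "cloud" if cloud_free else None


# Local cluster full, eight cloudy nodes idle: the flexible job bursts out.
print(place("any", local_free=0, cloud_free=8))  # cloud
```

The single VPN and shared DNS namespace are what make this policy safe: a job spanning both sides addresses every node the same way, wherever it lands.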

Bright Computing is going to ship Bright Cluster Manager 6.0 in January 2012. Van Leeuwen says that the company is working on linking into other public clouds, but declined to name names.

Pricing for the cloudy side of the software has not been set yet, but the plan is to charge a small incremental fee atop the EC2 slice price. ®
