Amazon adds GPUs to EC2 HPC clouds

Where's InfiniBand networking?

SC10 VMware's ESX Server virtualization and Amazon's EC2 cloud got their starts among developers frustrated with the time and money it took to get physical infrastructure approved and set up so they could monkey around with their code. And it is a safe bet that hybrid CPU-GPU clusters are going to get some serious action now that Amazon has plugged Nvidia's GPU co-processors into its Cluster Compute instances.

Amazon Web Services, the cloud computing subsidiary of the online retailing giant, rolled up and rolled out its virtual HPC clusters back in July. These HPC instances are similar to regular extra large virtual server slices on the EC2 cloud, except Amazon guarantees that the slices are built on two-socket x64 servers using Intel's Xeon X5570 processors running at 2.93 GHz with 8 MB of on-chip cache memory. Those processors are a year and a half old, which is not a big deal because the GPUs are going to be doing a lot of the math anyway.

This Cluster Compute instance is rated at an aggregate of 33.5 EC2 compute units in the Amazon virtual server relative performance scheme and presents 23 GB of virtual memory and 1.69 TB of disk capacity to the HPC application running atop it. This is four times the extra large EC2 slice in terms of compute units, according to Amazon.

The virtual HPC slices run in 64-bit mode, which is necessary to address more than 4 GB of memory in a node. The HPC slices are unique in that programmers know the exact iron underneath the slice (not so for other slices on EC2) and in that the nodes are linked together with 10 Gigabit Ethernet.
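
For the curious, here is a rough sketch of what firing up a couple of these cluster slices looks like from the Python boto library, one common way of driving the EC2 API. The AMI, key pair, and security group names are placeholders rather than anything Amazon ships; the cg1.4xlarge type name is the new Cluster GPU instance, and the placement group is what glues the nodes onto the 10 Gigabit Ethernet fabric.

```python
# Illustrative sketch only: launch two Cluster GPU instances into a
# cluster placement group using the boto library. The AMI ID, key pair,
# and security group below are hypothetical placeholders.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Placement groups tie Cluster Compute/GPU nodes together on the
# low-latency 10 Gigabit Ethernet network.
conn.create_placement_group("hpc-test", strategy="cluster")

reservation = conn.run_instances(
    image_id="ami-xxxxxxxx",      # hypothetical HVM cluster AMI
    min_count=2,
    max_count=2,
    key_name="my-keypair",        # hypothetical key pair name
    security_groups=["hpc-sg"],   # hypothetical security group
    instance_type="cg1.4xlarge",  # Cluster GPU instance type
    placement_group="hpc-test",
)

for inst in reservation.instances:
    print("%s %s" % (inst.id, inst.state))
```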

Peter De Santis, who spoke to El Reg about the addition of GPUs to the HPC slices, said that there was "a lot of excitement about experimenting with GPUs," and that is why Amazon has very quickly added this feature to the Cluster Compute instances. Customers are concerned with the power and thermal issues of putting GPUs in their own systems as well as the high cost of buying them, and are interested in giving the idea a whirl on EC2 first before making any commitments to actually invest in GPU iron.

In most cases, adding GPUs to clusters will require new servers, since existing machines were never designed to have multiple hot components inside them. Because of the hassle, it may turn out that a lot of Amazon's virtual ceepie-geepie users start out on virtual HPC clusters with GPUs and just stay there, never investing in the iron at all and preferring to rent time on Amazon's cloud.

The Cluster Compute instances run a Linux operating system atop Amazon's homegrown variant of the Xen hypervisor for x64 servers, but other than that, according to De Santis, Amazon is not adding any special software to the stack. Amazon is also not managing the cluster images or the cluster job scheduling on behalf of customers. The idea is to provide the raw nodes and let customers deploy x64-based cluster provisioning, management, and job scheduling tools as they would internally.
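
To make that concrete, a customer's own glue might look something like the sketch below: pull the private addresses of its running cluster instances out of the EC2 API and write an MPI machine file for whatever scheduler it has deployed. This again uses the boto library, and the instance type name and slot count are assumptions rather than anything Amazon supplies.

```python
# Illustrative sketch only: build an MPI machine file from the running
# Cluster GPU instances. Amazon provides the raw nodes; this sort of
# provisioning glue is the customer's own.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Collect the private addresses of running cg1.4xlarge instances.
nodes = []
for reservation in conn.get_all_instances():
    for inst in reservation.instances:
        if inst.state == "running" and inst.instance_type == "cg1.4xlarge":
            nodes.append(inst.private_ip_address)

# Eight slots per node matches the two quad-core Xeons in each box;
# how many to advertise is up to the customer's own job scheduler.
with open("machinefile", "w") as f:
    for ip in nodes:
        f.write("%s slots=8\n" % ip)
```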

De Santis would not divulge how large the virtual HPC cluster is that the company has set up in its data center in Northern Virginia, but he says that, to date, customers are running virtual clusters with dozens to hundreds of nodes. (These are without GPUs.) NASA's Jet Propulsion Laboratory is experimenting with the Cluster Compute instances to process the images it collects from its exploration robots, just to name one customer.