Amazon adds GPUs to EC2 HPC clouds
Where's InfiniBand networking?
SC10 VMware's ESX Server virtualization and Amazon's EC2 cloud got their starts among developers frustrated with the time and money it took to get physical infrastructure approved and set up so they could monkey around with their code. And it is a safe bet that hybrid CPU-GPU clusters are going to get some serious action now that Amazon has plugged Nvidia's GPU co-processors into its Cluster Compute instances.
Amazon Web Services, the cloud computing subsidiary of the online retailing giant, rolled up and rolled out its virtual HPC clusters back in July. These HPC instances are similar to regular extra large virtual server slices on the EC2 cloud, except Amazon guarantees that the slices are built using a two-socket x64 server with Intel's Xeon X5570s running at 2.93 GHz and 8 MB of on-chip cache memory. Those processors are a year and a half old, which is not a big deal because the GPUs are going to be doing a lot of the math anyway.
This Cluster Compute instance is rated at an aggregate of 33.5 EC2 compute units in the Amazon virtual server relative performance scheme and presents 23 GB of virtual memory and 1.69 TB of disk capacity to the HPC application running atop it. This is four times the extra large EC2 slice in terms of compute units, according to Amazon.
The virtual HPC slices run in 64-bit mode, which is necessary to address more than 4 GB of memory in a node. The HPC slices are unique in that programmers know the exact iron underneath the slice (not so for other slices on EC2) and that they are also linked together with 10 Gigabit Ethernet links.
Peter De Santis, who spoke to El Reg about the addition of GPUs to the HPC slices, said that there was "a lot of excitement about experimenting with GPUs," and that is why Amazon has very quickly added this feature to the Cluster Compute instances. Customers are concerned with the power and thermal issues of putting GPUs in their own systems as well as the high cost of buying them, and are interested in giving the idea a whirl on EC2 first before making any commitments to actually invest in GPU iron.
In most cases, adding GPUs to clusters will require new servers, since existing machines were never designed to host multiple hot components. Because of the hassle, it may turn out that a lot of Amazon's virtual ceepie-geepie users start out on virtual HPC clusters with GPUs and just stay there, never investing in the iron at all and preferring to rent time on Amazon's cloud.
The Cluster Compute instances run a Linux operating system atop Amazon's homegrown variant of the Xen hypervisor for x64 servers; but other than that, according to De Santis, Amazon is not adding any special software to the stack. Amazon is also not managing the cluster images or the cluster job scheduling on behalf of customers. The idea is to provide the raw nodes and let customers deploy x64-based cluster provisioning, management, and job scheduling tools as they would internally.
De Santis would not divulge how large the virtual HPC cluster is that the company has set up in its data center in Northern Virginia, but says to date customers are running virtual clusters with dozens to hundreds of nodes. (These are without GPUs.) NASA's Jet Propulsion Laboratory is experimenting with the Cluster Compute instances to do image processing for the images it collects from its exploration robots, just to name one customer.
On the ceepie-geepie front, Amazon is starting out with the obvious choice, plunking two of Nvidia's M2050 single-wide fanless GPU co-processors into each virtual server node (which is a complete physical node, as it turns out). The M2050s are rated at 1.03 teraflops each doing single-precision floating point calculations and 515 gigaflops doing double-precision math. So the GPU Cluster instance gives you 1.03 teraflops of raw double-precision oomph.
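The aggregate numbers above fall out of simple arithmetic on Nvidia's per-card ratings; a back-of-the-envelope sketch:

```python
# Aggregate GPU throughput for one Cluster GPU instance
# holding two Nvidia M2050 co-processors.
sp_per_card_tflops = 1.03   # single-precision rating per M2050
dp_per_card_gflops = 515    # double-precision rating per M2050
cards = 2                   # M2050s per instance

sp_total_tflops = cards * sp_per_card_tflops  # 2.06 teraflops SP
dp_total_gflops = cards * dp_per_card_gflops  # 1,030 gigaflops = 1.03 teraflops DP

print(sp_total_tflops, dp_total_gflops)
```

Note the 2:1 ratio between single- and double-precision rates, which is characteristic of Nvidia's Fermi-generation Tesla parts.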
An early tester of the GPU Cluster instances is Mental Images, a company that creates rendering software and 3D Web services. It has tested the scalability of the Amazon virtual ceepie-geepie up to 128 nodes, and its tests show that at that scale a virtual cluster delivers about 90 per cent of the performance of an in-house cluster. (That's a pretty small virtualization overhead, all things considered.)
De Santis would not say when Amazon might make AMD's FireStream fanless GPU co-processors available in its GPU cluster clouds, but the single-wide FireStream 9350 is a relatively inexpensive device at $799 for a GPU that does 2 teraflops SP and 400 gigaflops DP, so some customers might ask for it. Then again, with the FireStream GPU co-processors not supporting ECC scrubbing on their integrated GDDR5 memory, maybe no one will ask for them until there is ECC on them.
De Santis said that Amazon will "listen to customers" when making its technology choices, and that includes field programmable gate arrays (FPGAs), which are just getting interesting and somewhat affordable in the HPC space. If customers find they need more bandwidth between the nodes in the clusters, you can bet Amazon will upgrade to InfiniBand - probably long before FireStream or FPGA accelerators make it into the virtual HPC server slices.
Here's the funny bit. In an on-demand scheme, a regular generic quadruple extra large instance with fat memory that most closely resembles the raw Cluster Compute slice costs $2.10 per hour, while the Cluster Compute instance costs only $1.60 per hour. (HPC customers are going to lock up lots of nodes to run their jobs, and hence the volume discount is sort of built in.) Adding two Nvidia M2050 GPUs to the Cluster Compute instance boosts the price to $2.40 per hour. So basically, if you are an HPC customer, Amazon is guaranteeing the iron configuration and tossing in a teraflop of GPU power for 30 cents an hour.
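The price gaps work out like so (a sketch using the on-demand hourly figures quoted above):

```python
# On-demand hourly prices from the article, in dollars.
quad_xl = 2.10      # generic quadruple extra large, fat memory
cluster = 1.60      # Cluster Compute instance (guaranteed iron)
gpu_cluster = 2.40  # Cluster Compute plus two M2050 GPUs

# What the GPUs add over the plain Cluster Compute slice.
gpu_premium = gpu_cluster - cluster        # 80 cents/hour for two GPUs

# What the whole ceepie-geepie costs over the generic slice --
# the "30 cents an hour" figure in the article.
premium_over_generic = gpu_cluster - quad_xl
```

So the GPU pair itself is priced at 80 cents an hour, but measured against the generic quadruple extra large slice the whole package is only a 30-cent premium.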
That is a deal that lots and lots of people won't be able to resist - which is the whole point. ®