
Amazon adds GPUs to EC2 HPC clouds

Where's InfiniBand networking?


On the ceepie-geepie front, Amazon is starting out with the obvious choice, plunking two of Nvidia's M2050 single-wide fanless GPU co-processors into each virtual server node (which is a complete physical node, as it turns out). The M2050s are rated at 1.03 teraflops each doing single-precision floating point calculations and 515 gigaflops doing double-precision math. So the GPU Cluster instance gives you 1.03 teraflops of raw DP oomph.
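
To spell out that arithmetic, here is a minimal Python sketch that totals the nominal peak ratings quoted above for a two-GPU instance. These are Nvidia's rated peaks, not measured throughput.

```python
# Nominal peak ratings for one Nvidia Tesla M2050, as quoted above
# (rated peaks, not measured throughput).
M2050_SP_TFLOPS = 1.03   # single precision, teraflops
M2050_DP_TFLOPS = 0.515  # double precision, teraflops

GPUS_PER_INSTANCE = 2    # two M2050s per GPU Cluster instance

def instance_peak(gpus: int = GPUS_PER_INSTANCE) -> dict:
    """Aggregate nominal peak GPU flops for one instance."""
    return {"sp_tflops": gpus * M2050_SP_TFLOPS,
            "dp_tflops": gpus * M2050_DP_TFLOPS}

peak = instance_peak()
print(f"Peak SP: {peak['sp_tflops']:.2f} TF, peak DP: {peak['dp_tflops']:.2f} TF")
# Peak SP: 2.06 TF, peak DP: 1.03 TF
```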

An early tester of the GPU Cluster instance is Mental Images, a company that creates rendering software and 3D Web services. It has tested the scalability of the Amazon virtual ceepie-geepie up to 128 nodes, and its tests show that at that scale a virtual cluster delivers about 90 percent of the scalability of an in-house cluster. (That's a pretty small virtualization overhead, all things considered.)
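
As a rough illustration of what "about 90 percent" means here, the sketch below computes relative scaling efficiency - the virtual cluster's speedup divided by the in-house cluster's speedup at the same node count. The runtimes are made up purely for illustration; only the 128-node count and the 90 percent figure come from the test results reported above.

```python
def speedup(t_one_node: float, t_n_nodes: float) -> float:
    """Classic speedup: one-node runtime over n-node runtime."""
    return t_one_node / t_n_nodes

def relative_efficiency(t1_cloud: float, tn_cloud: float,
                        t1_onprem: float, tn_onprem: float) -> float:
    """Virtual cluster's speedup as a fraction of the in-house cluster's speedup."""
    return speedup(t1_cloud, tn_cloud) / speedup(t1_onprem, tn_onprem)

# Hypothetical 128-node render timings, in hours; chosen so the ratio lands
# where the Mental Images tests reportedly did.
print(f"{relative_efficiency(128.0, 1.11, 128.0, 1.0):.0%}")  # ~90%
```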

De Santis would not say when Amazon might make the FireStream fanless GPU co-processors available in its GPU cluster clouds, but AMD's single-wide FireStream 9350 is a relatively inexpensive device at $799 for a GPU that does 2 teraflops SP and 400 gigaflops DP, so some customers might ask for it. Then again, with the FireStream GPU co-processors not supporting ECC scrubbing on their integrated GDDR5 memory, maybe no one will ask for them until there is ECC on them.
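
For a rough sense of why some customers might ask for it anyway, here is a back-of-the-envelope dollars-per-DP-gigaflop comparison. The FireStream 9350 figures come from the paragraph above; the M2050 street price (around $2,500 at the time) is an assumed value for comparison only, not something Amazon or AMD quoted here.

```python
# Back-of-the-envelope price/performance in dollars per double-precision gigaflop.
# FireStream 9350 figures are from the article; the M2050 street price (~$2,500
# at the time) is an assumed value for comparison only.
cards = {
    "AMD FireStream 9350": {"price_usd": 799,  "dp_gflops": 400},
    "Nvidia Tesla M2050":  {"price_usd": 2500, "dp_gflops": 515},  # price assumed
}

for name, card in cards.items():
    print(f"{name}: ${card['price_usd'] / card['dp_gflops']:.2f} per DP gigaflop")
# AMD FireStream 9350: $2.00 per DP gigaflop
# Nvidia Tesla M2050: $4.85 per DP gigaflop
```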

De Santis said that Amazon will "listen to customers" when making its technology choices, and that includes field programmable gate arrays (FPGAs), which are just getting interesting and somewhat affordable in the HPC space. If customers find they need more bandwidth between the nodes in the clusters, you can bet Amazon will upgrade to InfiniBand - probably long before FireStream or FPGA accelerators make it into the virtual HPC server slices.

Here's the funny bit. In an on-demand scheme, a regular generic quadruple extra large instance with fat memory that most closely resembles the raw Cluster Compute slice costs $2 per hour, while the Cluster Compute instance costs only $1.60 per hour. (HPC customers are going to lock up lots of nodes to run their jobs, and hence the volume discount is sort of built in.) Adding two Nvidia M2050 GPUs to the Cluster Compute instance boosts the price to $2.10 per hour. So basically, if you are an HPC customer, Amazon is guaranteeing the iron configuration and tossing in a teraflops of GPU power for 50 cents an hour.
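
For those keeping score, here is a minimal sketch of that arithmetic, using the on-demand prices above and the instance's nominal DP peak.

```python
# On-demand hourly prices discussed above (late 2010).
PRICE_CLUSTER_CPU = 1.60   # Cluster Compute instance
PRICE_CLUSTER_GPU = 2.10   # Cluster Compute instance plus two M2050s

GPU_DP_TFLOPS = 1.03       # two M2050s at 515 DP gigaflops each (nominal peak)

gpu_premium = PRICE_CLUSTER_GPU - PRICE_CLUSTER_CPU
print(f"GPU premium: ${gpu_premium:.2f} per hour")             # $0.50 per hour
print(f"Incremental cost: ${gpu_premium / GPU_DP_TFLOPS:.2f} per DP teraflop-hour")
# Incremental cost: $0.49 per DP teraflop-hour
```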

That is a deal that lots and lots of people won't be able to resist - which is the whole point. ®
