Zunicore adds GPUs to clouds
Cloudy child follows hosting parent into HPC
Zunicore, the cloudy infrastructure division of Peer 1 Hosting, is going ceepie-geepie hybrid and making its cloud suitable for parallel supercomputing workloads that are goosed by GPU coprocessors.
The GPU-assisted cloud capacity is in beta testing now and will be opened up to commercial customers in July, a spokesperson at Zunicore tells El Reg.
Zunicore already peddles cloud capacity based on x86 processors via an easy-to-use self-service portal that lets customers buy virtualized CPU, memory, disk, and network capacity on an hourly basis and pool it together. Zunicore does not offer preconfigured, static virtual server images, as Amazon does with its EC2 cloud, but lets you set the virtual capacity you need. The Zunicore cloud fabric also has autoscaling to dial capacity up and down as the workload demands, which the normal virtual server hosting available through its Peer 1 parent does not.
The Zunicore cloud was launched last November by Peer 1, which has 18 data centers in Canada (it is headquartered in Vancouver, British Columbia and its stock trades on the Toronto Stock Exchange), the US, and the UK. Since its launch last fall, Zunicore has added 3,700 customers to its infrastructure cloud (those numbers are through the end of March).
Like other cloud providers, Zunicore has built its fluffy infrastructure on two-socket Xeon servers, and the base hardware underlying virtual machines comes with 32GB of main memory, 400GB of local disk capacity, a Gigabit Ethernet private and public network, and a 10 Gigabit Ethernet option if you want to create a virtual cluster in the pool of Zunicore machines.
Now, if you want to put an Nvidia Tesla M2050 GPU into the server, and you are willing to wait 15 minutes for it to be configured, Zunicore can offer you a ceepie-geepie setup. You can be billed hourly or monthly for the GPU capacity, just like with other resources on the Zunicore cloud. Zunicore will be adding support for the faster M2090 GPU coprocessors at some future date.
In terms of operating systems, Zunicore's cloud supports Red Hat's Fedora and Enterprise Linux, CentOS (the clone of RHEL), raw Debian Linux and its commercialized cousin, Canonical's Ubuntu Linux, plus Gentoo Linux. You can also use FreeBSD Unix and Microsoft's Windows Server 2003 and 2008 in their many edition permutations.
At the moment, Zunicore has 50 servers pre-configured with Tesla GPUs and can scale that up to 200 nodes. The M2050 has 515 gigaflops of raw double-precision floating point math oomph. With one card per server, you're talking just under 26 teraflops of oomph on 50 nodes, and with all 200 nodes, that is 103 teraflops.
If Zunicore could get two GPUs into a server (which is not necessarily a foregone conclusion), you could possibly double that up to 206 teraflops. None of that math counts the x86 processing, which adds a bit more on top of this.
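The back-of-the-envelope flops math above can be sketched in a few lines of Python (the 515 gigaflops per M2050 figure and the node counts come from the article; the two-GPU-per-node case is, as noted, hypothetical):

```python
# Aggregate double-precision GPU throughput for a virtual cluster,
# using the article's figure of 515 gigaflops per Tesla M2050.
M2050_GFLOPS = 515

def cluster_teraflops(nodes, gpus_per_node=1):
    """GPU-only double-precision teraflops; x86 CPUs add a bit on top."""
    return nodes * gpus_per_node * M2050_GFLOPS / 1000

print(cluster_teraflops(50))       # 25.75 -> "just under 26 teraflops"
print(cluster_teraflops(200))      # 103.0
print(cluster_teraflops(200, 2))   # 206.0 -> hypothetical two-GPU nodes
```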
Zunicore is also allowing you to configure GPUs on barebones servers that have not been virtualized if you want, and of course you can load the cluster manager and workload manager of your choice on the pool of hybrid CPU-GPU machines.
Pricing on the Zunicore GPU options has not been announced yet. You can set the number of cores, the amount of memory, and the amount of storage capacity on the SAN to whatever you want, but just for the sake of gauging the price, a Zunicore virtual machine with one core, 1GB of memory, and 50GB of SAN storage costs $45 per month or 6 cents per hour.
A large configuration with eight cores, 15.5GB of memory, and 620GB of SAN storage costs $311 per month or 43 cents per hour. You don't pay for inbound network bandwidth, but you do have to pay an additional 12 cents per gigabyte for data leaving the Peer 1 network from "your" machines.
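For what it's worth, the hourly and monthly rates quoted above are roughly consistent with each other, assuming a 30-day (720-hour) month; a quick sketch (the 720-hour month is our assumption, not Zunicore's stated billing period):

```python
# Compare Zunicore's quoted hourly rates against the flat monthly rates,
# assuming a 30-day month. Egress is billed separately at $0.12/GB.
HOURS_PER_MONTH = 720  # assumption: 30 days x 24 hours

def hourly_cost_per_month(rate_per_hour, hours=HOURS_PER_MONTH):
    return rate_per_hour * hours

# Small VM (1 core, 1GB RAM, 50GB SAN): ~$43.20 vs $45/month flat
print(hourly_cost_per_month(0.06))
# Large VM (8 cores, 15.5GB RAM, 620GB SAN): ~$309.60 vs $311/month flat
print(hourly_cost_per_month(0.43))
```

In other words, running an instance flat-out all month on the hourly plan works out slightly cheaper than the monthly rate; the monthly plan only pays off as a convenience.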
Zunicore's parent, Peer 1, actually beat Amazon into the CPU-GPU hosting space, launching its GPU Cloud a few weeks after Amazon launched its HPC cloud instances but many months before Amazon added GPUs to them. Peer 1 slapped some S1070 and M2050 GPUs in its Toronto and London data centers. The machines with M2050s ran $1,000 per GPU per month.
Earlier this month, SoftLayer, another cloud provider based in Dallas, Texas, added a Tesla M2090 (rated at 665 gigaflops) to its cloudy infrastructure for making virtual HPC clusters. A dedicated server with a single Xeon E5-2620 processor and 16GB of memory costs $500 per month at SoftLayer, and adding the Tesla M2090 and a 500GB SATA disk boosts the price to $879 per month. That's more or less on par with what Peer 1 is charging for hosted CPU-GPU capacity.
It will be interesting to see what the hourly price will be on a Zunicore two-socket server with two GPUs when it becomes available in July. ®