Google downshifts App Engine to infrastructure cloud

Half the cost of the competition


Google I/O Microsoft just downshifted its Azure platform cloud so it could support raw virtual machines and any old applications companies want to cram into them, and now Google has followed suit with Compute Engine.

Announced today at the Google I/O extravaganza in San Francisco by Urs Hölzle, senior vice president of infrastructure at the Chocolate Factory, Compute Engine gives Google customers what they have been asking for: Virtual machines that run on Google's vast infrastructure. And it gives Google something it wants: yet another product that can generate revenue and profits from its many large data centers scattered around the globe.

Google Compute Engine

To illustrate the power of Compute Engine, Hölzle talked about the Cancer Regulome Explorer application created by the Institute for Systems Biology, which ran its cancer-research genome-matching algorithms on an internal cluster with 1,000 cores. On that machine, the app took 10 minutes to find a match on a particular segment of the chromosomes between two samples.

After a few days of work, the Cancer Regulome Explorer application was ported to a mix of App Engine and Compute Engine with 2,000 cores dedicated to the job, and with that doubled capacity it was making connections roughly every two seconds.

"Anyone with large-scale computing needs can now access the same infrastructure with Compute Engine virtual machines," said Hölzle. "And this infrastructure comes with a scale, and a performance, and a value that is unparalleled in the industry since you can benefit from the efficiency of Google's data centers and our experience using them."

While Hölzle was talking, the live demo of the genome app was quietly scaling out, eventually running on over 770,000 cores in one of Google's data centers, with matches popping up faster than the eye could follow.

"That is cool, and that is how infrastructure as a service is supposed to work," Hölzle boasted.

Without naming any names, Hölzle also bragged that Google could provide raw virtual machines – and the raw compute, storage, and networking capacity underneath them – at a much better price than its rivals: up to 50 per cent more compute for the money than other infrastructure cloud providers.

"So you don't have to choose between getting the best performance and getting the best price," explained Hölzle. "We worked very hard for the past decade to lower the cost of computing, and we are passing these savings on to you."

Compute Engine is in limited preview right now, and Google is taking sign-ups on its site. Google is suggesting that the initial uses of the infrastructure cloud are for batch jobs like video transcoding or rendering, big data jobs like letting Hadoop chew on unstructured data, or traditional and massively parallel HPC workloads in scientific and research fields.

At the moment, Compute Engine fires up a pool of virtual Linux instances, in this case either Ubuntu 12.04 from Canonical or the RHEL-ish CentOS 6.2, but it is not clear what virtual machine container Google is using. Presumably, it is the same VM that Google uses internally for its own code. Google did not say when or if it would support other Linuxes or Microsoft's Windows.

You store data on Google's Cloud Storage, and you can use ephemeral storage for the VMs as they run as well as persistent storage to hold data sets. You can make the persistent storage read only as well, which means that it can't be messed with and that it can be shared by multiple Linux instances on the Google infrastructure.

Compute Engine has a service level agreement for enterprise customers, assuring a certain uptime, but exactly what that level of uptime is does not appear in the developer's guide. Google warns that it may take the service down for periodic maintenance during the limited preview, and also that it is only supporting Compute Engine in the US, not in its data centers in Europe or Asia/Pacific, during the preview. Google has an open and RESTful API stack for Compute Engine and is working with Puppet Labs, Opscode, and RightScale to integrate their cloudy management tools with Google's infrastructure.
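For a rough feel of what driving a RESTful instance API looks like, here is a minimal sketch that assembles an instance-creation request. The endpoint path and JSON field names are illustrative assumptions, not taken from Google's documentation, which was not public beyond the limited preview at the time.

```python
import json

API_ROOT = "https://www.googleapis.com/compute"  # assumed base URL


def build_instance_request(project, name, machine_type, image):
    """Assemble the URL and JSON body for a hypothetical
    'insert instance' REST call; an HTTP client would POST this."""
    url = f"{API_ROOT}/projects/{project}/instances"
    body = {
        "name": name,
        "machineType": machine_type,  # e.g. a 1-, 2-, 4-, or 8-core type
        "image": image,               # Ubuntu 12.04 or CentOS 6.2
    }
    return url, json.dumps(body)
```

Management tools like those from Puppet Labs, Opscode, and RightScale would sit on top of calls shaped like this one.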

Google Compute Engine is pretty simple in terms of configuration and pricing. The basic VM slice has one virtual core, 3.75GB of virtual memory, and 420GB of disk capacity, and is rated at 2.75 Google Compute Engine Units (GCEUs), though Google does not explain how a GCEU relates to a real-world server slice.

The point is, Google guarantees a certain level of performance per virtual core no matter what the underlying iron is, and its virtualization layer enforces this. You can fire up virtual machines with 1, 2, 4, or 8 virtual cores, and the memory, disk, and performance all scale with the cores in a linear fashion. (Well, the disk is a little better than linear.) The starter VM costs 14.5 cents per hour and the price also scales linearly – it works out to 5.3 cents per GCEU per hour.
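The linear scaling above is easy to sanity-check. This sketch uses only the numbers quoted in the article – 14.5 cents per hour for the 1-core slice and 2.75 GCEUs per core – and confirms that the per-GCEU rate stays flat across the instance sizes.

```python
# Figures quoted in the article; everything else scales linearly from them.
BASE_RATE_CENTS = 14.5   # 1 core, 3.75GB RAM, 420GB disk
GCEUS_PER_CORE = 2.75


def hourly_cost_cents(cores):
    """Price scales linearly with the number of virtual cores."""
    if cores not in (1, 2, 4, 8):
        raise ValueError("instances come in 1, 2, 4, or 8 cores")
    return BASE_RATE_CENTS * cores


def cents_per_gceu_hour(cores):
    """Cost per GCEU-hour is constant across instance sizes."""
    return hourly_cost_cents(cores) / (GCEUS_PER_CORE * cores)


for n in (1, 2, 4, 8):
    print(f"{n} cores: {hourly_cost_cents(n):.1f} cents/hr, "
          f"{cents_per_gceu_hour(n):.2f} cents per GCEU-hour")
# 14.5 / 2.75 = 5.27, which Google rounds to the 5.3 cents quoted
```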

It costs nothing to upload data into the Compute Engine service, and you can move data between cloud services within the same geographical region for free as well. If you want to move data to a different zone in Google's data center infrastructure within the same region, you have to pay a penny per gigabyte, and ditto if you want to move to a different region within the US.

Exporting data to your facilities runs you between 8 and 12 cents per gigabyte in the Americas and EMEA regions on a sliding scale that decreases the price as you ramp up the terabytes. It will cost from 15 to 21 cents per gigabyte to move data out of the Google cloud to you in the Asia/Pacific region. Persistent storage runs 10 cents per GB per month.
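Sliding-scale egress pricing like this is usually billed in tiered bands. The article gives only the endpoints of the scale (12 cents per GB falling to 8 cents for the Americas and EMEA), so the tier breakpoints in this sketch are invented for illustration and are not Google's actual cutoffs.

```python
# Hypothetical tiers: (cumulative-GB cap, cents per GB in that band).
# Only the 12-cent ceiling and 8-cent floor come from the article.
AMERICAS_EMEA_TIERS = [
    (1_000, 12.0),          # first 1TB (hypothetical cutoff)
    (10_000, 10.0),         # next 9TB (hypothetical cutoff and rate)
    (float("inf"), 8.0),    # everything beyond, at the quoted floor
]


def egress_cost_cents(gigabytes, tiers=AMERICAS_EMEA_TIERS):
    """Charge each band of outbound traffic at its tier's per-GB rate."""
    cost, remaining, prev_cap = 0.0, gigabytes, 0
    for cap, rate in tiers:
        band = min(remaining, cap - prev_cap)
        cost += band * rate
        remaining -= band
        prev_cap = cap
        if remaining <= 0:
            break
    return cost
```

With these made-up tiers, 2TB of egress would bill the first terabyte at 12 cents per GB and the second at 10 – the sliding-scale effect the article describes.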

Bootnote: Persistent storage is priced per month, not per hour, as this story originally and erroneously suggested. ®

