Google network lord questions cloud economics

Does Amazon make sense at 100% use?

Vijay Gill — one of the brains that oversees Google's epic internal network — has questioned the economics of so-called cloud computing. Or at least, the sort of cloud computing practiced by Amazon.com, whose EC2 service offers up instant access to compute power via the interwebs. If your infrastructure is in use around the clock, rather than just here and there, he argues, it may be cheaper to own and operate your own gear.

"Think of it as taking a taxi vs. buying a car to make a trip between San Francisco and Palo Alto," the Google senior manager of production network engineering and architecture writes on his personal blog. "If you only make the trip once a quarter, it is cheaper to take a taxi. If you make the trip every day, then you are better off buying a car. The difference is the duty cycle. If you are running infrastructure with a duty cycle of 100%, it may make sense to run in-house."

And to prove his point, the Googler has put together a model that compares the costs of AWS and a colocation setup where you own the gear — at a 100 per cent duty cycle. The model assumes that managing AWS requires just as much effort as managing your own hardware, which may seem a stretch, but Gill backs this assumption with what seems to be a quote from a friend.

"You’d be surprised how much time and effort i’ve seen expended project-managing one’s cloud/hosting provider — it is not that different from the effort required for cooking in-house automation and deployment," the quote reads. "It’s not like people are physically installing the OS and app stack off CD-ROM anymore, I’d imagine whether you’re automating AMIs/VMDKs or PXE, it’s a similar effort."

Gill says his model errs on the high end for colocation prices — "so the assumptions there are conservative," he says — but according to his calculations, AWS is still considerably more expensive. Here's his bottom line for one particular Amazon scenario:

Vijay Gill's AWS cost model at a 100 per cent duty cycle

And the end result of the matching 100-per-cent-duty-cycle colocation setup:

Vijay Gill's colocation cost model at a 100 per cent duty cycle

You can browse the complete model here.
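Gill's spreadsheet isn't reproduced here, but the shape of the comparison is simple enough to sketch: metered hourly charges on one side, hardware amortised over its life plus rack space and power on the other. The Python below uses made-up placeholder figures rather than Gill's numbers or Amazon's price list; it shows the structure of the comparison, nothing more.

```python
# Minimal sketch of a duty-cycle cost comparison, in the spirit of Gill's model.
# Every price below is a hypothetical placeholder, not Amazon's or any colo provider's rate.

HOURS_PER_MONTH = 730

def aws_monthly_cost(instances, hourly_rate, duty_cycle):
    """Metered model: you pay only for the hours the instances actually run."""
    return instances * hourly_rate * HOURS_PER_MONTH * duty_cycle

def colo_monthly_cost(servers, server_price, amortisation_months, rack_and_power_per_server):
    """Owned gear: capital cost spread over the hardware's life, plus rack space
    and power, paid whether the boxes are busy or idle."""
    capex = servers * server_price / amortisation_months
    opex = servers * rack_and_power_per_server
    return capex + opex

# Hypothetical scenario: 20 machines running flat out, i.e. a 100 per cent duty cycle.
aws = aws_monthly_cost(instances=20, hourly_rate=0.40, duty_cycle=1.0)
colo = colo_monthly_cost(servers=20, server_price=3000,
                         amortisation_months=36, rack_and_power_per_server=150)

print(f"AWS (on-demand): ${aws:,.0f} per month")
print(f"Colocation:      ${colo:,.0f} per month")
```

With placeholder inputs like these, the owned gear comes out ahead at full utilisation, which is the only claim Gill is making; the real argument is over what the inputs should be.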

As one commenter on Gill's blog points out, the Google man uses Amazon's standard pricing for instances, rather than its "reserved pricing." If you know how many instances you're going to need, you can reserve them ahead of time at a lower price, and if you're assuming 100 per cent utilization, it makes sense to assume reserved instances.
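For illustration, the reserved-instance arithmetic works roughly like this: a one-off fee buys a lower hourly rate, so the effective hourly cost at full utilisation is that fee amortised over the term plus the discounted rate. The figures in this sketch are assumptions for the sake of the example, not Amazon's published pricing.

```python
# Sketch of reserved vs on-demand pricing at a 100 per cent duty cycle.
# The fee and rates are hypothetical placeholders, not Amazon's actual price list.

HOURS_PER_YEAR = 8760

def effective_hourly(upfront_fee, term_years, hourly_rate):
    """Spread the one-off reservation fee across every hour of the term."""
    return upfront_fee / (term_years * HOURS_PER_YEAR) + hourly_rate

on_demand = 0.40                               # hypothetical on-demand rate
reserved = effective_hourly(upfront_fee=1000,  # hypothetical one-off reservation fee
                            term_years=3,
                            hourly_rate=0.15)  # hypothetical discounted rate

print(f"On-demand: ${on_demand:.3f}/hr")
print(f"Reserved:  ${reserved:.3f}/hr at full utilisation")
```

The fewer hours you actually run, the fewer hours there are to absorb that upfront fee, so reservations only pay off for steady workloads, which is exactly the scenario Gill is modelling.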

The standard prices are for those moments when you need more juice right now. EC2 is short for Elastic Compute Cloud. It scales as you need it to scale. And as other commenters point out, Gill's model doesn't show the worth of Amazon's elasticity. "The 'elastic' component is a very appealing component of the EC2 (or any 'cloud') system," a commenter writes. "Being able to turn up/down single machines as needed is a huge capital expenditure burden that vanishes from the books, and hopefully the number of systems running then more closely matches income generated by those CPU cycles rather than just serving to warm up a big windowless building somewhere."

But again, Gill is assuming 100 per cent utilization. In his model, all CPU cycles are needed. He's merely arguing that in such cases, colocation may be the better option. But in a way, his model may actually highlight the worth of EC2 and its elastic nature. How many data centers actually maintain a 100 per cent duty cycle? And as the duty cycle percentage drops, doesn't EC2 become cheaper and cheaper by comparison?

"Even a 40% reduction in traffic for 40% of the day (not unreasonable in services that cater to specific geographies) would start to make EC2 look a bit more competitive," one commenter argues. "I do agree that for sizable installations that see steady load, that a self-operated data center makes the most sense, but the devil is in the details on any calculations."

EC2 needn't suit everyone. Just some. ®
