Google network lord questions cloud economics
Does Amazon make sense at 100% use?
Vijay Gill — one of the brains that oversees Google's epic internal network — has questioned the economics of so-called cloud computing. Or at least, the sort of cloud computing practiced by Amazon.com, whose EC2 service offers up instant access to compute power via the interwebs. If your infrastructure is in use around the clock, rather than just here and there, he argues, it may be cheaper to own and operate your own gear.
"Think of it as taking a taxi vs. buying a car to make a trip between San Francisco and Palo Alto," the Google senior manager of production network engineering and architecture writes on his personal blog. "If you only make the trip once a quarter, it is cheaper to take a taxi. If you make the trip every day, then you are better off buying a car. The difference is the duty cycle. If you are running infrastructure with a duty cycle of 100%, it may make sense to run in-house."
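Gill's taxi-versus-car argument boils down to simple arithmetic: a pay-per-hour cloud bill scales with utilization, while a colo bill is fixed. A minimal sketch in Python — all prices here are made-up placeholders, not Gill's actual figures:

```python
# Illustrative only: hypothetical prices, not Gill's model or Amazon's rates.
CLOUD_HOURLY = 0.40      # assumed on-demand price per instance-hour
COLO_MONTHLY = 150.00    # assumed all-in colo cost per server per month
HOURS_PER_MONTH = 730

def monthly_cost(duty_cycle):
    """Return (cloud, colo) monthly cost at a given duty cycle (0.0-1.0)."""
    cloud = CLOUD_HOURLY * HOURS_PER_MONTH * duty_cycle  # pay only for hours used
    colo = COLO_MONTHLY                                  # fixed, used or not
    return cloud, colo

for dc in (0.1, 0.5, 1.0):
    cloud, colo = monthly_cost(dc)
    print(f"duty cycle {dc:.0%}: cloud ${cloud:.2f} vs colo ${colo:.2f}")
```

With these placeholder numbers the cloud wins handily at a 10 per cent duty cycle and loses at 100 per cent — which is exactly the shape of Gill's claim.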
And to prove his point, the Googler has put together a model that compares the costs of AWS and a colocation setup where you own the gear — at a 100 per cent duty cycle. The model assumes that managing AWS requires just as much effort as managing your own hardware, which may seem a stretch, but Gill backs the assumption with what appears to be a quote from a friend.
"You'd be surprised how much time and effort I've seen expended project-managing one's cloud/hosting provider — it is not that different from the effort required for cooking in-house automation and deployment," the quote reads. "It's not like people are physically installing the OS and app stack off CD-ROM anymore. I'd imagine whether you're automating AMIs/VMDKs or PXE, it's a similar effort."
Gill says his model errs on the high side for colocation prices — "so the assumptions there are conservative," he says — but according to his calculations, AWS is still considerably more expensive. Here's his bottom line for one particular Amazon scenario:
And the end result of the matching 100-per-cent-duty-cycle colocation setup:
You can browse the complete model here.
As one commenter on Gill's blog points out, the Google man uses Amazon's standard pricing for instances, rather than its "reserved pricing." If you know how many instances you're going to need, you can reserve them ahead of time at a lower price, and if you're assuming 100 per cent utilization, it makes sense to assume reserved instances.
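The commenter's point is that a reservation trades an upfront fee for a lower hourly rate, so at full utilization its effective hourly cost undercuts the standard rate. A sketch of that amortization — the prices are illustrative assumptions, not Amazon's actual rate card:

```python
# Illustrative only: hypothetical prices, not Amazon's published rates.
ON_DEMAND_HOURLY = 0.40       # assumed standard hourly price
RESERVED_UPFRONT = 1000.00    # assumed one-off reservation fee
RESERVED_HOURLY = 0.15        # assumed discounted hourly rate
TERM_HOURS = 3 * 365 * 24     # a three-year reservation term

def effective_hourly(upfront, hourly, hours_used):
    """Amortise the upfront fee across the hours actually consumed."""
    return upfront / hours_used + hourly

print(f"on-demand: ${ON_DEMAND_HOURLY:.3f}/hr")
print(f"reserved at 100% use: "
      f"${effective_hourly(RESERVED_UPFRONT, RESERVED_HOURLY, TERM_HOURS):.3f}/hr")
```

Note the flip side: if utilization falls short, the upfront fee is spread over fewer hours and the reserved instance can end up dearer than on-demand — which is why the 100 per cent assumption matters.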
The standard prices are for those moments when you need more juice right now. EC2 is short for Elastic Compute Cloud. It scales as you need it to scale. And as other commenters point out, Gill's model doesn't show the worth of Amazon's elasticity. "The 'elastic' component is a very appealing component of the EC2 (or any 'cloud') system," a commenter writes. "Being able to turn up/down single machines as needed is a huge capital expenditure burden that vanishes from the books, and hopefully the number of systems running then more closely matches income generated by those CPU cycles rather than just serving to warm up a big windowless building somewhere."
But again, Gill is assuming 100 per cent utilization. With his model, all CPU cycles are needed. He's merely arguing that in such cases, colocation may be the better option. But in a way, his model may actually highlight the worth of EC2 and its elastic nature. How many data centers actually maintain a 100 per cent duty cycle? And as the duty cycle percentage drops, doesn't EC2 become cheaper and cheaper by comparison?
"Even a 40% reduction in traffic for 40% of the day (not unreasonable in services that cater to specific geographies) would start to make EC2 look a bit more competitive," one commenter argues. "I do agree that for sizable installations that see steady load, that a self-operated data center makes the most sense, but the devil is in the details on any calculations."
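That crossover point can be computed directly: it's the duty cycle at which a pay-per-hour bill equals the fixed colo bill. Again with placeholder prices (not Gill's or Amazon's figures):

```python
# Illustrative only: hypothetical prices, not real rates.
CLOUD_HOURLY = 0.40      # assumed on-demand price per instance-hour
COLO_MONTHLY = 150.00    # assumed all-in colo cost per server per month
HOURS_PER_MONTH = 730

# Below this utilization, the cloud bill dips under the fixed colo bill.
break_even = COLO_MONTHLY / (CLOUD_HOURLY * HOURS_PER_MONTH)
print(f"cloud is cheaper below a {break_even:.0%} duty cycle")
```

Everything hangs on where a given shop's real duty cycle sits relative to that break-even line — which is the commenter's "devil in the details."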
EC2 needn't suit everyone. Just some. ®