BlueLock: Risky cloud business

Admitting the imbalance is a good start

VMworld One of our first meetings at VMworld was with BlueLock, who have the distinction of being one of a small handful of cloud service providers participating in VMware’s big vCloud Datacenter initiative. We spent a bit of time grilling Pat O’Day, BlueLock CTO, in their booth and learned some new things about the cloud value proposition.

Full disclosure: I’m underwhelmed by the cloud concept. To me, private clouds are the cat’s ass (meaning: good), but they aren’t a mystery – they’re just a combination of sophisticated and robust virtualization with IT best practices.

One of the best things private clouds do is get business units out of the business of making IT decisions. In optimum private cloud instances, business units get to define an app and a set of SLAs but then leave all of the other decisions (platform, virtualized vs non-virtualized, etc.) to the data center.

With public clouds, I’m a bit of a heretic. I’m not quite a ‘cloud denier,’ but I think the cloud use case for anything above the smallest of SMBs is much more constrained than what most of the vendor and pundit communities seem to think.

To me, putting any important enterprise workload on a public cloud is a risk that most businesses will be loath to take on. Why? There is an imbalance of risk. A security breach or an outage, or even below-par performance, can spell big problems for the customer – but, typically, only a minor loss of revenue (at worst) for the cloud provider.

I hit BlueLock’s O’Day with these concerns, with my main attack focused on quality of service and indemnification in case of business loss. My example was an outage in the public cloud datacenter; his initial defense cited redundant power, running the gamut from system power supplies… to backup generators… to having 10,000 hyperactive first graders under contract to jog on power-generating treadmills. (Okay, I made that last one up, but is it such a bad idea?)

I continued to press O’Day, asking how I, as a customer, would be made whole in the case of a significant outage in the cloud. His response was a breath of fresh air: he was the first cloud advocate to openly admit and discuss the huge imbalance of risk between the cloud provider and customer.

Most cloud providers will, in the case of an outage or disruption, apologize profusely and cheerfully refund a pro-rated share of the monthly fee. The customer, on the other hand, is looking at a check that represents only a tiny fraction of the damage they’ve incurred due to the outage.

O’Day discussed how they negotiate unique SLAs with their clients and how there are mechanisms in place that address some of the risk on the customer side. For example, clouds residing in BlueLock datacenters can be captured and placed in escrow with a third party, meaning that they can be quickly deployed to another cloud provider (or internally) if BlueLock were to suddenly disappear.

What really caught my interest was when he mentioned the ability to cover business risk with insurance policies. These policies can be configured to cover the liability arising from a protracted outage or, assumedly, something like a security breach that is the fault of the cloud provider and results in business loss. There is going to be a lot of fine print to read, understand, and negotiate to get to the point where you are comfortable that your risk is covered, but it’s good to know that it’s available.

I don’t think that we’ll see enterprises put anything important out onto public clouds without the ability to reduce their risk via these mechanisms. And I have yet to talk to a sizeable enterprise datacenter chief (I mean the enterprise is sizeable, not the datacenter chief) who is enthusiastic about putting key workloads onto public clouds.

There is, of course, a cost to insurance and to writing/monitoring SLAs as well, plus the cost of using the cloud and ensuring that there is enough capacity to meet your needs. This is a complex topic that bears a lot of examination.

Once you add in all the costs inherent in running an enterprise workload in the cloud, will it still cost you less money than running it on your own gear in your own house? I have my doubts about that.

To me, some things are slam dunks for clouds. Web hosting and contracting extra capacity to handle short-term spikes, whether from increased demand or testing needs, are two places where public clouds make a lot of sense.

But using public clouds as an ongoing home for a large portion of enterprise apps? I’m not so sure. It begins to look a lot more like a traditional outsource rather than a hip, revolutionary (and cool!) usage model that will change the world as we know it.
