Ten years on: How did that cloud strategy pan out?

How to avoid vendor lock-in

In the driver's seat

Rackspace Hosting is happy to sell you the compute, storage and networking capacity to build your cloud. You can rent it in the Rackspace public cloud or you can take a slice of the Rackspace Cloud software and install it in your own data center.

You might want the latter for data security reasons, running the OpenStack cloud controller yourself and, if you choose, relying on what Rackspace calls Fanatical Support, the same service that backs the Rackspace public cloud.

Tony Campbell led cloud software development teams at Rackspace for nearly a decade before being tapped, a year and a half ago, to build the company's OpenStack training and certification operation.

He says: "The first thing you need to ask yourself is do you really want to be in the business of running that cloud? It's not that a racing car driver doesn't know how to change the tires but he wants to focus on winning the race. You can outsource all of that to a pit crew, and that is where companies like Rackspace come into play."

The other big question to consider, says Campbell, is whether your workloads will actually benefit from the rapid scaling up and down of capacity across clusters of machines that a cloud controller provides.

"Ask yourself this: does your application require that kind of capability?" he says. "If the app is built to handle that, then cloud is a good move. If I am just trying to save some hardware space or drive up utilisation, then virtualization is a good move."

Rackspace is telling customers to watch their costs when they build their first cloud. "I don't think people are doing that math. And on a small scale cloud, that's not an issue," says Campbell.

"But when your cloud grows, costs will become important very quickly. The bigger you grow, the more you will need to pay attention to that."

Take a deep breath

The first thing Red Hat wants you to do when considering your first cloud is to take a deep breath. Then take another one. Take your time. Think about where you want your data center to be 10 years from now. Because the decisions you make on this first cloud will have a profound and lasting effect.

The server and storage platforms you have to keep limit your choices. Your data center is probably more like a brownfield post-industrial zone than a greenfield.

You want to bring as many of those platforms as possible into this cloudy world, and in as open a way as possible.

If there is one thing you know your company is not going to do after spending so much on building applications for disparate platforms, it is rip them out and replace them on some new cloud stack. If you didn't dump everything and move to Unix, or to Windows and Linux, you aren't going to do it for cloud.

That is why Red Hat advocates an approach called open hybrid cloud. It is unapologetic about the fact that the situation is more complex than those peddling vCloud, OpenStack, CloudStack or SmartCloud would have you believe.

As far as Che is concerned, the other option is "cloud in a box", which means picking a particular cloud controller, say OpenStack, and building a cloud for a particular application or set of similar applications.

If you build a private cloud, you still need a way to integrate and manage the various private and public clouds, as well as the physical servers that are still running applications. That is where Red Hat's CloudForms management platform comes in.

An open hybrid cloud puts customers in control

Red Hat CloudForms is one example of an open hybrid cloud product. It supports multiple virtualization and public cloud providers, running on top of Red Hat Enterprise Virtualization, VMware's vSphere and Amazon Web Services' EC2. According to Red Hat, an open hybrid cloud puts customers in control of their cloud infrastructure and allows them to select the right infrastructure for the right job. This supplies a flexibility not offered by a cloud in a box, Red Hat argues.
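
CloudForms itself is not detailed here, but the general idea of writing to one management layer rather than to each vendor's API is easy to sketch. The example below uses the Apache Libcloud Python library – our illustration, not anything Red Hat ships – and the credentials and endpoints are placeholders.

```python
# Sketch: one client-side abstraction over several clouds, in the spirit of
# open hybrid management. Uses Apache Libcloud; keys and endpoints are fake.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# A public cloud (Amazon EC2)...
ec2 = get_driver(Provider.EC2)('ACCESS_KEY', 'SECRET_KEY', region='us-east-1')

# ...and an in-house OpenStack private cloud.
private = get_driver(Provider.OPENSTACK)(
    'admin', 'PASSWORD',
    ex_force_auth_url='http://keystone.internal:5000',
    ex_force_auth_version='2.0_password',
    ex_tenant_name='demo')

# The same calls work against either back end, so inventory and provisioning
# logic is written once rather than once per vendor.
for driver in (ec2, private):
    for node in driver.list_nodes():
        print(driver.type, node.name, node.state)
```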

"The challenge is that if your solution can work with only one hypervisor and the set of clouds that are compatible with it, what are you going to do with your other hypervisors such as Hyper-V and KVM?" says Che.

"What are you going to do with developers who are going out to Amazon? What are you going to do with physical systems? You are basically turning your virtualization silo into a cloud silo and you are making it a whole lot easier to manage, but you are still not solving that fundamental problem of IT complexity."

There is another larger and potentially more difficult issue. Just as cloud controllers have their own APIs, so do the frameworks that support modern, webby applications such as Heroku, CloudFoundry and Force.com. And you have to take these APIs into account too.

"You have to think about whether you will be able to write your applications in the languages and frameworks of your choice, and deploy them to the cloud of your choice," says Che.

"This becomes a huge issue once you move beyond worrying about how you are going to manage your cloud to how you are going to put applications on it. The app has to be portable – meaning you use the same release manager, the same deployment tools and so on, or you will never take advantage of that portability."

And thereby avoid vendor lock-in. This could give you a headache now, but you might save yourself some headaches later.
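
One common way to get the portability Che describes – same code, same release and deployment tooling, whatever the cloud underneath – is to push every provider-specific detail into configuration. The sketch below is our own illustration of that convention, not something Red Hat prescribes.

```python
# Sketch: a portable app that keeps every provider-specific detail in the
# environment, so the same artifact deploys unchanged to any cloud or PaaS.
# The variable names are illustrative.
import os
from wsgiref.simple_server import make_server

PORT = int(os.environ.get("PORT", "8080"))
DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://localhost/dev")

def app(environ, start_response):
    # The code never names a particular cloud; only the config does.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"using {DATABASE_URL}\n".encode()]

if __name__ == "__main__":
    make_server("", PORT, app).serve_forever()
```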
