What is the difference between Virt and Cloud?

Pin your ears back and we'll tell you

By Timothy Prickett Morgan

Posted in Cloud, 17th May 2013 09:04 GMT

There's a lot of talk – some might say hot air – about cloud computing, what it is and what it is not. Ask 10 people and you will probably get 15 answers.

Take the formal definition of cloud put forward by the National Institute of Standards and Technology (NIST), the arm of the US Department of Commerce that has been obsessed with measurements and definitions for more than a century.

It took 15 revisions and nearly three years for NIST to come up with its formulation of what constitutes a cloud. It was released in September 2011 and the grousing has continued ever since about how this, that and the other needs to be added to the definition.

High five

NIST defines cloud as having five essential characteristics, three service models and four deployment models.

The five essential characteristics are on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service.

The three service models – infrastructure, platform and software as a service – cloudify the infrastructure, platform or application layer and expose it as a service with the characteristics above, each at an increasing level of abstraction away from the underlying servers, storage, switching and systems software.

NIST recognises private clouds (built for exclusive use), public clouds (run by a service provider with capacity and services shared by multiple tenants), and community clouds (organised around a group of users rather than a particular technology).

Hybrid cloud, in the NIST definition, is a composition of two or more distinct cloud infrastructures that remain separate entities but are bound together by technology that enables cloud bursting, load balancing and application and data portability across the different kinds of cloud.

Virtualization – by which we mean abstracting server compute and memory capacity, as well as networking I/O and storage capacity, whether it resides in those servers or in external arrays – is obviously the key means of enabling resource pooling.

It's API hour

"But there is more to it than that," says Tony Campbell, director of OpenStack training and certification operations at Rackspace.

"With OpenStack, we are really big on APIs. We think that for it to be a cloud, everything has to be accessible via an API. This allows developers to write applications for desktops, mobile devices or whatever thin or thick clients they like because the APIs expose all of that functionality. So virtualization without an API – not cloud."

In Campbell's augmented definition, elasticity, or the ability to fire up more virtual machines or fatter ones on a hypervisor, is not sufficient.

"The cloud has spoiled us," he says. "We know we can click on a dashboard and instantly have access to more resources. And we are addicted to that. Standing up bare metal, installing a hypervisor and releasing virtual machines on it – and that process taking several days – is no longer acceptable."

So speeding up virtualization and access to virtual CPU, memory, I/O and storage capacity is, for some, also part of the cloud definition.

VMware, which is trying to extend its dominance in x86 server virtualization into a similar juggernaut position in cloud computing with its vCloud Suite, wants to add network and storage virtualization to the definition of what comprises a cloud.

Distant memory

"Virtualization is simply the abstraction of compute and memory, and in its current instantiation at the cluster level. Cloud computing – done right – is about going beyond those two constraints to the full set of data centre services," says Neela Jacques, director of product marketing for VMware's cloud infrastructure suite.

“You truly have to virtualize networking and storage arrays. We have to take the concepts that started with virtualization and take them up to the nth level – being able to load balance across clusters and going beyond just compute and memory."

To some people, says Jacques, cloud means something different from what NIST, Rackspace, VMware and their peers would generally agree on. Vendors with expertise in systems management and provisioning tools want to solve the complexity problem – the complexity that afflicts all data centres and is the reason companies are willing to engage in cloud computing in the first place.

They want to hide that complexity behind one thin layer that sits between the end user and the infrastructure, and then script all of the resources on the disparate gear underneath to work together.

Vendors taking this approach to cloud include BMC, CA Technologies, IBM, and Cisco Systems with its acquisitions of NewScale and Tidal Software.

"If you are a management vendor, cloud looks an awful lot like management," says Jacques.

"They have a CMDB (configuration management database), they have extensive orchestration, they have a support desk and a catalogue already. Basically, they go with what they know.

"What management vendors would like you to swallow is the idea that the world is complex, it is always going to be complex and you are not going to be able to simplify it, so what you should do is buy a scripting platform so you can provision to any one of those things."

VMware has made investments to bolster its position in the hybrid cloud arena, particularly with the acquisition of DynamicOps in July 2012. It is unabashed about wanting to be the dominant cloud provider, and about the fact that for most customers today hybrid cloud means VMware inside the firewall and Amazon EC2 outside it.

"When we talk about cloud, customers have a basic virtualized environment and we want to make that environment better," says Jacques.

"That means increasing performance so more workloads can move onto hypervisors, and supporting new technologies like SR-IOV. It is already superior to the physical world but we have to make it easier, which is what the vCloud Suite is all about.

“For VMware, the biggest impact we can have is to deliver the best platform for all apps, and that is where we put 80 per cent of our efforts. We recognise, however, that people have environments beyond that and we are making investments via what we are building as well as acquisitions to cover more of them."

Why hybrid cloud and virtualization are different

That may not be a denial of hybrid cloud computing, but it is not a strong endorsement either. You need to look to a company such as Red Hat for a strong statement on hybrid cloud and why it differs from virtualization.

"With virtualization, I am trying to take existing applications and servers and make them more efficient," says Bryan Che, general manager for the cloud business unit at Red Hat.

"You want to drive up the density, putting lots and lots of virtual machines on as few servers as possible, and still give yourself flexibility to deal with heterogeneous hardware.

“With cloud computing, I am trying to build for elasticity, to make my infrastructure scalable instead of trying to concentrate everything on one small rack of servers. It is about giving users fast access to something reasonably efficient."

Open to all

If anyone is banging the drum about hybrid, it is Red Hat. The company started the Deltacloud API stack to create a layer of translation software that lets all the different public and private clouds be controlled from a single console, such as Red Hat's own CloudForms.
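
To see what that translation layer buys you, consider this hedged Python sketch: the same HTTP request lists running instances on two different clouds, because each one sits behind its own Deltacloud endpoint speaking one common REST dialect. The ports, driver names, paths and credentials here are illustrative assumptions, not checked configuration.

```python
# Hedged sketch of the Deltacloud idea: one REST dialect, many clouds.
# Assumes a Deltacloud daemon running per backend; ports, paths and
# credentials below are illustrative placeholders.
import requests

BACKENDS = {
    "public-ec2":   "http://localhost:3001/api",  # assumed EC2-backed daemon
    "private-rhev": "http://localhost:3002/api",  # assumed RHEV-backed daemon
}

def list_instances(api_root, user, password):
    """The same GET works whatever cloud sits behind the Deltacloud layer."""
    resp = requests.get(api_root + "/instances",
                        auth=(user, password),  # passed through to the backend
                        headers={"Accept": "application/json"})
    resp.raise_for_status()
    return resp.json()

for name, root in BACKENDS.items():
    print(name, "->", list_instances(root, "USER", "SECRET"))
```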

Red Hat understands the complexity of internal IT operations out there in the real world, according to Che.

"The public cloud providers have the luxury of being able to stamp out thousands or tens of thousands of servers at a time, all based on the same infrastructure and all running the same software," he says.

"They don't have to worry about legacy applications, they don't have a lot of heterogeneous infrastructure that they are managing.

"Enterprises, on the other hand, have been virtualizing their infrastructure and that has created a bunch of management challenges such as virtual machine sprawl.

“They have different virtualization clusters all over the place, running on different hardware, usually managed by different groups. They have multiple hypervisors.

"Developers are going into Amazon and other public clouds and they have no idea how to make it consistent or portable across those environments. And, the majority of their workloads are still running on physical systems."

The issue, as Red Hat sees it, is that hybrid capability is what makes a cloud more than just server virtualization. It is not just an option but a requirement of the definition.

And by hybrid, Red Hat means something more sophisticated than building a private cloud based on a particular virtualization hypervisor and then finding a compatible public cloud to burst onto in short order if you need more compute capacity.

Cloud bursting is a niche case right now anyway, appropriate only for workloads with modest data sets and heavy compute demands (some high-performance computing applications fit this profile).
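
As a rough illustration of the pattern, the toy sketch below keeps work inside the firewall until private utilisation crosses a threshold, then overflows onto rented public capacity. Every function in it is a hypothetical stand-in; a real scheduler would query its monitoring system and call a provisioning API.

```python
# Toy sketch of cloud bursting: overflow compute-heavy jobs to a public
# cloud once private capacity is nearly full. All helpers below are
# hypothetical stand-ins, not a real API.
import random

BURST_THRESHOLD = 0.85  # burst above 85 per cent private utilisation

def private_cloud_utilisation():
    # Stand-in: a real scheduler would ask its monitoring system.
    return random.random()

def run_on_private_cloud(job):
    print(job, "-> running inside the firewall")

def run_on_public_cloud(job):
    print(job, "-> bursting onto rented public capacity")

def schedule(job):
    """Compute-heavy, data-light jobs are the ones worth shipping out,
    since only the job description, not a big data set, crosses the wire."""
    if private_cloud_utilisation() < BURST_THRESHOLD:
        run_on_private_cloud(job)
    else:
        run_on_public_cloud(job)

for j in ("render-frame-001", "render-frame-002", "render-frame-003"):
    schedule(j)
```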

Not everyone will agree on that definition of hybrid cloud, of course. But it is something Red Hat strongly believes that companies need to consider – before they get too far into any one set of cloud technologies.

"No one is going out and instantly transforming their entire data centre into a cloud," concedes Che.

"They start with one particular environment, one set of workloads and so on. But the important thing is which approach did you take? If you started out with something that was fundamentally open, and then even if your initial deployment is only on top of VMware or a particular set of hardware, I am able to extend that cloud over new infrastructure and I don't create these silos all over again."

Climate change

That is why Red Hat has spent some time creating the Deltacloud APIs and CloudForms cloud management tools. It is trying to get customers to think beyond the one little cloud they are building today and see the wider weather pattern they are creating across their data centres and public cloud partners.

El Reg would add yet another twist to the cloud definition, albeit an idealised tweak that would be nice if it were true: you can't call it a cloud unless you can get off it.

Perhaps Sir Mick would agree. But the hybrid nature that Red Hat is espousing (and that VMware believes will be only for special cases that account for maybe 20 per cent of the workloads on x86 servers, as Linux does today) is certainly not something other vendors talk about much.

They admit there is a slew of different equipment, operating systems, hypervisors and public cloud capacity being used. But they are not working together to allow application and data portability across corporate data centres, between data centres and public clouds, and across public clouds in a way that would give IT shops true flexibility.

Perhaps the problem is too tough and vendors just want to call something a cloud to get it past the bean counters.

We will get to a fully virtualized, automated data center eventually, and by that time we probably won't call it cloud any more. We will just call it computing. Or, perhaps more hilariously, processing.

In the meantime, the distinctions between well-established server virtualization and evolving cloud computing will be important. ®