When hyperconvergence meets the cloud (but who will need it?)

Not for the big boys or small

Hyperconvergence is a term that's being bandied about all over the place. Whatis.com tells us that it's “a type of infrastructure system with a software-centric architecture that tightly integrates compute, storage, networking and virtualization resources and other technologies from scratch in a commodity hardware box supported by a single vendor”.

In short, it's about having an entire virtual server infrastructure in a box. The idea is that you can start small with a single device, then add further identical units to scale performance and resilience - all without having to know how to manage network kit, storage kit, server kit and the hypervisor independently.

This is a super concept if you're building what some people call a “private cloud” (a term I dislike: it's actually just an on-premise virtual server infrastructure that someone's given a new name to).

Unless your organisation is above a certain size, a modular setup you can add to as needs grow is an attractive one, since it demands fewer support skills than employing specialists to tune the server, storage, network and hypervisor layers of a multi-component product.

What about the cloud: Is hyperconvergence relevant? And do we care (that is, is the relevance of hyperconvergence relevant)? Before we disappear up our own orifices of relevance, let's have a look at the cloud approaches available to us.

The basic end

At the basic end of cloud computing we have simple setups where you create your servers based on the number of virtual CPUs you want, the amount of memory you need and the amount of disk space you require.

These are fine in the average case, and you'll generally get the level of resource you asked for: the hypervisor deals with sharing the underlying physical hardware (which you don't have to know anything about) between you and the other people using it. There may be times when multiple customers have resource spikes at once, and in such cases you won't get your full allocation, but the majority of the time all will be as expected.
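
To put rough numbers on that sharing, here's a minimal Python sketch of a deliberately oversubscribed host. All the figures are invented for illustration, and no particular provider's API is implied:

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    vcpus: int
    ram_gb: int

# Hypothetical host: 32 physical cores, 256 GB RAM.
HOST_CORES = 32
HOST_RAM_GB = 256

# Eight tenants each ask for 8 vCPUs; the hypervisor time-slices the
# physical cores, which works fine until everyone spikes at once.
vms = [VM(name=f"tenant-{i}", vcpus=8, ram_gb=32) for i in range(8)]

total_vcpus = sum(vm.vcpus for vm in vms)
total_ram = sum(vm.ram_gb for vm in vms)

print(f"{total_vcpus} vCPUs on {HOST_CORES} cores "
      f"({total_vcpus / HOST_CORES:.0f}:1 oversubscription)")
print(f"{total_ram} GB allocated of {HOST_RAM_GB} GB physical")
# 64 vCPUs on 32 cores (2:1 oversubscription)
# 256 GB allocated of 256 GB physical
```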

Do hyperconverged appliances fit here within the cloud provider's setup? Maybe, but if you're trying to share resources (notably RAM and particularly CPU), this can generally be done more efficiently on a smaller number of big servers than on a larger number of smaller boxes.

That's not to say it can't be done, but if you have customers who want hefty piles of CPU (for a database server, say), the maximum number of vCPUs you can allocate to a single machine will generally be the number that fit in one physical enclosure.
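
A toy calculation, again with invented node sizes rather than real product specs, shows where that ceiling comes from:

```python
# Invented node sizes, not real product specs. A VM cannot span
# enclosures, so its vCPU ceiling is whatever one box can offer.

SMALL_NODE_CORES = 16   # a modest hyperconverged appliance
BIG_HOST_CORES = 96     # a large conventional rack server

def max_vm_vcpus(host_cores: int, hypervisor_overhead: int = 2) -> int:
    """Largest single VM a host can carry: its cores minus a couple
    reserved for the hypervisor itself."""
    return host_cores - hypervisor_overhead

print(max_vm_vcpus(SMALL_NODE_CORES))  # 14 - too small for a hefty database
print(max_vm_vcpus(BIG_HOST_CORES))    # 94 - plenty of headroom
```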

The middle ground

If you need a bit more control or assurance over your virtual resources, you can look to one of the providers that gives actual guarantees over the resources you have.

As you'll know if you've ever run up a virtualisation infrastructure in-house, hypervisors generally allow you to specify a minimum resource allocation for one or more virtual machines, so you know those machines will always be guaranteed the resource they've been allocated. In fact, if you overallocate the hardware by promising more in guaranteed minima than physically exists, any VM that can't nab its allocation will simply refuse to boot.
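
That refusal to boot is essentially an admission-control check. Here's a schematic Python sketch of the logic with made-up numbers; real hypervisors implement it with rather more nuance:

```python
# Schematic admission control: a VM with a guaranteed minimum is only
# allowed to power on if the host can still honour every reservation.

HOST_RAM_GB = 256

# Minima already promised to running VMs.
running_reservations_gb = [64, 64, 96]

def can_power_on(requested_gb: int) -> bool:
    committed = sum(running_reservations_gb)
    return committed + requested_gb <= HOST_RAM_GB

print(can_power_on(32))  # True: 224 + 32 just fits in 256
print(can_power_on(64))  # False: this VM refuses to boot
```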

Rather than allocating resource to a specific VM, the provider creates a container with guaranteed resource and lets the customer share it out as they see fit (VMware call it a Virtual Data Centre, for example), but the idea's there - you're getting guaranteed resource. And you'll pay for it, since the service provider's model for keeping stuff cheap is to allow hardware to be shared between customers using automated optimisation software.
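
To make the container idea concrete, here's a sketch of a hypothetical ResourcePool class - the general shape of something like a Virtual Data Centre, not VMware's actual API, and with invented capacities:

```python
# An invented model of a pool of guaranteed resource that the customer
# carves up themselves. Illustrative only - no vendor's real interface.

class ResourcePool:
    def __init__(self, vcpus: int, ram_gb: int):
        self.free_vcpus = vcpus
        self.free_ram_gb = ram_gb

    def deploy(self, name: str, vcpus: int, ram_gb: int) -> bool:
        """Customer self-service: succeeds while the pool has headroom."""
        if vcpus > self.free_vcpus or ram_gb > self.free_ram_gb:
            return False  # pool exhausted - time to buy a bigger one
        self.free_vcpus -= vcpus
        self.free_ram_gb -= ram_gb
        return True

pool = ResourcePool(vcpus=32, ram_gb=128)         # what the customer pays for
print(pool.deploy("web", vcpus=8, ram_gb=32))     # True
print(pool.deploy("db", vcpus=16, ram_gb=64))     # True
print(pool.deploy("batch", vcpus=16, ram_gb=64))  # False: only 8 vCPUs / 32 GB left
```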

In terms of hyperconvergence opportunities for the service provider, the suitability's not unlike the previous example, as all you're really doing is wrapping some more resource guarantees around the hardware.

Guaranteed hardware

I have an intense dislike of the term “bare metal cloud”. Because it's not the cloud. It's a dedicated server that is entirely allocated to you, and there's no hypervisor level because it's a physical server. It's where managed hosting has been since I was a child (well, nearly). It's hard to see how hyperconvergence fits here, because the whole point is that you're not virtualising and sharing the hardware.

So it works on-prem ...

Hopping back for a moment to the on-prem concept we mentioned earlier … on-premise setups are like home stereo systems. If you're a connoisseur/expert, you can buy a bunch of separate, best-of-breed modules with a top-end set of speakers and get a sound that's second to none.

If you're the average person in the street who can't tell the difference, you can buy an all-in-one hyperconverged unit that's not quite as loud and probably doesn't have quite the same reproduction, but which is absolutely fine in the average case, and which (okay, stretching the analogy here) you can hook up to identical models to make a bigger noise.

And in the cloud?

In the cloud … well, you don't actually care. If you go for “bare metal cloud” you haven't got a cloud setup, you've got a managed server setup. If you go for a traditional model and you demand guaranteed memory, disk and CPU allocations then you actually don't care how it's achieved as long as it's achieved.

A large provider is likely to want to milk every last CPU cycle out of its equipment and hypervisor, and will have a carefully designed infrastructure made up of best-of-breed storage, network, server and virtualisation layers, constantly monitored and tuned for optimisation.

In the middle, a small to medium cloud provider may well choose hyperconverged appliances to implement its setup, because they're easy and inexpensive to get started with and can then scale to a decent extent without ridiculous loss of performance.

In short

Hyperconvergence in the cloud? Well, at the low end it's not going to be all that relevant because that's the market where the vendor has socking big servers sharing resource between as many customers as possible. At the high end it's all bare metal so the entire virtualisation concept is largely irrelevant.

It does fit the mid-range providers whose financial model is to scale the infrastructure with demand. We'll inevitably see ever more powerful appliances with better and better interconnects that allow them to perform increasingly well in clustered configurations. And it fits quite well with the model where the vendor provides guaranteed CPU/memory resource, albeit with a need to consider how resilience is provided.

As far as the customer is concerned: they don't really care as long as it does what it says on the tin. ®
