
Server virtualisation is not enough

What needs to change to build a private cloud?

The server computing architecture at the heart of what we now call the private cloud has been rethought several times over. The game-changer – the capability that makes the private cloud possible today – is virtualisation, which allows logical workloads to be fully abstracted from the physical machines beneath them.

The famous Moore’s Law has helped, in that today’s processors are more than capable of running multiple instances of an operating-system-plus-application combo.

Do the math

Add clever software that enables virtual instances to roll on and off a server like cars on a Channel Tunnel shuttle, and you have the starting point for the private cloud.

It would be a mistake, however, to think that server virtualisation can do it all by itself. It’s a question of maths: if you’re looking to achieve consolidation ratios of five, ten or 20 virtual servers to one physical machine, those figures need to be supported by the entire IT architecture inside and outside the server, not just by the CPU.
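By way of illustration only, a back-of-the-envelope sizing calculation along these lines might look like the following Python sketch. The per-VM figures and target ratios are entirely hypothetical; the point is simply that every resource scales with the consolidation ratio, not just the CPU.

    # Back-of-the-envelope consolidation sizing (all figures hypothetical).
    # For a target ratio of N virtual servers per physical host, every
    # resource on the host -- not just CPU -- has to cover N workloads.

    PER_VM = {              # assumed average demand per virtual server
        "vcpus": 2,
        "ram_gb": 4,
        "disk_iops": 150,
        "net_mbps": 50,
    }

    def host_requirements(consolidation_ratio):
        """Total host capacity needed to carry that many VMs."""
        return {resource: demand * consolidation_ratio
                for resource, demand in PER_VM.items()}

    for ratio in (5, 10, 20):
        print(ratio, host_requirements(ratio))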

Take RAM, for example. While traditional workloads may use only a fraction of the available processor cycles, they can easily consume most, if not all, of the physical memory.

This is not only because operating systems tend to work the memory as hard as they can, but also because poorly written applications can insist on loading unnecessary functionality, or fail to de-allocate memory when it is no longer required.

Failing memory

You may not need ten times as much memory to run ten virtual machines, but you do need to think about how much you should be putting in each server. Some, but not all, virtualisation solutions allow memory to be over-committed – that is, the virtual machines are allocated more memory between them than is physically installed, on the assumption that they won’t all need their full allocation at once.

You still need to size the physical RAM in each server up front, however, particularly in private cloud environments where, in theory, you don’t know what you will want to run where.
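As a rough illustration of that sizing exercise – and nothing more – the sketch below works out the physical RAM for a host given an assumed over-commitment ratio. The per-VM allocations, the 1.25 ratio and the hypervisor overhead figure are all invented for the example.

    # Rough RAM sizing for a virtualisation host (illustrative assumptions).
    # Over-commitment lets the sum of VM allocations exceed physical RAM,
    # on the bet that the guests won't all peak at the same time.

    def physical_ram_needed(vm_allocations_gb, overcommit_ratio=1.25,
                            hypervisor_overhead_gb=4):
        """Physical RAM to install, given per-VM allocations and an
        assumed over-commitment ratio (allocated : physical)."""
        allocated = sum(vm_allocations_gb)
        return allocated / overcommit_ratio + hypervisor_overhead_gb

    # Ten guests allocated 8GB each, over-committed by 25 per cent:
    print(physical_ram_needed([8] * 10))   # 68.0 -> fit 72GB or more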

Get the processor and RAM right, and the next thing you need to think about is server I/O. Again, the calculation is simple: if you have ten computers, say, all running on the same box, what happens when they all want to transmit data or access storage at the same time?
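The sum is easy enough to sketch in code: compare the aggregate demand of the guests with the capacity of whatever they share. The figures below are invented purely to show the arithmetic.

    # Will a shared link cope if every guest transmits at once?
    # All figures are hypothetical, purely to show the arithmetic.

    def link_saturated(vm_count, per_vm_mbps, link_capacity_mbps):
        """Return the aggregate demand and whether it exceeds capacity."""
        demand = vm_count * per_vm_mbps
        return demand, demand > link_capacity_mbps

    demand, saturated = link_saturated(vm_count=10, per_vm_mbps=200,
                                       link_capacity_mbps=1000)
    print(demand, "Mbit/s demanded; saturated:", saturated)
    # 2000 Mbit/s against a single gigabit link: the I/O path is the bottleneck.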

Server technologies have been advancing to meet the parallelism needs of multiple virtual machines – Intel’s 7500 chipset (formerly codenamed Boxboro), whose PCI Express I/O hub was designed with virtualisation in mind, is one example.

Message in a bottleneck

The server’s internal bus architecture is only the start of a chain of electronics leading out to disk-based storage, all of which needs to be able to handle the potentially increased throughput. Any link in that chain can become the bottleneck.

So scale the servers right, then look at networking requirements, then storage. While network engineers might say it’s simply a case of deploying bigger boxes that can support gigabit Ethernet, few would deny the issues that emerge in the storage layer.


We will discuss storage in more detail in another article, but now let's look at backups as a simple, yet blindingly obvious, example of the challenges to be faced.

Most applications need to be backed up, as do operating systems, and indeed entire virtual machines. Aside from the fact that it would be a huge mistake to back up all virtual instances on a single server at the same time, you might quite easily end up backing up the same information twice, putting further pressure on I/O.

Intelligent design

In further articles we will also discuss management, policy, business alignment and all that jazz. For now the question is: given the complexity of today’s IT environments, the variations in size and scale of the applications we want to deploy, uncertainty about usage levels and so on, is it possible to define a generic private cloud that will cope with anything the organisation might throw at it?

The answer is, in principle, yes – but only if careful consideration, planning and design have been applied to every link in the chain.

It is not just about having super-fast kit. Some pretty simple decisions can be made, such as locating data storage as close as possible to the virtual machines that use it, or defining a staggered backup policy that won’t bring a server down.
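A staggered policy needn’t be elaborate, either. Something as simple as the sketch below – with hypothetical guest names and an invented backup window – spreads start times evenly so the guests on one host don’t all hit the I/O path at once.

    # Spread backup start times for the guests on one host so they don't
    # all hammer the I/O path simultaneously (hypothetical names/window).
    from datetime import datetime, timedelta

    def staggered_schedule(vm_names, window_start, window_hours=6):
        """Give each VM an evenly spaced backup start within the window."""
        step = timedelta(hours=window_hours) / max(len(vm_names), 1)
        return {vm: window_start + i * step for i, vm in enumerate(vm_names)}

    vms = ["guest-%02d" % n for n in range(1, 11)]
    for vm, start in staggered_schedule(vms, datetime(2010, 11, 1, 22, 0)).items():
        print(vm, start.strftime("%H:%M"))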

However dynamic IT becomes, the private cloud can never be a magic bullet that overcomes poor architectural decision-making. ®
