Server virtualisation is not enough

What needs to change to build a private cloud?

The server computing architecture that lies at the heart of what we now refer to as the private cloud has been rethought several times over. The game-changer – the capability that makes the private cloud possible today – is virtualisation, which enables logical workloads to be fully abstracted from the physical machines that run them.

The famous Moore’s Law has helped, in that today’s processors are more than capable of running multiple instances of an operating-system-plus-application combo.

Do the math

Add clever software that enables virtual instances to roll on and off a server like cars on the Eurotunnel Shuttle, and you have the starting point for the private cloud.

It would be a mistake, however, to think that server virtualisation can do it all by itself. It’s a question of maths: if you’re looking to achieve consolidation ratios of five, ten or 20 virtual servers to one physical machine, the figures need to be supported by the entire IT architecture within and outside the server, not just the CPU.
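
As a back-of-the-envelope illustration – every figure below is a hypothetical assumption, not vendor sizing guidance – the arithmetic looks something like this:

```python
# Hypothetical back-of-the-envelope sizing; all figures are illustrative
# assumptions, not vendor guidance.
per_vm = {"vcpus": 2, "ram_gb": 4, "io_mbps": 50}   # assumed average VM footprint
ratio = 10                                          # target VMs per physical host

host_needs = {resource: amount * ratio for resource, amount in per_vm.items()}
print(host_needs)   # {'vcpus': 20, 'ram_gb': 40, 'io_mbps': 500}
# A modern CPU may shrug at 20 vCPUs; the RAM and I/O columns are what bite.
```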

Take RAM, for example. While traditional workloads may use only a fraction of the available processor cycles, they can happily consume all of the physical memory they are given.

This is not only because operating systems tend to work the memory as hard as they can, but also because poorly written applications can insist on loading unnecessary functionality, or fail to de-allocate memory when it is no longer required.
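
A minimal sketch of the kind of sloppy pattern in question – a purely hypothetical application whose cache only ever grows, so memory is never handed back:

```python
# Hypothetical sketch of an application that never de-allocates: every
# request adds to a module-level cache from which nothing is ever evicted.
_cache = {}

def handle_request(key, payload):
    _cache[key] = payload   # grows without bound; no eviction, no cleanup
    return len(_cache)

for i in range(5):
    handle_request(f"req-{i}", b"x" * 1024)
print(len(_cache))   # 5, and climbing with every request the app serves
```

Run inside a virtual machine, a guest like this steadily inflates its working set and defeats any assumption the hypervisor makes about idle memory being reclaimable.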

Failing memory

You may not need ten times as much memory to run ten virtual machines, but you do need to think about how much you should be putting in each server. Some, but not all, virtualisation solutions allow memory to be over-committed – that is, virtual machines can collectively be promised more memory than is physically installed, on the assumption that they won’t all demand their full allocation at once.

You still need to size your physical RAM up front, however, particularly for private cloud environments where, in theory, you don’t know in advance what you will want to run where.
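
A rough sketch of that sizing exercise, assuming a given over-commitment ratio (the numbers are placeholders, not recommendations):

```python
# Hypothetical RAM sizing under memory over-commitment; all numbers are
# placeholder assumptions.
vm_count = 20
ram_per_vm_gb = 8            # memory promised to each VM
overcommit_ratio = 1.5       # assumes VMs demand ~2/3 of their allocation on average
hypervisor_overhead_gb = 8

physical_ram_gb = (vm_count * ram_per_vm_gb) / overcommit_ratio + hypervisor_overhead_gb
print(f"Install at least {physical_ram_gb:.0f} GB")   # ~115 GB in this example
# If every guest demands its full allocation at once, the host swaps - the
# over-commitment ratio is a bet, and head-room is the hedge.
```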

Get the processor and RAM right, and the next thing you need to think about is server I/O. Again, the calculation is simple: if you have ten computers, say, all running on the same box, what happens when they all want to transmit data or access storage at the same time?
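
Sketched as naive arithmetic – this assumes a perfectly fair share of a single link, which real hypervisor I/O schedulers only approximate:

```python
# Naive fair-share view of I/O contention. Assumes one shared link and
# perfectly fair scheduling, which real I/O schedulers only approximate.
link_mbps = 1000          # e.g. a single gigabit port
active_vms = 10

per_vm_mbps = link_mbps / active_vms
print(f"{per_vm_mbps:.0f} Mbit/s per VM when all ten transmit at once")   # 100
# A workload sized for a dedicated gigabit port suddenly sees a tenth of it.
```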

Server technologies have been advancing to meet the parallelism needs of multiple virtual machines – Intel’s 7500 chipset (codenamed Boxboro), for example, was designed with virtualisation in mind.

Message in a bottleneck

The server’s internal bus architecture is just the start of a chain of electronics that leads to disk-based storage, every stage of which needs to cope with the potentially increased throughput. Any link in the chain can become the bottleneck.
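
In effect, the sustainable end-to-end throughput is the minimum across the whole chain. A toy model, with made-up capacities standing in for real measurements:

```python
# Toy model: end-to-end throughput is capped by the weakest link in the chain.
# Capacities (MB/s) are made-up illustrations, not measurements.
chain = {
    "PCIe slot": 2000,
    "HBA": 800,
    "SAN fabric": 1600,
    "array front-end": 600,
    "disk spindles": 300,
}
bottleneck = min(chain, key=chain.get)
print(f"Bottleneck: {bottleneck} at {chain[bottleneck]} MB/s")
# Upgrading any other link buys nothing until the spindles are dealt with.
```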

So scale the servers right, then look at networking requirements, then storage. While network engineers might say it’s simply a case of deploying bigger boxes that can support gigabit Ethernet, few would deny the issues that emerge in the storage layer.

We will discuss storage in more detail in another article, but now let's look at backups as a simple, yet blindingly obvious, example of the challenges to be faced.

Most applications need to be backed up, as do operating systems, and indeed entire virtual machines. Aside from the fact that it would be a huge mistake to back up all virtual instances on a single server at the same time, you might quite easily end up backing up the same information twice, putting further pressure on I/O.
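
That “same information twice” problem is what deduplication addresses: hash each block of data and store any given content only once. A bare-bones sketch of the idea:

```python
# Bare-bones deduplication sketch: blocks are stored once, keyed by a
# content hash, so the OS files shared by every VM image aren't kept twice.
import hashlib

store = {}   # digest -> block

def backup_block(block: bytes) -> str:
    digest = hashlib.sha256(block).hexdigest()
    store.setdefault(digest, block)   # an identical block from VM #2 is a no-op
    return digest

backup_block(b"guest OS file common to every image")
backup_block(b"guest OS file common to every image")   # deduplicated
print(len(store))   # 1
```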

Intelligent design

In further articles we will also discuss management, policy, business alignment and all that jazz. For now the question is: given the complexity of today’s IT environments, the variations in size and scale of the applications we want to deploy, uncertainty about usage levels and so on, is it possible to define a generic private cloud that will cope with anything the organisation might throw at it?

The answer is, in principle, yes – but only if careful consideration, planning and design have been applied to all the links in the chain.

It is not just about having super-fast kit. Some pretty simple decisions can be made, such as locating data storage as close as possible to associated virtual machine storage, or defining a staggered backup policy that won’t bring a server down.
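
A staggered policy can be as simple as offsetting each virtual machine’s backup window. A minimal sketch, where the start time and window length are assumptions:

```python
# Minimal staggered backup schedule: offset each VM's window so the host
# never runs more than one backup at a time. Times and durations are assumptions.
from datetime import datetime, timedelta

vms = ["vm01", "vm02", "vm03", "vm04"]
start = datetime(2015, 1, 1, 1, 0)    # 01:00 on an arbitrary date
window = timedelta(minutes=45)

for i, vm in enumerate(vms):
    print(f"{vm}: backup starts at {start + i * window:%H:%M}")
# vm01 01:00, vm02 01:45, vm03 02:30, vm04 03:15 - one at a time, not all at once.
```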

However dynamic IT becomes, the private cloud can never be a magic bullet that overcomes poor architectural decision-making. ®
