Plan ahead to make virtualisation work

Head for a better life

The road to hell, they say, is paved with good intentions, and never more so than when it comes to virtualisation.

Many companies embark on virtualisation because they think it will make IT better, cheaper and faster. There is no denying that it helps initially, reducing costs through consolidating servers and making other areas such as rebuilds and backup easier.

But our research shows that unless steps are taken early on to manage the shift that accompanies virtualisation, the outcome can actually be a more complex and fragile infrastructure that doesn’t respond well to change.

A common result is that companies reach a natural plateau where their skills, tools and operational processes are overwhelmed by virtual machine sprawl and unpredictability.

With this in mind, we will focus on some of the key lessons gleaned from those who have already suffered the pain of virtualisation and emerged victorious.

Now’s your chance

For a start, it is important early on to change the footing on which IT projects are planned. In the world of physical systems, hardware and software are usually funded as part of a dedicated project budget.

Virtualisation breaks this dependency and is an opportunity to separate the underlying hardware from the end customer – but unless you take advantage of this shift you risk losing control. Rather than seizing the initiative to provide better services, you may find cost cutting is imposed.

One way to approach this was outlined to me by a CIO who foresaw that virtualisation was the ideal pretext to change the way IT provided services to the business.

Rather than just consolidating the company’s systems, passing the savings back and looking like a hero in the short term, he fought to work the anticipated cost reduction into a business case for investing in something more future-proof.

He proposed the creation of a new virtualised service pool containing servers, storage and networking, with licensing optimised for highly virtualised workloads. All of this was underpinned by integrated management and comprehensive monitoring and reporting.

This enabled him to go back to the application owners knowing what it cost to provide IT services, both physically and virtually.

Dive into the pool

Instead of force-fitting applications onto highly consolidated servers, the IT department gave service owners a choice: they could continue to fund their own projects and systems in the old manner using dedicated kit, or they could run them in the new virtual pool.

The cost difference between the two meant that unless there was some compelling counter argument, most services quickly moved to the new virtual infrastructure, which was more manageable and flexible than the old static one.

This highlights two other areas to consider when choosing the virtual infrastructure route. The first is that when things are shared, costs and service expectations can quickly become a political hot potato.

Complete visibility is needed into what is being delivered and what it costs to do so, so that the implications of various requests can be demonstrated to the business.

Our research has shown that putting in place at least a basic billing or cost-reporting capability can go a long way towards creating a much better experience all round.
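By way of illustration, here is a minimal sketch of the kind of showback report such a capability might produce. The VM inventory, owners and unit rates below are invented for the example; a real tool would pull allocations from the virtualisation platform's inventory APIs and agree rates with finance.

```python
# Minimal showback sketch: turn per-VM resource allocations into a monthly
# cost report per service owner. All names, rates and figures are
# illustrative placeholders, not real data.

from collections import defaultdict

# Hypothetical monthly unit rates for the shared virtual pool.
RATES = {"vcpu": 15.0, "ram_gb": 4.0, "storage_gb": 0.10}

# Hypothetical VM inventory: (service owner, resource allocation).
VMS = [
    ("finance",   {"vcpu": 4, "ram_gb": 16, "storage_gb": 200}),
    ("finance",   {"vcpu": 2, "ram_gb": 8,  "storage_gb": 100}),
    ("marketing", {"vcpu": 8, "ram_gb": 32, "storage_gb": 500}),
]

def monthly_cost(allocation):
    """Cost of one VM for the month, based on allocated (not used) resources."""
    return sum(RATES[resource] * amount for resource, amount in allocation.items())

def showback_report(vms):
    """Aggregate VM costs per service owner."""
    totals = defaultdict(float)
    for owner, allocation in vms:
        totals[owner] += monthly_cost(allocation)
    return dict(totals)

if __name__ == "__main__":
    for owner, total in sorted(showback_report(VMS).items()):
        print(f"{owner:<12} £{total:,.2f} per month")
```

Even a simple report like this, produced regularly, gives service owners a figure they can compare against the cost of dedicated kit.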

Shared troubles

The second point is that when it comes to service delivery in a virtualised infrastructure, nothing matters more than the experience at the point of consumption.

Whatever service level agreements are in place on individual components of the service, what really needs to be monitored and managed is what is actually being delivered to the business.

We touched on this briefly in the previous article in this series, but few companies have proactive service monitoring in place.

When a service is provided by dedicated physical systems, those systems can be sized reasonably effectively and don’t have to contend all the time for resources to meet targets.

But when things are shared and changed regularly, failure to take the potential impact into account can cause real issues for users or customers.
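A very basic form of such point-of-consumption monitoring is a synthetic probe that exercises the service the way a user would and flags when the experience drifts past an agreed target. The sketch below assumes a hypothetical service URL and response-time target; a real setup would run checks like this on a schedule and feed the results into an alerting or reporting tool.

```python
# Minimal sketch of a proactive, point-of-consumption check: probe the
# service the business actually consumes and flag when the response drifts
# past an agreed target. The URL and threshold are placeholders.

import time
import urllib.request

SERVICE_URL = "https://intranet.example.com/orders/health"  # hypothetical endpoint
RESPONSE_TARGET_SECONDS = 2.0                                # agreed service target

def probe(url, timeout=10.0):
    """Return (ok, elapsed_seconds) for one synthetic end-user request."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            ok = 200 <= response.status < 300
    except OSError:
        ok = False
    return ok, time.monotonic() - start

if __name__ == "__main__":
    ok, elapsed = probe(SERVICE_URL)
    if not ok:
        print(f"ALERT: service unavailable after {elapsed:.1f}s")
    elif elapsed > RESPONSE_TARGET_SECONDS:
        print(f"WARN: response took {elapsed:.1f}s, over the {RESPONSE_TARGET_SECONDS:.1f}s target")
    else:
        print(f"OK: responded in {elapsed:.1f}s")
```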

Without timely feedback, it is difficult to see the real service situation until the phone starts ringing. ®

Andrew Buss is service director at Freeform Dynamics
