Plan ahead to make virtualisation work

The road to hell, they say, is paved with good intentions, and never more so than when it comes to virtualisation.

Many companies embark on virtualisation because they think it will make IT better, cheaper and faster. There is no denying that it helps initially, reducing costs through consolidating servers and making other areas such as rebuilds and backup easier.

But our research shows that unless steps are taken early on to manage the shift that accompanies virtualisation, the outcome can be a more complex and fragile infrastructure that doesn’t respond well to change.

A common result is that companies reach a natural plateau where their skills, tools and operational processes are overwhelmed by virtual machine sprawl and unpredictability.

With this in mind, we will focus on some of the key lessons gleaned from those who have already suffered the pain of virtualisation and emerged victorious.

Now’s your chance

For a start, it is important early on to change the footing on which IT projects are planned. In the world of physical systems, hardware and software are usually funded as part of a dedicated project budget.

Virtualisation breaks this dependency and is an opportunity to separate the underlying hardware from the end customer – but unless you take advantage of this shift you risk losing control. Rather than seizing the initiative to provide better services, you may find cost cutting is imposed.

One way to approach this was outlined to me by a CIO who foresaw that virtualisation was the ideal pretext to change the way IT provided services to the business.

Rather than just consolidating the company’s systems, passing the savings back and looking like a hero in the short term, he fought to work the anticipated cost reduction into a business case for investing in something more future-proof.

He proposed the creation of a new virtualised service pool containing servers, storage and networking, with licensing optimised for highly virtualised workloads. All of this was underpinned by integrated management and comprehensive monitoring and reporting.

This enabled him to go back to the application owners knowing what it cost to provide IT services, both physically and virtually.

Dive into the pool

Instead of force-fitting applications onto highly consolidated servers, the IT department gave service owners a choice: they could continue to fund their own projects and systems in the old manner using dedicated kit, or they could run them in the new virtual pool.

The cost difference between the two meant that unless there was some compelling counter argument, most services quickly moved to the new virtual infrastructure, which was more manageable and flexible than the old static one.

This highlights two other areas to consider when choosing the virtual infrastructure route. The first is that when things are shared, costs and service expectations can quickly become a political hot potato.

Complete visibility into what is being delivered, and what it costs to deliver it, is needed when demonstrating to the business the implications of its various requests.

Our research has shown that putting in place at least a basic billing or cost-reporting capability can go a long way towards creating a much better experience all round.
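As an illustration of what such a basic cost-reporting or "showback" capability might involve, the sketch below apportions the cost of a shared virtual pool across services by their resource allocations. All rates, service names and figures are hypothetical assumptions, not taken from any real deployment:

```python
# Minimal showback sketch: apportion the monthly cost of a shared
# virtual pool across services according to their resource allocations.
# All rates and allocations below are illustrative assumptions.

VCPU_RATE_GBP = 15.0   # assumed monthly cost per vCPU
RAM_RATE_GBP = 5.0     # assumed monthly cost per GB of RAM
DISK_RATE_GBP = 0.10   # assumed monthly cost per GB of storage

def monthly_cost(vcpus, ram_gb, disk_gb):
    """Return the showback charge for one service's allocation."""
    return (vcpus * VCPU_RATE_GBP
            + ram_gb * RAM_RATE_GBP
            + disk_gb * DISK_RATE_GBP)

# Hypothetical service allocations: (vCPUs, RAM in GB, disk in GB).
services = {
    "intranet":   (2, 8, 200),
    "crm":        (8, 32, 1000),
    "build-farm": (16, 64, 500),
}

report = {name: monthly_cost(*alloc) for name, alloc in services.items()}
for name, cost in sorted(report.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} £{cost:8.2f}/month")
```

Even a crude allocation-based report like this gives application owners a number to react to; a real implementation would meter actual consumption rather than static allocations.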

Shared troubles

The second point is that when it comes to service delivery in a virtualised infrastructure, nothing matters more than the experience at the point of consumption.

Whatever service level agreements are in place on individual components of the service, what really needs to be monitored and managed is what is actually being delivered to the business.
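To make the point concrete, here is a minimal sketch of monitoring at the point of consumption: judging the service by end-to-end response times rather than per-component metrics. The 95-per-cent-under-two-seconds target and the sample timings are assumed figures for illustration only:

```python
# Sketch: evaluate a service by what users actually experience.
# Given end-to-end response-time samples (in seconds), check them
# against an assumed target: 95% of requests complete within 2s.

TARGET_SECONDS = 2.0      # assumed end-user response-time target
TARGET_PERCENTILE = 0.95  # assumed share of requests that must meet it

def sla_met(samples, target=TARGET_SECONDS, pct=TARGET_PERCENTILE):
    """Return True if enough samples beat the target response time."""
    if not samples:
        return False  # no data is a monitoring failure, not a pass
    within = sum(1 for s in samples if s <= target)
    return within / len(samples) >= pct

# Illustrative samples: mostly fast, one slow outlier caused by
# contention for shared resources in the virtual pool.
measured = [0.4, 0.6, 0.5, 1.9, 0.7, 3.5, 0.8, 0.6, 0.5, 0.7]
print("SLA met:", sla_met(measured))
```

Note that every individual component here might be hitting its own SLA while the end-to-end check still fails; that gap is exactly what proactive service monitoring is meant to expose before users notice.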

We touched on this briefly in the previous article in this series, but few companies have proactive service monitoring in place.

When a service is provided by dedicated physical systems, it can be sized reasonably effectively and doesn’t have to contend constantly for the resources needed to meet its targets.

But when things are shared and changed regularly, failure to take the potential impact into account can cause real issues for users or customers.

Without timely feedback, it is difficult to see the real service situation until the phone starts ringing. ®

Andrew Buss is service director at Freeform Dynamics
