
Data-centre procurement ain't what it used to be

Adjust your funding models

The cost of supplying IT services inside businesses has never been more visible, with much marketing attention focusing on the question “Why aren’t you using cloud-based services instead of running your own systems?”

More than ever, IT departments are having to justify their funding and show they are doing a good job. Just how will financing and budget models need to change in the coming years as business pressures on IT services continue to ramp up?

For the past two or three decades the bulk of major IT infrastructure spend has been directed at new or upgraded applications, resulting in data centres filling up with servers, each having its own storage system, operating in isolation and running a single piece of business software.

Degrees of separation

Even as IT technology has developed to allow servers to run multiple virtualised and shared applications and storage platforms, many organisations have continued to operate their computer systems as a series of separate islands.

In addition to imposing the straitjacket of narrowly defined budgets, business managers are often reluctant to allow server and storage hardware allocated to their department or division to be shared with their peers.

The result is that many organisations are unable to fully optimise their resources or operate anywhere close to full capacity.

The inconsistent systems landscape is difficult to manage, with its reliance on labour-intensive administration and little automation of routine processes. The associated IT procurement tends to be complex and time consuming.

Most data centres have been designed to deliver IT services that are expected to run for several years. Coupled with stringent change controls and testing, this creates situations in which systems have been installed before much real service experience has been gathered. Many have spare capacity built in from day one to provide headroom for growth.

Run to keep up

Today, however, business needs may change very rapidly, placing great pressure on IT to respond to new requests with little time to plan. On top of this, organisations are operating under financial conditions and external oversight that make it difficult to spend money on resources that may not be used for months or even years.

Figure 1 below summarises the major drivers reported as important in the evolution of IT service provision and data-centre modernisation.

Figure 1

Modern businesses' need for IT to deliver services rapidly to an expanding range of devices and users will result in fundamental changes to the way solutions are procured and operated.

The scope of these changes will over time extend to the core technologies in the data centre and the management tools used to keep things operational.

Let’s look in more detail at some of the specifics.

Resource optimisation: x86/x64 Servers

The systematic overprovisioning of IT systems and their resulting under-utilisation has been one of the main drivers behind x86 server virtualisation.

Many are now recognising the value of creating pools of resources that can operate much closer to their potential capacity. Private cloud architectures are relevant here.
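
By way of illustration, and using entirely made-up figures rather than anything from a survey, the arithmetic of consolidation looks something like this:

```python
import math

# Hypothetical figures for illustration only: 100 standalone servers
# each averaging 8% CPU utilisation, consolidated onto virtualised
# hosts run at a 60% target utilisation with 25% headroom for peaks.
standalone_servers = 100
avg_utilisation = 0.08          # average load per standalone server
target_utilisation = 0.60       # sensible ceiling for a shared host
headroom = 0.25                 # spare capacity kept for growth and peaks

total_load = standalone_servers * avg_utilisation
usable_per_host = target_utilisation * (1 - headroom)
hosts_needed = math.ceil(total_load / usable_per_host)

print(f"Total load: {total_load:.1f} server-equivalents")
print(f"Hosts needed after consolidation: {hosts_needed}")
# With these assumptions, roughly 18 shared hosts replace 100 servers.
```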

Server virtualisation has already led many organisations to change the way they procure systems. Early projects were often based on their ability to optimise hardware and software acquisition costs, but virtualisation can also yield many other benefits, such as improved systems and service resilience.

Such systems often also deliver benefits that are harder to quantify, such as lower power consumption, higher staff productivity and reduced software licensing costs.

The fly in the ointment, however, is that obtaining investment for shared infrastructure can be a challenge. It takes a lot more people to say “yes” when the proposal on the table is for something that will become a corporate asset rather than being owned and accounted for in a single department or division.

Storage

Organisations are facing the considerable challenge of storing ever greater volumes of data while providing access to it from an expanding portfolio of devices at all times.

Like server infrastructure, until recently most storage was acquired to support a specific business requirement or application. This has resulted in data centres frequently housing separate islands of under-used storage systems.

To address both the rapid growth of storage and the rising cost of holding data, organisations are increasingly looking to virtualisation and tools that throttle storage growth, such as data deduplication, archiving and storage tiering.
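
To give a flavour of what one of these techniques does under the covers, here is a deliberately simplified sketch of block-level deduplication, in which identical chunks of data are stored only once and referenced by their hash. Real products use variable-length chunking, compression and a great deal more cleverness, so treat this purely as an illustration:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; real systems often use variable-length chunking

def deduplicate(data: bytes):
    """Store each unique chunk once, keyed by its SHA-256 hash."""
    store = {}      # hash -> chunk bytes (stored once)
    recipe = []     # ordered list of hashes needed to rebuild the data
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

# Example: highly repetitive data dedupes very well
data = b"A" * 40_960 + b"B" * 40_960
store, recipe = deduplicate(data)
print(f"Logical size:  {len(data)} bytes")
print(f"Physical size: {sum(len(c) for c in store.values())} bytes")
# Here two unique 4 KB chunks stand in for ~80 KB of logical data.
```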

But again, procuring storage capacity that is shared among various user groups poses a challenge to typical project or cost-centre funding models.

Software

As IT infrastructure becomes more flexible through server and storage virtualisation, software licensing models will also have to evolve if maximum business value is to be delivered.

This will cause many headaches as IT professionals and vendors search for licensing terms and conditions able to cater for both rapid growth and contraction in software usage.

Some have promoted pay-per-use models as an obvious solution: the organisation pays for its software usage in line with consumption.

But this approach is notoriously difficult to budget for. Most businesses operate on carefully planned budgets fixed in advance, making variable charging difficult to manage.
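
A back-of-the-envelope comparison, using invented figures, shows why finance departments wince at metered software charges:

```python
# Hypothetical comparison of a fixed annual licence against pay-per-use,
# purely to illustrate why variable charging is hard to budget for.
fixed_annual_fee = 120_000  # flat enterprise licence, known in advance

# Metered monthly usage (in whatever units the vendor counts)
monthly_usage = [800, 820, 790, 1500, 2100, 900,
                 850, 870, 2600, 950, 910, 880]
price_per_unit = 8.00

monthly_bills = [u * price_per_unit for u in monthly_usage]
annual_metered = sum(monthly_bills)

print(f"Fixed licence:  £{fixed_annual_fee:,.0f} (predictable)")
print(f"Metered total:  £{annual_metered:,.0f}")
print(f"Cheapest month: £{min(monthly_bills):,.0f}")
print(f"Dearest month:  £{max(monthly_bills):,.0f}")
# The metered total undercuts the flat fee in this example, but a roughly
# threefold swing between cheapest and dearest months is what breaks
# budgets that were fixed twelve months in advance.
```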

The already complex world of software licence management, with its often unwieldy terms and conditions, is bound to become even more difficult to navigate in the years ahead. But it offers great scope for cost reductions.

Few organisations use the most cost-effective software licences available. Nor do they have anything like an accurate picture of the software they have deployed, how heavily it is used, or what support and maintenance charges are still being paid for software that is no longer used.

Systems integration

The need to procure servers, storage, networking and software that can be modified rapidly as end-user requirements change is already causing some organisations to investigate different ways of acquiring IT resources.

Systems integration and optimisation are areas where IT professionals expend considerable effort, so the question arises: is it better to build systems from distinct pools of servers, storage and networking, or easier and faster to buy pre-configured solutions with all the elements already assembled in the box?

The advantage of acquiring pre-built solutions such as IBM PureSystems, HP Converged Infrastructure, or systems from VCE, Dell and other major suppliers is that many of the basic interconnect challenges are removed.

More importantly, such systems usually come with management tools designed to allow the system to be administered from a single console, potentially by a team of IT generalists rather than storage, server and networking specialists.

The issue for some is that such solutions are relatively large and designed to support multiple workloads. This can, again, make it difficult to procure such systems with project-based funding.

Budgets and funding

At least three possible approaches to addressing the procurement and budgeting problem are commonly encountered.

The first is to buy small systems through established project funding models and expand them as additional projects come up.

The second is to use established project budgets to acquire a larger system than necessary to support the first project, potentially through vendor or external financing plans rather than a straight cash purchase.

The third is to completely revamp business and IT financial relationships and allow all IT spending to be considered at an infrastructure level with enhanced reporting on resource usage.

Clearly each of these options has its advantages and drawbacks. One major hurdle concerns the fact that comparatively few organisations seek to acquire their IT systems via any other route than outright upfront purchases (see figure 2).

Figure 2

The fact that the majority of IT equipment is bought outright, with any form of financing very much the exception, is yet another illustration of the grip the project model of funding holds.

If shared IT resource pools are to become more widely accepted, it may well be that IT reporting will need to change significantly.

Although many organisations consistently report strong resistance to the adoption of any form of chargeback or even showback reporting, it is hard to see how some form of reporting on each business unit's usage of shared IT infrastructure can be avoided.
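
Showback need not be elaborate. A sketch along the following lines, with invented department names and internal tariffs, apportions the cost of a shared pool according to recorded usage without actually billing anybody:

```python
from collections import defaultdict

# Hypothetical monthly usage records from a shared resource pool:
# (business unit, vCPU-hours consumed, GB-months of storage held)
usage_records = [
    ("Sales",     12_000,   800),
    ("Finance",    4_500,   300),
    ("Marketing",  7_200, 1_400),
    ("Sales",      3_300,   200),
]

# Invented internal tariffs, used only to apportion cost, not to bill
RATE_VCPU_HOUR = 0.04   # £ per vCPU-hour
RATE_GB_MONTH  = 0.02   # £ per GB-month

costs = defaultdict(float)
for unit, vcpu_hours, gb_months in usage_records:
    costs[unit] += vcpu_hours * RATE_VCPU_HOUR + gb_months * RATE_GB_MONTH

total = sum(costs.values())
for unit, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
    print(f"{unit:<10} £{cost:>8,.2f}  ({cost / total:.0%} of shared pool)")
```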

It’s good to share

The efficient delivery of IT services will depend more and more on the use of shared pools of IT resources, running inside or outside the data centre.

Procuring such resources will require fundamental changes to the way IT is funded, and ultimately will affect the entire relationship between IT and the business. Funding models and reporting processes will require significant modification.

Systems, people and process change within IT will be reflected in the way systems and services are paid for and acquired.

The time for wholesale change is not far away, and the transition in IT working practices will be difficult. But the business benefits available through the flexible and efficient use of shared IT resources are such that change is inevitable.

The biggest challenge of all is that there is not much established best practice available to guide organisations on the adjustments they need to make to funding and governance models.

We might well be heading for an era of Darwinian evolutionary change, with many things being tried, some successfully, others less so. ®

  • Tony Lock is programme director at Freeform Dynamics
