Business bureaucracy vs dynamic IT
Choosing the right path
Workshop According to many pundits, here’s the plan for the next generation data centre: we can go to a dynamic infrastructure, with on-demand applications running in our private cloud, and an elastic cloud out there waiting for our applications if we run out of capacity. Sounds too good to be true? It is easy to get caught up in the hype, so getting to grips with what is really happening can help in working out the right way to proceed and what some of the business implications are.
First things first – it’s important to recognise the key role virtualisation plays in the vision of the flexible “server-scape” of the future, acting as a platform enabler on top of which much can be added. Virtualisation is being embraced by most, if not all, IT departments, but this usage is mainly devoted to consolidating workloads rather than dynamic IT. Some companies have made the leap from consolidation to resource pools or automated workload migration, resulting in continuous optimisation, but the numbers remain small. Looking ahead, according to the numbers we’re currently crunching as part of this programme, this picture may not change all that much. Many companies feel that they have essentially completed their virtualisation projects, and only a minority say that they will see a significant increase in usage over the next couple of years.
Rather than virtualisation being seen as a step on the way towards dynamic IT, there may be a fork in the path, with a firm default based on using virtualisation for server consolidation, rather than the basis for dynamic IT models. Given that dynamic IT is being touted as the best thing since sliced bread, why is it that so few are embracing the vision?
One cause of this conservatism is the overall relationship between IT and other areas of the business – in particular, how IT deals and interacts with the “back office” functions, for example those handling the procurement and accountancy capabilities that enable things like cross-charging to happen. These functions are as much a determinant of whether dynamic IT is a suitable strategy for the business as the technical readiness of the IT itself.
When we look at the companies that have moved forward with virtualisation beyond server consolidation, and then look at their approach to procurement and accounting policies, some interesting results emerge. We can group companies into three buckets for convenience, based on whether they have adapted their procurement and accounting policies and processes to cope with running a flexible server infrastructure. These are those that have not yet started to do so, those that have some level of partial integration, and those that have comprehensive integration already in place.
Two points of interest emerge. The first is that organisations that have changed their procurement policies are more likely to head down the dynamic IT path than those that have not. While this makes sense, the second point is that fewer than a quarter of the organisations we researched have done so. “Why is this important?” you may ask. The answer is that, unless an organisation changes its behaviour across the board, things will be done in the same vein as they have always been done: there is no halfway house. Where companies have good integration in place, they are much more likely to embrace all aspects of virtualisation. These companies are far ahead not only in consolidation efforts, but also in the adoption of virtual server pools and the use of continuous optimisation. It is the combined ability to both procure and bill for IT equipment and services flexibly that opens the doors of the business to dynamic IT.
If virtualisation is a stepping stone to the cloud, the picture there is similar: we know from a number of research studies that most companies are still quite resistant to the use of external cloud providers, with the vast majority having no plans or activities in this area. But where there is partial or full integration of dynamic IT principles with procurement and accounting practice, the level of resistance is lower, and companies are more open to using externally sourced ‘elastic’ services rather than purchasing additional servers outright.
Coming back to the point of the discussion, the question is one of direction. Despite the rhetoric you are hearing, your organisation may see virtualisation as a means to a tactical end, which effectively rules out the longer path to dynamic IT anyway. But if you are looking beyond server consolidation through virtualisation, you will need to be prepared to tackle the inevitable bottlenecks.
It is clear that without fully integrating with the business on procurement and billing, moving to dynamic IT will be a slow and onerous process that undermines its potential dividends. In this case, the best approach may well be to focus on completing the consolidation effort and move on to other pressing priorities. But if dynamic IT is the plan for the future of IT in the company, getting the integration with other parts of the business right in parallel will be an essential element of success. ®
The other resistance...
to dynamic IT (in terms of being able to fire up extra VMs quickly to cover load spikes, rapid failover, and so on) is simply the difficulty of it, as well as apps that simply wouldn't benefit. If the IT department doesn't have political problems with it but simply doesn't have apps that would benefit, then it won't look into it.
Failover -- most of these cloud systems rely on running applications that support failover internally, and then either running extra copies "idling" or firing up extra VMs if an existing VM fails. This isn't nice and transparent like IBM mainframes in a parallel sysplex (a setup of two or possibly more mainframes set up for failover). A sysplex can actually detect a fault (including a CPU mis-executing instructions -- each CPU has two parallel pipelines with comparators that fail the CPU when the two pipelines disagree), stop the VM at that exact clock cycle, and migrate the whole VM to another CPU on the same machine, or to another mainframe in the sysplex, transparently. Compared to that, having VMs detect faults (or some external monitor detect that other VMs have crashed), making sure there isn't a half-completed transaction, starting up another VM, and having it take over that transaction is complex and error-prone.
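To make the comparison concrete, the external-watchdog approach being criticised here can be sketched in a few lines. This is purely illustrative (Python for convenience): the `VM`, `Watchdog`, and `launch_replacement` names are hypothetical, and a real monitor would poll over the network and drive a hypervisor API rather than flip flags in memory.

```python
# Sketch of the "external monitor + respawn" failover pattern: a watchdog
# polls each VM's health and launches a replacement when one has crashed.

class VM:
    """Hypothetical stand-in for a virtual machine with a health endpoint."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def health_check(self):
        return self.healthy


class Watchdog:
    def __init__(self, vms, launch_replacement):
        self.vms = list(vms)
        self.launch_replacement = launch_replacement

    def poll_once(self):
        """Check every VM; replace any that fail their health check."""
        for i, vm in enumerate(self.vms):
            if not vm.health_check():
                # Note: nothing here guards against a half-completed
                # transaction on the dead VM -- exactly the gap the
                # comment points out versus a parallel sysplex, where
                # the VM is stopped and migrated mid-execution.
                self.vms[i] = self.launch_replacement(vm)


# Usage: simulate one VM crashing and being replaced.
vms = [VM("app-1"), VM("app-2")]
dog = Watchdog(vms, launch_replacement=lambda dead: VM(dead.name + "-respawn"))

vms[0].healthy = False          # simulate a crash
dog.poll_once()
print([vm.name for vm in dog.vms])  # ['app-1-respawn', 'app-2']
```

Even this toy version shows where the complexity creeps in: the watchdog only sees the crash after the fact, so any in-flight work on the dead VM is lost unless the application itself can recover it.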
For that matter, I think some of these IT departments (particularly ones using VMs to consolidate machines) will have server apps that are designed to run as a single copy, not on a cluster of machines. I bet quite a few server apps just don't bother with inter-process and inter-machine communication, locking, and so on. So all the failover (unless it's clean like IBM's), and all the capability to fire off extra VMs for capacity, will not do a thing for them.
Yeah you missed...
> organisations that have changed their procurement policies are more likely to head down the dynamic IT path than those who have not
Some charge by service, some don't.
> most companies are still quite resistant to the use of external cloud
Because service companies can't be relied upon: http://www.theregister.co.uk/2010/09/10/microsoft_bpos_apology/
Who writes this stuff?
> virtualisation is a stepping stone to the cloud
(that splashing sound was someone stepping off the stepping stone, falling through the cloud and ending up in the river).
it gets even better:
> unless an organisation changes its behaviour across the board, then things will be done in the same vein as they have always been done
What all this verbiage boils down to is that some IT operations are using virtualisation. Some of those use it to host lots of old servers on fewer, larger boxes and some use it to provide flexibility when they need extra capacity.
Have I missed anything?