Virtualisation in two years’ time

Anyone have GPS and a good road map?

Lab It is always tricky forecasting the future, and never more so than in IT. Even in an area like “virtualisation” it is difficult to give a black and white picture of just what your IT will look like as the technology moves beyond the pilots, specific workloads and test/dev environments that form its strongholds today.

So, where to begin? Let’s start with the best known area of virtualisation tools, namely the x86 server space. We know from recent Reg research that virtualisation is now the most influential factor in the architecture and evolution of server estates. Another report shows that virtualisation is now considered suitable for a wide range of production uses and is beginning to be used to support an increasing array of IT services.

From this we can safely say that, love it or loathe it, virtualisation will become an inevitable element of the mainstream IT landscape. Over the course of the next couple of years more and more business applications will be run as virtualised systems, with the promise of improving overall application availability and reducing downtime.

As real-world experience is garnered here, we can expect to see, very much in line with vendor expectations and marketing, a slow movement towards the creation of x86 server pools and maybe even a more generic “internal cloud” approach.

This goal will not be achieved without some casualties, however – and indeed, given the realities of how IT exists today, neither is it inevitable for all organisations. It will require significant adaptation of corporate and departmental politics, for example. It will require budget reform, which will in turn need IT leaders who can both see the big picture and communicate it well (we’re not ruling out the use of big sticks either).

Will x86 server virtualisation make things any simpler? We doubt it. Already we’re seeing the signs of virtual server proliferation, for example. And a common theme in IT is that better mechanisms might free up resources in the short term, but those resources can very quickly get clogged up again.

As long as IT remains a finite commodity, it will be used to the maximum, with or without virtualisation. Couple this with the fact that not every organisation is top of the class when it comes to IT management skills, and you get an idea of the challenges to come.

Moving on from x86 servers, we can also expect to see desktop virtualisation, in its many forms, take root. Away from the existing Citrix / Windows terminal server use cases, research suggests that desktop virtualisation will see wider adoption in businesses of all sizes. The same research indicates many perceived benefits in this area.

Indeed, there are signs that desktop virtualisation may have a significant role to play going forward. It is our belief that desktop virtualisation offers a politically acceptable way of bringing the last almost completely unmanaged, and therefore expensive, part of the IT infrastructure under consolidated, audited control.

Benefits in terms of lower costs and higher levels of security and availability will help bring desktop virtualisation to many desktops over the next few years, and many users may not even notice it happening. However, considerable effort to position solutions and to educate IT professionals on effective ways of deploying the different approaches will be required if desktop virtualisation is to become commonplace.

Traditionally, such locked-down approaches have been less favoured than the more flexible, one-physical-PC-per-user model. But it may be (we speculate here) that organisations already comfortable with server virtualisation can build on that platform to offer virtualised desktops to select groups of users. Desktop virtualisation need not be a big bang.

Moving on to the costly matter of storage and its management, we expect storage virtualisation to gain a foothold, once again in “silos”, before anything like flexible storage pools emerge in which all hardware platforms are “hidden” from view. But again there is much education to be undertaken, as well as a need for some work on encouraging vendors to open up their outlook on interoperability, particularly if storage and server virtualisation models are to work in harmony.

Virtualisation offers a great deal, but we know from your experiences that these are still early days. The ‘vision’ of virtualisation is to break the bond between physical resources and logical workloads, such that the latter can exist unfettered and secure. It’s a nice dream – and we have no problem with VMs running on whichever platform is most appropriate to the job in hand, from mobile phones to mainframes, from desktop clients to clouds.

The real question, though, is one of adoption. In this business, things rarely follow the dream; rather, they get subsumed into the continually evolving infrastructure, wherever it is sourced. However, every now and again a technology comes along and takes us by surprise.

Whether you believe we’ll all be running VMs on our signet rings, beaming our cloud-based interfaces directly onto our retinas, or just running business as usual with a bit more flexibility on what runs where, do let us know in the comments section below.
