Original URL: https://www.theregister.com/2008/04/16/virtualization_perspectives/

Virtualization: nothing new on the Sun or the mainframe

Novocaine for the corporate brain?

By Dale Vile, Freeform Dynamics

Posted in On-Prem, 16th April 2008 11:49 GMT

The Freeform perspective

IT industry veterans are often heard complaining that the newfangled stuff youngsters are raving about today is just a rehash or repackaging of old familiar things that have been around for years.

As an old timer myself, it's something I can definitely relate to, and one of the areas that evokes this kind of feeling is virtualization.

If you listen to the VMware disciples, and the recent buzz report tells us there are quite a lot of these, you would get the impression that partitioning of servers into virtual machines is a revolutionary idea contributed to mankind by the latest generation of whizz kids and entrepreneurs.

Meanwhile, those who have been involved with mainframes and other "traditional" server environments are left scratching their heads and wondering how the concepts behind this current craze are any different to the facilities they have been taking for granted for the last 20 or 30 years.

It's an observation that is difficult to argue with if you focus purely on capability, and if you were so inclined you could probably also make the case that server partitioning in a traditional platform environment is significantly more mature than the latest incarnation everyone is talking about today. There is a big difference, however, that accounts for the phenomenal growth in virtualization-related activity over the past few years: the nature of what is being virtualized, namely commodity x86 servers.

Why is this important? Well, because the problem being solved is different, at least in the first wave of mass virtualization activity we are seeing. Whereas virtualization in a traditional server environment was historically concerned with the planned and premeditated partitioning of big boxes to optimise the use of powerful and expensive assets, x86 virtualization has mostly been concerned with cleaning up the fragmented, sprawling mess of under-utilised commodity kit that has accumulated over the years as a new server was provisioned for each new application brought on stream. To put it another way, x86 virtualization has been very much akin to a painkiller, and as most organisations of any size were suffering, its use just exploded.
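To give a rough feel for the consolidation arithmetic behind that "painkiller" effect, the sketch below packs a set of lightly loaded servers onto as few virtualised hosts as possible. The utilisation figures, the 70 per cent headroom target and the first-fit heuristic are illustrative assumptions, not anything drawn from the research itself.

```python
# Illustrative consolidation estimate: pack under-utilised servers onto
# fewer hosts using a simple first-fit-decreasing heuristic.
# All figures here are hypothetical examples.

# Average CPU utilisation of each existing physical server,
# expressed as a fraction of one consolidation host's capacity.
server_loads = [0.08, 0.12, 0.05, 0.20, 0.10, 0.15, 0.07, 0.09, 0.11, 0.06]

HEADROOM = 0.70  # don't load any consolidation host beyond 70 per cent

hosts = []  # total load packed onto each consolidation host so far

for load in sorted(server_loads, reverse=True):
    for i, used in enumerate(hosts):
        if used + load <= HEADROOM:
            hosts[i] += load   # fits on an existing host
            break
    else:
        hosts.append(load)     # no room anywhere; bring up another host

print(f"{len(server_loads)} physical servers -> {len(hosts)} virtualised hosts")
```

With those example numbers, ten barely loaded boxes collapse onto two hosts, which is the kind of ratio that made the consolidation case so easy to make.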

The end result is that virtualization of x86 servers has, in a very short time, overtaken the virtualization of other platforms, to the point where we can now genuinely consider it a mainstream technology.

This was apparent from recent feedback gathered from the Reg Technology Panel, which is summarised here. It is also clear from this research that those adopting virtualization solutions in an x86 environment are happily using the technology for business critical applications.

This last point brings us on to where virtualization is going and the changes we might see looking forward. There will come a point, some time over the next two to three years, when most of the server consolidation activity that has driven uptake so far will have played out, as the historical fragmentation and wasted resource will largely have been dealt with. So what happens then?

One development we are already seeing, as large multi-way x86 boxes become more powerful, is a reinstatement of the traditional planned, premeditated approach to server partitioning we referred to earlier. All of that "old school" experience then starts to become important, which could be fun to watch as the veterans say: "Step aside son, and let me show you how we grown-ups were doing it before you were born". Well, maybe not, but it's a nice thought for us old timers, and it does highlight that managing large central systems is a different game to managing small-footprint commodity environments.

The real game that will emerge, however, is leveraging virtualization in the context of the drive towards more dynamic and flexible system landscapes. The ability to build, clone, rapidly deploy and freely move images of virtual machines on demand opens up lots of possibilities for building truly responsive systems that can cope well with both fluctuating resource demands and frequently changing business requirements.
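As a concrete flavour of what "build, clone and freely move" can look like in practice, here is a minimal sketch using the libvirt Python bindings against a hypothetical KVM setup. The host URIs and the domain name "webapp01" are made up, and a real clone would need rather more work than shown; the point is simply that these lifecycle operations reduce to a handful of API calls.

```python
# Minimal sketch of VM lifecycle operations via the libvirt Python bindings.
# Hosts, domain name and lack of error handling are illustrative assumptions.
import libvirt

src = libvirt.open('qemu:///system')          # connect to the local hypervisor
dom = src.lookupByName('webapp01')            # an existing virtual machine

# "Clone": capture the machine's definition as XML. A real clone would also
# need a new name and UUID in that XML, plus copies of its disk images,
# before being registered with src.defineXML() and started with create().
xml = dom.XMLDesc(0)

# "Freely move": live-migrate the running VM to another host on demand.
dst = libvirt.open('qemu+ssh://host2.example.com/system')
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
```

Wrap operations like these in scheduling and monitoring logic and you have the beginnings of the dynamic, demand-driven system landscape described above.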

In the meantime, if you are interested in learning more about the state of play today and some of the drivers for current activity in the virtualization space as a whole, then check out the research report, which can be downloaded here. ®