Virtualization: nothing new on the Sun or the mainframe
Novocaine for the corporate brain?
The Freeform perspective

IT industry veterans are often heard complaining that the newfangled stuff youngsters are raving about today is just a rehash or repackaging of old familiar things that have been around for years.
As an old timer myself, I can definitely relate to this, and one of the areas that evokes this kind of feeling is virtualization.
If you listen to the VMware disciples – and the recent buzz report tells us there are quite a lot of them – you would get the impression that partitioning servers into virtual machines is a revolutionary idea contributed to mankind by the latest generation of whizz kids and entrepreneurs.
Meanwhile, those that have been involved with mainframes and other "traditional" server environments are left scratching their heads and wondering how the concepts behind this current craze are any different to the facilities they have been taking for granted for the last 20 or 30 years.
It's an observation that is difficult to argue with if you focus just on capability, and indeed, if you were that way inclined, you could probably also make the case that server partitioning in a traditional platform environment is significantly more mature than the latest incarnation everyone is talking about today. There is a big difference, however, that is responsible for the phenomenal growth in virtualization-related activity over the past few years – and that's the nature of what's being virtualized: commodity x86 servers.
Why is this important? Well, because the problem being solved is different, at least in the first wave of mass virtualization activity we are seeing. Whereas virtualization in a traditional server environment was historically concerned with the planned and premeditated partitioning of big boxes to optimise the use of powerful and expensive assets, x86 virtualization has mostly been concerned with cleaning up the fragmented, sprawling mess of under-utilised commodity kit that has accumulated over the years as a new server was provisioned for each new application brought on stream. To put it another way, x86 virtualization has been very much akin to a painkiller, and as most organisations of any size were suffering, its use just exploded.
The end result is that virtualization of x86 servers has in a very short time accelerated beyond the virtualization of other platforms, to the point where we can now genuinely consider it a mainstream technology.
This was apparent from recent feedback gathered from the Reg Technology Panel, which is summarised here. It is also clear from this research that those adopting virtualization solutions in an x86 environment are happily using the technology for business critical applications.
This last point brings us to where virtualization is going and the changes we might see looking forward. There will come a point, some time over the next two to three years, when most of the server consolidation activity that has driven uptake so far will have played out, as the historical fragmentation and wasted resources will largely have been dealt with. So what happens then?
One development we are already seeing, as large multi-way x86 boxes become more powerful, is a reinstatement of the traditional planned, premeditated approach to server partitioning we referred to earlier. All of that "old school" experience then starts to become important, which could be fun to watch as the veterans say: "Step aside son, and let me show you how we grown-ups were doing it before you were born." Well, maybe not, but it's a nice thought for us old timers, and it does highlight that managing large central systems is a different game from managing small footprint commodity environments.
The real game that will emerge, however, is leveraging virtualization in the context of the drive towards more dynamic and flexible system landscapes. The ability to build, clone, rapidly deploy and freely move images of virtual machines on demand opens up lots of possibilities for building truly responsive systems that can cope well with both fluctuating resource demands and frequently changing business requirements.
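By way of a sketch rather than a recipe, here is roughly what "build, deploy and freely move" looks like through the open source libvirt Python bindings driving KVM hosts. The guest name web01, the host URIs and the XML file are invented for illustration; VMware and the other hypervisor vendors expose equivalent operations through their own APIs.

    import libvirt

    # Connect to the hypervisor on the local machine (KVM in this sketch)
    src = libvirt.open("qemu:///system")

    # "Build and rapidly deploy": define a new guest from an XML
    # description (CPUs, memory, disks and so on) and power it on
    domain_xml = open("web01.xml").read()
    dom = src.defineXML(domain_xml)
    dom.create()

    # "Freely move": live-migrate the running guest to another host
    # without shutting it down
    dst = libvirt.open("qemu+ssh://host2/system")
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

It is exactly this kind of scriptability that turns a pile of virtual machine images into something an automated provisioning system can shuffle around in response to demand.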
In the meantime, if you are interested in learning more about the state of play today and some of the drivers for current activity in the virtualization space as a whole, then check out the research report, which can be downloaded here. ®
@Novocaine for the corporate brain?
Please, update your information!! Novocaine is so yesterday. Lanacane at the very least. Even my butcher of a fangman used Lanacane to extract some teeth (and the subsequent extraction of a hefty wad from the wallet/credit card)!! The dentistry may be painless but the subsequent extraction wasn't!!
Quality of egg basket carrying people - the real deal
I agree with a lot of this article, and that virtualisation really is old hat. The difference is that it is now available on commodity hardware.
With all systems there is a small difference, say 20 per cent, which gives the server hardware leverage and a competitive edge. But this expensive 20 per cent of extra reliability, due to special h/w such as memory mirroring and dual internal crossbars, gives 80 per cent of the value. A badly managed mainframe with unskilled operators has the availability of a desktop PC. A high end Unix server now has all the RAS features of a mainframe, e.g. RAM mirroring and instruction retry, but you need good people to manage those high end Unix boxes. Mainframes and Unix servers have been virtualising and consolidating small systems – doing what VMware does – for a long time. Aren't the VMware developers old Unix/mainframe people? The equation is h/w + people skills + virtualisation s/w.
Also, as all old timers know, virtualisation has a cost: the overhead of the virtualisation layer. We should not forget this.
So it is the quality of the support staff that makes systems reliable. Outsourcers manage this by trying never to touch or change a system. The trick is to be able to change/upgrade systems and add new apps while maintaining availability. Datacentre grade Unix servers allow hot swap and dynamic changes, which is ideal for a changing virtualisation environment, along with hardware to cope with failure of chips, memory, I/O etc. So very little risk there. I do not see this available yet on commodity kit. Why do we not have it on x86? Because these extra features cost money. You get what you pay for.
So virtualisation on x86 is bleeding edge, but put all your print servers on one x64 box and you have a single point of failure, i.e. risk. Can't print anymore? Save the trees – this is green IT. Virtualise on commodity hardware with commodity low-paid, unskilled staff and you are treading a fine line: all eggs in one basket, with people not used to carrying eggs.
How long will this fashion continue? I reckon another 18 months – virtualisation is already about 18 months into its cycle, and most IT fashions last about three years.
What is the next fashion, easy, black. Black is the new black.
What will the future be like, simple it will be different.
Varying degrees of separation
Perhaps Dale will clarify: what I find interesting about consolidation and virtualisation today is the continuum of degrees of separation which can now be achieved:
Resource Management (most Unices)
OS level virtualisation (zones/containers)
Hypervisor based hardware virtualisation
Hardware partitioning (physical domains)
It is particularly the middle two which are of interest: the former allows separate "virtual systems" within a single OS instance, while the latter in its various guises allows commodity hardware to be sliced up for multiple virtual hosts.
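To illustrate the shallowest rung of that continuum – resource management within a single OS instance – the kernel's ordinary rlimits will do. A minimal Python sketch (the five CPU-second cap is arbitrary):

    import os
    import resource

    # Fork a child and cap it at five CPU-seconds. This is resource
    # management inside one OS instance: no separate kernel, no separate
    # filesystem, just a limit enforced by the shared kernel.
    pid = os.fork()
    if pid == 0:
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
        while True:   # spin until the kernel terminates us for exceeding the cap
            pass
    else:
        os.waitpid(pid, 0)
        print("child hit its CPU limit and was terminated by the kernel")

Everything further along the continuum pushes that separation deeper down the stack, all the way to dedicated physical hardware.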