How do you choose a hypervisor?
Flexibility leads to complex choices
Workshop: We know that the majority of organisations we survey have adopted server virtualisation to support their server consolidation activities, and are reaping the benefits.
However, there is more to server virtualisation than simply supporting the consolidation of workloads onto a reduced set of servers. Moving beyond mere consolidation brings the potential of even greater advances - resource pooling, workload migration and high availability to name just a few. Underpinning the move to virtualisation is the hypervisor. Choosing one should be simple, or so you would have thought. Pick one and standardise on it – what could possibly go wrong?
Looking at where we have got to helps illuminate some of the issues around choosing a virtualisation platform. A large number of companies have either substantially completed their Windows server consolidation projects or have them as a current priority. Linux server consolidation tends to lag a little behind Windows, but follows the same basic trend.
Looking beyond server consolidation, we see that few companies have moved beyond consolidation and implemented advanced virtualisation technologies, such as resource pooling or dynamic provisioning and workload management. Most companies seem content to benefit from consolidation rather than investing further in advanced virtualisation, which could involve a much greater – and often unrecognised – change.
On the application side the picture is murkier, simply because of the variety involved. What does come through strongly is that application, departmental and workgroup servers are high on the virtualisation list, while the more “core” infrastructure applications – email, ERP, CRM and database management systems – sit further down.
So what is this telling us from a hypervisor perspective? Let’s consider the options. Broadly speaking, there are three main vendors to choose from – VMware, Microsoft and Citrix – with a host of other vendors also offering suitable products. Depending on the virtualisation need, any number of them may be suitable. There are free or near-to-free versions – with support at extra cost – at the entry level, ranging up to eye-watering prices at the other end. Most support a good range of guest operating systems, many with enhancements such as para-virtualisation drivers or integration environments to increase performance. So the choice really comes down to a combination of what suits the budget, applications, performance requirements, skills and hardware that are available.
We know many companies prefer to choose a single server vendor, or to keep the choice to a small, manageable number. Does this mean you should choose one hypervisor vendor and stick with it through thick and thin? The answer is that standardising on a single hypervisor is a noble aim, but likely also a losing battle.
Many of the workloads being virtualised sit at the department level, and many companies also have servers located and managed outside of the data centre, in local or regional offices. In the absence of a standardised virtualisation solution mandated by central IT, virtualisation in these situations happens “by the back door”, resulting in a variety of platforms and tools being implemented, often selected solely on the basis of familiarity. This is not necessarily a bad thing, but it is one that needs to be recognised and monitored as it arises.
It is possible to take a more prescriptive approach to virtualisation in this situation, by maintaining an approved list of solutions that may be used or supported to limit the scope of choice, perhaps with guidelines on choosing the appropriate solution and gaining approval for the installation. Doing so limits the risk that comes from giving free choice full rein, and helps rein in the fragmentation that may otherwise creep in over time. Taking this approach may also make it much simpler to manage things like physical-to-virtual migrations, or moving virtual systems between virtualisation vendors’ solutions as business requirements change.
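The approved-list idea above amounts to a simple policy check at deployment time. As a minimal sketch – the tier names, platform names and mappings below are entirely hypothetical, not a recommendation – it might look like this:

```python
# Hypothetical approved-list policy: which hypervisors central IT
# will support for each class of workload. Names are illustrative only.
APPROVED = {
    "branch-office": {"Hyper-V", "XenServer"},
    "departmental": {"Hyper-V", "vSphere"},
    "datacentre-core": {"vSphere"},
}


def approval_needed(workload_tier: str, hypervisor: str) -> bool:
    """Return True if the requested hypervisor is outside the approved
    list for this tier, and so requires central IT sign-off."""
    allowed = APPROVED.get(workload_tier, set())
    return hypervisor not in allowed


# A departmental deployment of an unlisted platform triggers review:
print(approval_needed("departmental", "KVM"))         # True
print(approval_needed("datacentre-core", "vSphere"))  # False
```

The point is not the code itself but the discipline: requests outside the list are not forbidden, they simply go through an approval step, which keeps back-door fragmentation visible.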
As we move more into the computing core of the data centre, things become more complex - particularly as we consider implementations of advanced features of virtualisation. This complexity tends to focus attention on the “main hypervisor platform” to achieve the level of integration and management required - but there may still be many exceptions that require supporting alternative hypervisors.
Virtualisation solutions also have different strengths and weaknesses. Some are strong in ultimate hardware performance, others in areas such as high-availability and workload migration. This may dictate that certain workloads, if virtualisation is a must, will demand a certain hypervisor, or even hypervisor and hardware combination. If this is different to the preferred choice, then multi-vendor virtualisation will inevitably result.
Certain applications often depend on a defined “stack” that can include multiple elements, the net effect of which may dictate the hypervisor required. If this is not the hypervisor of choice, the question becomes one of risk: run the application on the standard hypervisor but unsupported, or skill up and implement another hypervisor to run it with vendor support.
The price points of different solutions vary widely. Trying to move all workloads to run under a single platform may not make enough sense from an operations perspective to offset the increased licensing costs of the all-singing, all-dancing high-end platform. Having a tiered approach with price points and feature sets that are better tuned to different types of applications may be a better way forward.
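To make the tiering argument concrete, here is a back-of-the-envelope comparison. All figures are invented per-host licence prices for illustration only, not real vendor pricing:

```python
# Made-up estate: a few hosts that genuinely need high-end features,
# many that only do basic consolidation. Prices are illustrative.
hosts = {"high-end-features": 10, "basic-consolidation": 40}

# Option 1: put every host on the premium SKU at 3,000 per host.
single_platform_cost = sum(hosts.values()) * 3000

# Option 2: tiered licensing matched to what each pool actually needs.
tiered_prices = {"high-end-features": 3000, "basic-consolidation": 500}
tiered_cost = sum(n * tiered_prices[tier] for tier, n in hosts.items())

print(single_platform_cost)  # 150000
print(tiered_cost)           # 50000
```

Even with generous allowance for the extra operational overhead of running two tiers, a gap of that order is hard to ignore – which is exactly why a single high-end platform for everything rarely survives contact with the finance department.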
In a nutshell what we are trying to say is that standardising on a single hypervisor may result in paying more than is needed for licences, hardware and support or for staff operations. Having a choice, even if it is quite tightly limited, allows a variety of price points or performance levels to be met without impacting too significantly on virtualisation operations, risk exposure or manageability. ®
The inclusion of multiple hypervisors will lead to higher staffing support costs, additional command and control management platforms, and less standardization on hardware platforms. In other words, having more than one perpetuates the sins of IT past. It doesn't simplify business solution delivery; using multiples lends itself to more silos of stuff.
Everything that is discussed regarding multiple price points in the article is possible from a single-hypervisor perspective; it just takes some forethought about how service delivery and licensing are managed. If a pool of resources doesn't offer the latest and greatest in DR or HA capabilities, then the end user (or department) shouldn't get charged for it. That doesn't mean you have to change hypervisors, just change which add-ons are purchased for that pool of resources. By doing so, you stay standardized across the datacenter and regional offices without the need to support an ever-increasing ball of complexity.
Virtualization done right is hard enough on its own. Organizations should avoid falling into the trap of defining their virtualization tools as tactical solutions to localized problems. It's much larger than that. Virtualization done well is a strategic position that maximizes standardization, ROI, and time to market.