The art of optimising VM performance
Physical and virtual hiccups
While it is unfair to say (as many vendors do) that server virtualisation will take over the world during the course of the next fifteen minutes, we know from the readers of The Register that ever-expanding numbers of virtual machines (VMs) are being spun up by organisations large and small.
A primary driver for early server virtualisation projects was to consolidate the server footprint onto a smaller number of physical machines, each running multiple VMs. But are virtual systems making best use of the resources at their disposal?
You also told us that the routine management of virtualised systems can be problematic, to put it mildly. Yet for all the emphasis on server consolidation, there has been nowhere near as much discussion of how to configure the physical resources allocated to VMs at run time, or of how to size physical host servers to optimise service delivery while keeping costs under control.
So how does an administrator set about specifying the amount of RAM, disk space and I/O for each virtual machine? A good starting point is to monitor the physical resource consumption of the original servers hosting the applications over their typical work cycles. This information can then form the basis for working out how much resource each corresponding VM will consume, after adding in the overhead of the virtual server software itself.
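For the baseline itself, a simple sampling script is enough. The sketch below is a minimal example, assuming Python and the psutil library are available on the server being measured; any monitoring agent that logs the same figures will do just as well:

```python
#!/usr/bin/env python3
"""Log host resource consumption over a work cycle to build a sizing baseline.

Minimal sketch using the psutil library; the file name, interval and
duration are arbitrary examples.
"""
import csv
import time

import psutil

SAMPLE_INTERVAL = 60          # seconds between samples
DURATION = 24 * 60 * 60       # cover at least one full working day

with open("baseline.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["epoch", "cpu_pct", "mem_used_mb",
                     "disk_read_mb", "disk_write_mb",
                     "net_sent_mb", "net_recv_mb"])
    end = time.time() + DURATION
    while time.time() < end:
        cpu = psutil.cpu_percent(interval=SAMPLE_INTERVAL)  # blocks for the interval
        mem_mb = psutil.virtual_memory().used // 2**20
        disk = psutil.disk_io_counters()   # cumulative since boot:
        net = psutil.net_io_counters()     # take deltas between rows when analysing
        writer.writerow([int(time.time()), cpu, mem_mb,
                         disk.read_bytes // 2**20, disk.write_bytes // 2**20,
                         net.bytes_sent // 2**20, net.bytes_recv // 2**20])
        f.flush()
```

The peaks in the resulting file, not the averages, are what matter for sizing: a VM starved at month-end close is no consolation for a tidy mean.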
While testing and experimentation can give an idea of the needs of each individual VM, the next question is which VMs to run on which physical servers. Getting this right can be tricky, especially when it comes to satisfying the I/O requirements of the combined virtual servers and, more particularly, ensuring that all of the networking needs are adequately resourced.
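At heart this is a bin-packing problem. The toy sketch below places VMs onto hosts by RAM alone using first-fit decreasing; a real placement also has to respect CPU headroom, disk I/O and, above all, the networking constraints just mentioned:

```python
"""First-fit-decreasing placement of VMs onto hosts (toy, RAM-only sketch)."""

def place_vms(vm_ram_gb: dict[str, int], host_capacity_gb: int) -> list[dict[str, int]]:
    hosts: list[dict[str, int]] = []   # each host is {vm_name: ram_gb}
    # Place the biggest VMs first: first-fit decreasing wastes less headroom.
    for name, ram in sorted(vm_ram_gb.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if sum(host.values()) + ram <= host_capacity_gb:
                host[name] = ram
                break
        else:
            hosts.append({name: ram})  # nothing fits: start a new host
    return hosts

if __name__ == "__main__":
    vms = {"web1": 2, "web2": 2, "db1": 8, "app1": 4, "app2": 4, "mail": 2}
    for i, host in enumerate(place_vms(vms, host_capacity_gb=16), 1):
        print(f"host {i}: {host} ({sum(host.values())}/16 GB used)")
```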
Of course, real life is rarely so accommodating and we know that many of you report significant challenges in getting resource allocation right. There is a risk that virtual machines running on your systems have been allocated more physical resources than they actually require, especially if management tools cannot provide visibility into opportunities for optimising resource consumption.
If you have any examples of how to manage the physical resources allocated to each VM, we would be very interested to hear what you do. Your comments in the past have indicated that VMs tend to be set up once and then left alone until something changes the status quo, leaving open the possibility that physical resources have been allocated but never used.
We also need to remember that today the majority of environments run their virtual servers in a relatively static mode. Virtualisation holds the promise of flexibility to cater for changing workloads and business demands, but so far few organisations have taken advantage of this capability. In order to do this properly, a way of dynamically allocating physical resources to virtual machines is required. We’d like to hear about how you’re making all this work, whether you’re doing it manually or if you’re one of those ‘dynamic’ types. ®
Common sense really applies. We all know that servers generally don't need anywhere near the resources allocated to them 95% of the time - hence virtualisation's popularity.
A bog-standard Windows 2008 box generally has 2GB RAM "allocated", one vCPU assigned, and shares a 4Gbps pipe with about eight other servers. Anything a bit heavy (multiple roles, maybe an app server for Oracle Forms, etc.) gets another 2GB RAM and another vCPU.
For managing the resources, VMware vCenter does all that for me really. I split the farm into three priority groups, and technically any spare resource can be allocated to a demanding VM guest on demand, should it require more juice.
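The priority groups described here amount to a shares model: when guests contend, spare capacity is split in proportion to their weights. A rough generic illustration of the idea (not the actual vCenter/DRS algorithm, and the group names and figures are invented):

```python
"""Split spare host capacity among demanding VMs in proportion to shares.

Generic illustration of the shares idea behind resource pools; not the
actual vCenter/DRS algorithm.
"""

SHARES = {"high": 4, "normal": 2, "low": 1}  # invented weights per priority group

def divide_spare_mhz(spare_mhz: int, demanding: dict[str, str]) -> dict[str, int]:
    """demanding maps VM name -> priority group; returns extra MHz per VM."""
    total = sum(SHARES[group] for group in demanding.values())
    return {vm: spare_mhz * SHARES[group] // total
            for vm, group in demanding.items()}

if __name__ == "__main__":
    print(divide_spare_mhz(7000, {"oracle-forms": "high",
                                  "web": "normal",
                                  "test": "low"}))
    # {'oracle-forms': 4000, 'web': 2000, 'test': 1000}
```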
If the host doesn't have enough, the VM gets moved via DRS to another host that can provide the relevant power...
don't forget the host platform
Funny that monitoring the host platform isn't mentioned at all; everyone's head must be stuck in a cloud. While the figures for a specific app on a monolithic server will read one thing, your new virtual platform needs multiple CPUs and wads of RAM to support all those virtual servers. Exceed its capabilities and more than just a single application could go down.
I run 14 VMs on one quad-core box with 8GB of RAM. While no individual VM impacts the load much, all of the systems gradually eat up RAM and swap, eventually leading to the host thrashing. A simple reboot of the host system takes care of it for now, and gradually I'll move some of those systems to a new VM host. Hopefully...
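A small watchdog along the following lines would at least flag the creep before the host starts thrashing. It leans on the psutil library again, and the thresholds are arbitrary examples to tune for your own host:

```python
"""Warn before an overcommitted VM host starts thrashing (sketch)."""
import time

import psutil

MEM_PCT_LIMIT = 90.0   # warn once RAM usage crosses this
SWAP_PCT_LIMIT = 25.0  # heavy swap use is the real thrashing signal

while True:
    mem = psutil.virtual_memory()
    swap = psutil.swap_memory()
    if mem.percent > MEM_PCT_LIMIT or swap.percent > SWAP_PCT_LIMIT:
        print(f"WARNING: mem {mem.percent:.0f}%, swap {swap.percent:.0f}% "
              "-- time to migrate a guest or add RAM")
    time.sleep(300)  # check every five minutes
```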
Too long, didn't read:
"The primary benefit of virtualisation is ease of provisioning of new servers. Use it."
Tony, you say that "A good starting point is to monitor the physical resource consumption of the original servers hosting the applications over their typical work cycles" which I agree with.
The problem comes when translating this to the virtual world:
Provided your application is distributable (if not, why?!?!), you will have to break a monolithic physical server into smaller pieces, then scale up the number of servers used *as a reaction to changes in demand*. Remember, getting a clone of a server is (or should be) a simple click of a button, not a protracted ordering of hardware followed by installation woes. Why not use this?
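In script form, that "scale as a reaction to demand" loop might look like the sketch below. The clone_vm(), destroy_vm() and average_load() functions are hypothetical stubs standing in for whatever provisioning and monitoring APIs you actually have:

```python
"""Grow and shrink a pool of cloned servers with demand (sketch)."""
import random
import time

SCALE_UP_AT = 0.75     # mean utilisation above this: add a clone
SCALE_DOWN_AT = 0.25   # and below this: retire one
MIN_CLONES, MAX_CLONES = 1, 10

def clone_vm(template: str) -> str:
    """Hypothetical stub: clone a VM from a template and return its name."""
    return f"{template}-clone-{random.randrange(1000)}"

def destroy_vm(name: str) -> None:
    """Hypothetical stub: tear a clone back down."""
    print(f"retiring {name}")

def average_load(pool: list[str]) -> float:
    """Hypothetical stub: mean utilisation of the pool, 0..1."""
    return random.random()

def autoscale(pool: list[str]) -> None:
    while True:
        load = average_load(pool)
        if load > SCALE_UP_AT and len(pool) < MAX_CLONES:
            pool.append(clone_vm("app-template"))
        elif load < SCALE_DOWN_AT and len(pool) > MIN_CLONES:
            destroy_vm(pool.pop())
        time.sleep(60)
```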
The way to move to "the cloud" is by breaking up your apps and scaling according to demand.
... I suppose monolithic servers that are required for a short period of time will still fit the bill, but an always-on VM that takes up most of a node may as well simply *be* that node.