Excessively fat virtual worlds – come on, it's your guilty secret

Does my server look big in this?

Now that virtualisation is seen as a robust and mature technology, managers and administrators are looking to reduce server deployment and management costs further.

One area of potential cost reduction is reclaiming unused or under-utilised infrastructure capacity. Most virtual estates that have grown organically over the years are chock-full of virtual machines that are massively over-specified and therefore inefficient. This isn't always the administrator's doing; it can result from a number of factors.

You can cut the size of your estate: I know, as I’ve seen compute footprints slashed by 60 per cent. That’s a figure to admire, because it means saving a bunch of hardware and related costs.

How do you hit that or similar numbers? The road to recovery starts with understanding, so before I get to the “how”, it’s worth looking at why you ended up with a fat compute estate in the first place.

In my experience, these are the five most common mistakes and factors:

  • Failing to resize physical infrastructure appropriately when migrating to virtual environments. When performing physical-to-virtual migrations, inexperienced administrators often fail to review, or even consider, the utilisation of the physical machine over a reasonable period to establish indicative compute requirements (a minimal sizing sketch follows this list).

    I have seen huge physical machines with in excess of 16 cores and 64GB RAM ported directly into a virtualised environment, where they sit using almost no resources.

  • Software vendors frequently overstate resource requirements. Vendors seem to have an unfounded aversion to virtualisation and tend to over-specify requirements “just to be sure”. This isn't the 1990s, when you bought a host to run a single application and left it to tick away, wasting 90 per cent of its CPU and RAM.

  • Business functions demanding CPU and RAM reservations to ensure that their application never runs short of resource capacity. In a well-monitored, proactively managed environment, such shortages are unlikely anyway.

    The demand stems from the perception that shared resources are in short supply and that a virtual machine should be treated the same as a physical host, where over-specifying is rampant because the hardware upgrade process is so long-winded.

    In a virtual environment, extra CPU, RAM and disk can be added with zero downtime in the right circumstances.

  • The illusion that virtual machines have no cost. This is perhaps the most dangerous type of thinking, and it appears to be the most pervasive among management.

    People often think that virtual machines are not “real” and that resources are completely elastic and have no real cost.

    With hypervisor software typically costing several thousand dollars per socket and servers upwards of $16,000 per host – not to mention expensive SAN storage, networking, power and cooling – virtualisation is anything but cheap (a rough cost-per-VM calculation follows this list).

  • Business or application owners who feel they need to maintain additional capacity for month-end or year-end activities. A lot of legacy applications are designed around one or two machines and can't scale out like newer three-tier, elastic designs.

    This means that potentially massive amounts of compute are wasted for most of the year.

    I have seen businesses demand new capacity that then sits there, powered on, waiting for year end so that some monster virtual machines can be spun up to do the big number crunching. Once that's done, they are simply powered off again. The ultimate in wasted resources.
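
So how do you find the right size? It comes back to measuring real utilisation. Below is a minimal sizing sketch in Python: it assumes you have already exported CPU and memory utilisation samples for the source machine (from perfmon, sar or your monitoring tool of choice) into a CSV covering a representative period, peaks included. The file name, column names and 25 per cent headroom figure are illustrative assumptions, not a prescription.

    # Minimal right-sizing sketch. Assumes utilisation samples exported to a CSV
    # with "cpu_pct" and "mem_gb" columns, covering a representative period.
    # File name, column names and headroom factor are illustrative only.
    import csv
    import math
    import statistics

    PHYSICAL_CORES = 16   # cores in the source physical host
    HEADROOM = 1.25       # 25 per cent safety margin on observed demand

    cpu_pct, mem_gb = [], []
    with open("util_samples.csv", newline="") as f:
        for row in csv.DictReader(f):
            cpu_pct.append(float(row["cpu_pct"]))
            mem_gb.append(float(row["mem_gb"]))

    # Size to the 95th percentile rather than the absolute peak, so a one-off
    # spike doesn't drive the whole specification.
    cpu_p95 = statistics.quantiles(cpu_pct, n=100)[94]
    mem_p95 = statistics.quantiles(mem_gb, n=100)[94]

    # Convert the observed CPU percentage back into cores actually used,
    # add headroom, and round up to whole vCPUs and whole GB of RAM.
    vcpus = max(1, math.ceil(PHYSICAL_CORES * (cpu_p95 / 100) * HEADROOM))
    ram_gb = max(1, math.ceil(mem_p95 * HEADROOM))

    print(f"Suggested size: {vcpus} vCPU, {ram_gb}GB RAM "
          f"(observed p95: {cpu_p95:.0f}% CPU, {mem_p95:.1f}GB)")

Run something like that against the 16-core, 64GB machine mentioned above and the honest answer is often a couple of vCPUs and a few gigabytes of RAM.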
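
As for the “no cost” illusion, a rough back-of-envelope calculation makes the point. The $16,000 host figure is from above; the licence price per socket, socket count, three-year amortisation and consolidation ratios are assumptions for illustration, so substitute your own numbers. What matters is the shape of the result: every over-specified, reserved VM lowers the number of guests a host can carry and pushes the per-VM cost up.

    # Back-of-envelope cost per VM. The $16,000 host figure is from the article;
    # the licence price, socket count, amortisation period and consolidation
    # ratios are illustrative assumptions - plug in your own numbers.
    HOST_COST = 16_000          # dollars per host
    SOCKETS_PER_HOST = 2
    LICENCE_PER_SOCKET = 4_000  # "several thousand dollars per socket"
    YEARS = 3                   # amortisation period

    def per_vm_per_year(vms_per_host: int) -> float:
        total = HOST_COST + SOCKETS_PER_HOST * LICENCE_PER_SOCKET
        return total / vms_per_host / YEARS

    # A right-sized estate versus one stuffed with over-specified, reserved VMs.
    for ratio in (30, 10):
        print(f"{ratio} VMs per host: ~${per_vm_per_year(ratio):,.0f} per VM "
              "per year, before SAN storage, networking, power and cooling")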
