Original URL: http://www.theregister.co.uk/2011/07/13/vmware_esxi_5_0_analysis/

VMware taxes your virtual memory

The hidden cost of vSphere 5

By Timothy Prickett Morgan

Posted in Cloud, 13th July 2011 04:11 GMT

Analysis The rumored feeds and speeds of the latest ESXi 5.0 hypervisor at the heart of VMware's just-announced vSphere 5.0 server virtualization stack were pretty much on target and something that customers will applaud.

But no one had heard about VMware's new pricing model for the vSphere 5.0 software, which attaches a fee both to each socket in a physical server running vSphere and to the amount of virtual memory the hypervisor makes use of.

The latter is a big change, and one that is bound to have IT shops out there reaching for the backs of their drinks napkins and their calculators to see how the price change will affect their virtualization budgets.

VM monster: How VMware sees its new hypervisor

But before we get into the price changes with the vSphere 5.0 server virtualization tools, let's go over the feeds and speeds of the ESXi hypervisor. Remember, there is no ESX Server hypervisor with the management console bundled in any more; ESX Server 4.1 was the last release of that style of hypervisor from VMware. By removing the console, the hypervisor shrinks from around 2GB to around 100MB in the 5.0 release and, more importantly, has a lot fewer elements to secure and patch.

Now VMware is down to one hypervisor, and that simplifies testing and certification for ISV partners as well as making the freebie ESXi hypervisor the heart of the vSphere stack. One bare-metal hypervisor is enough, and the wonder is why VMware didn't make this change before.

Virtual line of succession

It is hard to remember how primitive the original ESX Server 1.0 bare metal hypervisor was when it came out in 2001. It was arguably the best x86 hypervisor out there, but it didn't have much scalability at all, and hence the product was limited to its design goal, which was to help automate the development and test environments where production applications are born but don't live.

The virtual machines that could run inside the ESX Server 1.0 hypervisor would be barely able to do any useful work these days. A guest VM could have a single virtual CPU (meaning a single core or a single thread if the processor had Intel's HyperThreading implementation of simultaneous multithreading) and could have at most 2GB of virtual memory. The VM could deliver about 500Mb/sec of network bandwidth on a virtual LAN connection and under 5,000 I/O operations per second (IOPs) on virtual disks. (That's a little more oomph than a fondleslab has these days).

With ESX Server 2, launched in 2003, the guest VM included a feature called VirtualSMP, which allowed that VM to span two cores on a dual-core processor or two sockets on a server using single-core processors. This was a nifty feature, and one that immediately made ESX Server more useful for production workloads such as Web, print, and file servers.

ESX Server 2 topped out at 3.6GB of virtual memory, 900Mb/sec of virtual network bandwidth, and about 7,000 IOPs for virtual disk per VM. (Depending on the capabilities of the underlying hardware, of course. Hypervisors cannot magically make up network and I/O bandwidth, although through over-committing and memory ballooning, they can make a certain aggregate amount of physical memory look larger to the operating systems running inside the VMs than it really is.)

House of vSphere

For the 2006 launch of ESX Server 3.0 and the Virtual Infrastructure 3.0 stack, VMware spent years gutting the code in the hypervisor to make VirtualSMP scale better. Even then, with the advent of dual-core processors, guest VM scalability fell somewhat behind what the underlying hardware could deliver, which was frustrating to many customers. However, the real problem with putting big jobs like application, email, and database servers onto hypervisors was not virtual CPU scalability, but rather memory capacity and network and disk I/O scalability.

With the ESX Server 3/VI3 stack, VMware pushed VirtualSMP to four cores (or threads) and boosted virtual memory to 64GB; more importantly, network bandwidth coming out of a VM was pushed up to 9Gb/sec and disk I/O to 100,000 IOPs. The assumption, of course, is that you have a server with enough disk drives and network interfaces to support those rates, which in this case come from the fastest four-socket x64 servers on the market at the time.

With the ESX Server 4.0/vSphere 4.0 stack in 2009, VirtualSMP capability was doubled again to eight cores, virtual memory was quadrupled to 255GB (not 256 despite what the presentations say), network bandwidth rose to 30Gb/sec, and disk IOPs topped out at 300,000. The ESX Server and ESXi hypervisors could span as far as 128 cores and address as much as 1TB of physical memory.

With ESX Server and ESXi 4.1 Update 1 last fall, the hypervisor was updated to span as many as 160 cores, matching the scalability of a 16-socket server using Intel's ten-core "Westmere-EX" Xeon E7 processor. Thus far, no one has delivered such a 16-socket box, but IBM is rumored to be working on one.

The monster VM

With no 32-socket x64 servers on the horizon at the moment, and with AMD committed to a ceiling of four sockets with its impending 16-core "Interlagos" Opteron 6200 processors, VMware does not have to boost the scalability of the hypervisor in terms of the number of cores it can span. But it did have to boost the memory scalability to 2TB for the ESXi 5.0 hypervisor, which it has done.

Within the ESXi 5.0 hypervisor, a guest VM can now span up to 32 virtual CPUs (that's cores or threads if you have HyperThreading turned on). A guest VM can also address as much as 1TB of virtual memory, and virtual network bandwidth can be pushed as high as 36Gb/sec and virtual disk can be cranked up to 1 million IOPs. The ESXi 5.0 hypervisor supports disk drives that are larger than 2TB, and can host as many as 512 VMs on a single host (up from 320 on the prior 4.X hypervisors).

These figures, says Raghu Raghuram, general manager of virtualization and cloud platforms at VMware, were obtained on a heavily configured eight-socket x64 server. The main thing is that the VM can support just about any large physical workload out there.

The one thing that VMware has been very careful never to talk about in all these years of server virtualization is how much overhead ESX Server or ESXi subtract from the system as they virtualize CPU, memory, disk, and network capacity. The benefits of virtualization far outweigh that overhead butcher's bill, no doubt, but only a fool pretends it isn't there.

Price school

VMware has not yet put out the configuration maximum tech specs as it has for prior ESX releases, but you can see a snippet of the feeds and speeds on VMware's Facebook page and compare them against the configuration maximums for vSphere 4.

It is not clear if the ESXi 5.0 hypervisor is ready to go on the impending "Sandy Bridge" Xeon E5 processors from Intel, as well as the forthcoming Opteron 6200 and 4200 chips. What Bogomil Balkansky, vice president of marketing at VMware, could say is that some servers using a variety of chips will be ready when the new hypervisor ships, while others may take a month or two to get certified. It is hard to believe that support for the forthcoming Xeon and Opteron chips is not in there, and was not in beta testing at least six months ago.

To prepare customers for the price changes coming with the vSphere 5.0 stack, VMware has put together a licensing and pricing guide, which explains how the new pricing scheme works.

With prior releases of the VI3 and vSphere 4 stacks, each software bundle, called an edition, was limited in the number of cores and the amount of memory it could address. The four lower-end vSphere bundles - Essentials, Essentials Plus, Standard, and Enterprise - were restricted to machines that had six or fewer cores per socket, and only the two high-end editions - Advanced and Enterprise Plus - could be used on machines with more than six cores per socket.

All of the vSphere 4.X editions had physical (not virtual) memory caps as well, set at 256GB of main memory, and only the Enterprise Plus edition could run on machines with 1TB of physical memory.

Rotten to the memory, agnostic to the core

Starting with vSphere 5.0, VMware is getting rid of any core restrictions and will license the vSphere software on a per-socket basis, regardless of the number of cores in each socket. This is necessary not only because of the increasing number of cores per socket coming from Intel and AMD, explains Balkansky, but also because the definition of a core is becoming fuzzy.

Take the "Bulldozer" cores coming from Advanced Micro Devices later this quarter. These are not really standalone cores as we think of them in prior Opterons and in current and future Xeons, but rather pairs of integer and floating point units hooked into shared schedulers and caches. The resulting pair is a kind of fractional core, somewhere between one and two depending on how the application wants to see it. (For instance, the floating point unit can act as two independent 128-bit FPs or as a single 256-bit FP, depending on what the application asks of it.)

Customers were annoyed by the core restrictions in the various VMware vSphere releases, too, and VMware was annoyed about having to count cores and to explain to customers they needed to buy extra licenses of the software to run the ESX Server or ESXi hypervisor on a machine that had more than six cores per socket. By the way, here's an important bit of information: The core and memory caps that VMware put into its lower-end releases are completely arbitrary. If you want to use the entry Essentials or Essentials Plus version of the vSphere stack on a machine with two eight-core sockets, you just double up the licenses to a total of four sockets and the hypervisor will see all 16 cores just fine.

Memory makeup

If VMware eliminates the physical core caps, it has to make up the money somewhere, and the idea now is to charge customers for the amount of virtual memory they configure on each hypervisor. While the physical memory caps have been removed with vSphere 5.0, VMware is limiting the amount of virtual memory a license entitles VMs to address, and to address more virtual memory, customers have to buy more licenses per socket.

With the vSphere 5.0 editions, the Advanced Edition is now dead. Customers who have Advanced Edition can move to Enterprise Edition when they upgrade. The licenses to the Essentials, Essentials Plus, and Standard Editions are capped at 24GB of virtual memory per socket (and you buy licenses on a per-socket basis, of course). The Enterprise Edition has a cap of 32GB of virtual memory per socket, and the Enterprise Plus Edition has a cap of 48GB of vRAM, as VMware calls it, per socket.

Call it the virtual memory tax.

There are still caps on the number of virtual CPUs that can be used in a single guest VM, by the way, just as there were in prior vSphere versions. The Standard Edition costs $995 per server socket, is capped at eight vCPUs per VM, and carries that 24GB vRAM entitlement per socket. Standard Edition has VMotion live migration, which allows running VMs to teleport from one physical server to another, as well as high availability and disaster recovery features.

The Enterprise Edition has its 32GB vRAM entitlement per socket and the same eight vCPUs per VM, and tosses in storage APIs, Storage VMotion, Distributed Resource Scheduler load balancing, and Distributed Power Management (which consolidates VMs onto as few servers as possible and shuts down unused ones). It costs $2,875 per socket. The full-tilt-boogie Enterprise Plus Edition has all of the vSphere bells and whistles, including the distributed switch and the new Auto Deploy and Storage DRS features. It also allows up to 32-way VirtualSMP per VM, carries the 48GB vRAM entitlement per socket, and costs $3,495 per socket.
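
To keep the numbers straight, here is a minimal sketch in Python that gathers the per-socket prices and vRAM entitlements quoted above into one table; the two-socket Enterprise host at the end is a hypothetical configuration, not one from VMware's guide.

```python
# Per-socket prices and vRAM entitlements for the vSphere 5.0 editions quoted
# above. Essentials and Essentials Plus share Standard's 24GB entitlement, but
# the article gives no prices for them, so they are left out of this table.
EDITIONS = {
    # edition:           (price per socket USD, vRAM GB per socket, vCPUs per VM)
    "Standard":        (995,  24, 8),
    "Enterprise":      (2875, 32, 8),
    "Enterprise Plus": (3495, 48, 32),
}

# Hypothetical example: a two-socket host licensed with Enterprise Edition
# contributes 2 x 32GB = 64GB of vRAM entitlement and costs 2 x $2,875.
price, vram_gb, vcpus = EDITIONS["Enterprise"]
print(2 * vram_gb, 2 * price)   # 64 5750
```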

With the average customer configuring about 3GB of vRAM per VM today, Balkansky says that the vast majority of VMware's customers won't see much of a change in moving from vSphere 4.X to vSphere 5.0 in terms of licensing. But what is clear is that customers that want to consolidate lots of cores into the fewest number of sockets possible and get good VM performance by adding lots of main memory are going to pay.

For the sake of simplicity

Let's take a big, fat, eight-socket Xeon E7 server and push it to the limits. For the sake of argument, let's assume that vRAM and physical main memory are the same. So if you have an 80-core machine with 2TB of main memory, you would need 43 Enterprise Plus licenses, not eight. That is $150,285 instead of $27,960, which is one hell of a price increase.
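
Here is that arithmetic as a minimal sketch, assuming, as the example above does, that the vRAM configured equals the full 2TB of physical memory and that the whole box is licensed with Enterprise Plus:

```python
import math

SOCKETS = 8
VRAM_GB = 2048              # 2TB of memory, all of it handed out as vRAM
PRICE_PER_SOCKET = 3495     # Enterprise Plus price per socket
VRAM_PER_LICENSE_GB = 48    # Enterprise Plus vRAM entitlement per license

# You need at least one license per socket, plus enough licenses to cover
# all of the vRAM you want to configure.
licenses = max(SOCKETS, math.ceil(VRAM_GB / VRAM_PER_LICENSE_GB))
print(licenses)                       # 43, not 8
print(licenses * PRICE_PER_SOCKET)    # 150285 under vSphere 5.0 pricing
print(SOCKETS * PRICE_PER_SOCKET)     # 27960 under straight per-socket pricing
```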

But the situation is not as simple as that example outlines. While VMware is charging for incremental virtual memory access, virtual memory entitlements are pooled within an edition and across physical machines. So if you have a pod of machines running the Enterprise Plus Edition, for instance, you could buy 44 licenses, scatter them across 22 two-socket machines, and have a virtual memory pool of just over 2TB.
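
As a rough sketch of how that pooling adds up, the entitlements are simply summed across hosts licensed with the same edition; the split of vRAM across hosts below is hypothetical, chosen to mirror the scenario in the next paragraph:

```python
# 44 Enterprise Plus licenses at 48GB of vRAM apiece, spread across 22
# two-socket machines, give a pooled entitlement of just over 2TB.
licenses = 44
pool_gb = licenses * 48
print(pool_gb)                              # 2112GB

# Compliance is judged against the pool, not per host, so one needy box can be
# configured with 1TB of vRAM while the other 21 hosts run lean on 48GB each.
vram_per_host_gb = [1024] + [48] * 21
print(sum(vram_per_host_gb))                # 2032GB configured
print(sum(vram_per_host_gb) <= pool_gb)     # True: still within the entitlement
```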

If one machine suddenly needed 1TB of virtual memory and a whole bunch of the other machines in the pool were running lean on vRAM, the ESXi 5.0 hypervisor on the needy box would grab the physical memory it needs to create the virtual memory required for the job, and it is no big deal. The vCenter Server console takes care of monitoring and managing the process and the license compliance.

The interesting effect of this is that customers will have to plan very carefully what they expect their physical CPU and memory configurations to be to underpin their virtual server instances, particularly since VMware is not allowing vRAM pooling across vSphere editions. The reason it is not allowing that is obvious: everyone would buy the cheapest license possible to get the most vRAM.

The idea with the new pricing scheme was to make licensing simpler, but VMware may have just made it more complex. It could have simply said ESXi 5.0 costs $995 per socket plus a certain amount for physical memory, and then put out list prices for activating features. That is apparently too much a la carte for VMware's liking. ®