VMware unmasks next-gen hypervisor
Cloud eats ESX 4.0
As expected, server virtualization kingpin VMware will today take the wraps off its next-generation hypervisor, ESX Server 4.0, and the related tools for managing it. The stack is now called vSphere rather than Virtual Infrastructure.
The vSphere stack embodies a strategy and product set that VMware used to call the Virtual Data Center Operating System, or VDC-OS. Now, says Bogomil Balkansky, vice president of marketing at VMware, it goes by the name Cloud OS.
Call it by any name you want, but ESX Server 4.0 is still a hypervisor that virtualizes compute, storage, and network resources on x64 servers, plus a bunch of features that plug into or wrap around that hypervisor to let virtual machines do neat things, like teleport around networks of machines or back each other up.
And despite all the different names that VMware has come up with for the vSphere package - vCompute, vNetwork, and vStorage, all part of what VMware chief executive officer Paul Maritz called the "21st century software mainframe" at EMC's analyst conference in early March - most of the features in vSphere are, according to Balkansky, in the hypervisor.
That's ESX Server 4.0 for servers and ESXi 4.0 for the embedded version that ships on flash drives inside servers. But you won't see VMware saying ESX Server much in the announcement today or in its marketing materials. And there is not a set of products called vCompute and then another called vStorage and yet one more called vNetwork. These are just aspects of the ESX Server hypervisor, with some features being truly bolted on from the outside.
A funny aside about names and marketing. For many years now, the sources of the names for VMware's GSX Server, the type 2 hypervisor that came to market first and put VMware on the server map, and ESX Server, the type 1 or bare-metal hypervisor that followed it and that accounts for most of VMware's revenues and profits these days, have been a mystery. As it turns out, VMware hired a consultant way back when, and this consultant came up with the names "Ground Swell" for the variant of VMware Workstation tweaked for servers and "Elastic Sky" for its bare-metal, more capable follow-on.
At the last minute, the marketing and product people chickened out and changed them to GSX and ESX and slapped "server" on the two monikers. As for vSphere, which sounds a bit too much like IBM's WebSphere middleware and its LotusSphere trade show, Balkansky says that the people in the company voted on a whole bunch of names, and vSphere is the one people liked best.
Maritz already went through the reasoning behind vSphere in March and will no doubt go into it again at the launch event in Palo Alto. The feeds, speeds, packaging, and pricing are what is really news today. The ESX Server 4.0 hypervisor comes a long way toward getting the hypervisor in better synch with multicore processors and the kind of main memory and I/O bandwidth modern applications require whether they are on virtual or physical servers.
With ESX Server 2.X, the hypervisor could span one or two processors and each VM could handle 4 GB of memory. Network I/O was under 300 Mb/sec and disk bandwidth was under 10,000 I/O operations per second (IOPS). The hypervisor was wickedly underpowered, to say the least. With ESX Server 3.X, the hypervisor could span up to four processor cores (or two cores if they have HyperThreading, which Intel has for its Xeon chips but which Advanced Micro Devices does not for its Opteron chips). That generation of hypervisor could allocate a maximum of 64 GB of memory to a single VM, network bandwidth grew to 9 Gb/sec, and disk IOPS went up by an order of magnitude to 100,000.
With ESX Server 4.0, VMware is boosting the CPU count in a single VM to eight (that's eight cores with HyperThreading off and four cores with HyperThreading on), and each VM can have up to 255 GB of memory allocated to it (not 256 GB, but 255 GB, according to Balkansky). Network bandwidth has risen by more than a factor of four to 40 Gb/sec, and a single hypervisor can cope with more than 200,000 IOPS of disk bandwidth. This is a massive increase in capacity and bandwidth.
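To put the generational jump in perspective, the multipliers are easy to tot up. Here's a quick sketch using the ESX Server 3.X and 4.0 figures above (the labels and code are ours, not VMware's):

```python
# Per-VM limits quoted above for the two most recent generations.
limits = {
    "ESX 3.x": {"vcpus": 4, "mem_gb": 64,  "net_gbps": 9,  "iops": 100_000},
    "ESX 4.0": {"vcpus": 8, "mem_gb": 255, "net_gbps": 40, "iops": 200_000},
}

def jump(metric):
    """Growth multiplier from ESX 3.x to ESX 4.0 for one metric."""
    return round(limits["ESX 4.0"][metric] / limits["ESX 3.x"][metric], 1)

print(jump("net_gbps"), jump("mem_gb"), jump("vcpus"))
```

The network jump works out to roughly 4.4X, in line with the "more than a factor of four" claim.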
The 21st century software mainframe that Maritz was talking about is what happens when you plunk the vSphere software on a whole bunch of servers all clustered together. In one potential configuration of vSphere, customers can put up to 32 64-core Xeon servers together, with 2,048 processor cores supporting 1,280 VMs, 32 TB of aggregate main memory, and 16 PB of storage, delivering 3 million IOPS. VMware's Distributed Resource Scheduler automatically balances the VMs and their workloads, and a single instance of the vCenter management tools can be used to manage the whole shebang.
(Assuming a maximum of 20 VMs per core, such a "giant computer," as the current presentations say - 21st century software mainframe was more accurate and fun - might have 81,920 VMs, which is a huge number.)
Another comparison I saw in VMware's specs spanned up to 512 two-socket servers - 4,096 processor cores in all - organized into 16 sub-clusters of 32 nodes each, with the same memory and I/O capacities. No matter how you build this "software mainframe," each ESX Server 4.0 hypervisor instance can span as many as 64 cores and 512 GB of main memory. If an x64 server has more resources than this, you have to plunk down multiple licenses of ESX Server.
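The 20-VMs-per-core ceiling works out neatly for the 512-server configuration. A back-of-the-envelope sketch (the per-server core count is our back-calculation from the 4,096-core total):

```python
# Back-of-the-envelope VM count for the 512-server configuration above,
# using the 20-VMs-per-core ceiling mentioned in the presentations.
servers        = 512
cores_per_node = 4_096 // 512      # 8 cores per two-socket box
vms_per_core   = 20

total_cores = servers * cores_per_node
max_vms     = total_cores * vms_per_core

print(total_cores, max_vms)
```

That's where the 81,920-VM figure comes from: 4,096 cores times 20 VMs apiece.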
The vNetwork part of the ESX Server 4.0 hypervisor that everyone has been chattering about puts a virtual network switch inside of a virtual machine and lets VMs running operating systems and applications talk to this virtual switch instead of real ones. VMware has created its own virtual switch, which is included with the vSphere package, but the company has also worked with networking giant Cisco Systems to put that company's NX-OS switch operating system in an ESX Server wrapper and let it manage the networking for VMs.
The beauty of this is that Cisco network managers who are dealing with a Nexus 1000V virtual switch use the exact same tools as they would use to manage real Cisco routers and switches. Other switch vendors have not come forward to slide their switches into VMs and become part of the vNetwork stack, but they will be encouraged to do so - not just by VMware, but by their customers.
On the storage front, ESX Server's vStorage features include VMDirectPath I/O, which allows a VM to bypass the hypervisor, bind directly to an I/O device (such as a disk controller or a network card), and run at native speeds. Think of it as I/O paravirtualization, Balkansky says. Anyway, this not only can boost performance, it allows devices that ESX Server does not support directly to be linked to VMs. However, once you use VMDirectPath I/O, you sacrifice much of the virtualness of the VM, and it is no longer mobile.
The existing ESX Server 3.X hypervisor already allowed a kind of thin provisioning for main memory, letting up to twice as much memory as exists in the system be allocated to VMs, since most VMs don't use all the memory they are given - or rather, the operating systems they host do not.
With ESX Server 4.0, the disk side of the house is getting thin provisioning, which lets you give a virtual machine, say, 2 GB of virtual storage to make an operating system happy, even though the disk space it actually consumes might be only 20 per cent of that. As the VM needs more disk space for data, it is pulled from its 2 GB allotment.
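The mechanism is simple enough to mimic in a few lines. Here's a toy model - illustrative only, not VMware's implementation - of a virtual disk that reports its full provisioned size to the guest while consuming backing storage only as data is written:

```python
class ThinDisk:
    """Toy model of a thin-provisioned virtual disk (illustrative only)."""

    def __init__(self, provisioned_gb):
        self.provisioned_gb = provisioned_gb   # size the guest OS sees
        self.used_gb = 0.0                     # storage actually backed

    def write(self, gb):
        # Backing storage grows on demand, up to the provisioned ceiling.
        if self.used_gb + gb > self.provisioned_gb:
            raise IOError("VM hit its provisioned ceiling")
        self.used_gb += gb

disk = ThinDisk(provisioned_gb=2)
disk.write(0.4)        # guest believes it has 2 GB but uses 20 per cent
print(disk.used_gb, "of", disk.provisioned_gb, "GB actually consumed")
```

The guest sees 2 GB throughout; the array only coughs up real blocks as the VM fills them.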
With the code optimizations in ESX Server 4.0, the thin provisioning of memory and disk, improvements in bandwidth for network and disk, and support for more cores in the hypervisor, Balkansky says that VMware can get 30 per cent more virtual machines onto a physical server. And thanks to the distributed power management features (which were already in ESX Server 3.5), the VMotion teleporting features can be used to power servers up and down as needed, consolidating as many VMs onto as few physical servers as possible, yielding a 20 per cent reduction in power and cooling costs. The thin provisioning feature of ESX Server can cut storage costs in half as well.
With ESX Server 3.X, VMware had three different Virtual Infrastructure 3 bundles - Foundation, Standard, and Enterprise. Four of the six vSphere bundles overlap these three VI3 bundles in features and prices, with a lower-cost option tossed in at the bottom and a higher-priced one added at the top.
vSphere Essentials includes the ESX Server 4.0 hypervisor and its patch manager, management agents for VMs, and a management server. It costs $995 for a license that spans three physical two-socket servers, or a low of $166 per socket. The Essentials Plus bundle, which overlaps with the old Foundation bundle for ESX Server 3.5 in terms of features, adds high availability and data protection features for those three servers, and costs $2,995. If you are not catching it, there is no option to buy these two Essentials packages for just one or two servers, even though they are aimed at SMBs.
Moving on up into the data center, you have vSphere Standard, which rolls up the hypervisor (either ESX Server 4.0 or the embedded ESXi 4.0), thin provisioning, high availability, and the management agents for the hypervisor and VMs for $795 per processor socket. vSphere Advanced adds VMotion live migration, network security zoning (vShield Zones), data protection, and continuous availability (VMware Fault Tolerance) for $2,245 per processor socket.
vSphere Enterprise adds distributed resource allocation, power management, and storage live migration, and it costs $2,875 per socket. These are essentially the same prices as VMware was charging for the VI3 stacks. vSphere Enterprise Plus adds the distributed software switch capabilities and host configuration controls and raises the price to $3,495 per socket.
VMware is offering upgrades from VI3 Standard to vSphere Advanced for $745 per socket and from VI3 Enterprise to vSphere Enterprise Plus for $295 per socket.
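For anyone totting up a budget, the per-socket prices reduce to simple multiplication. A quick sketch using the list prices above (the Essentials kits are priced per three-server pack, so they are left out):

```python
# Per-socket US list prices for the data center vSphere bundles, as
# quoted above.
per_socket = {
    "Standard":         795,
    "Advanced":        2245,
    "Enterprise":      2875,
    "Enterprise Plus": 3495,
}

def license_cost(bundle, sockets):
    """List cost of licensing a number of processor sockets."""
    return per_socket[bundle] * sockets

# Eight two-socket servers (16 sockets) on the Enterprise bundle:
print(license_cost("Enterprise", 16))
```

Sixteen Enterprise sockets, for instance, comes to $46,000 at list before any upgrade discounts.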
While the vSphere software is being announced today, the ship date has not been set yet. But VMware is telling customers that ESX Server 4.0 and ESXi 4.0 should be ready by the end of the quarter. ®