VMware unmasks next-gen hypervisor

Cloud eats ESX 4.0

As expected, server virtualization kingpin VMware will today take the wraps off its next-generation hypervisor, ESX Server 4.0, and the related tools for managing it, which are now called vSphere rather than Virtual Infrastructure.

The vSphere stack embodies a strategy and product set that VMware used to call the Virtual Data Center Operating System, or VDC-OS. Now, says Bogomil Balkansky, vice president of marketing at VMware, it goes by the name Cloud OS.

Call it by any name you want, but ESX Server 4.0 is still a hypervisor that virtualizes compute, storage, and network resources on x64 servers, with a bunch of features that plug into or wrap around that hypervisor to let virtual machines do neat things, like move around networks of machines or back each other up.

And despite all the different names that VMware has come up with for the vSphere package - vCompute, vNetwork, and vStorage, all part of what VMware chief executive officer Paul Maritz called the "21st century software mainframe" at EMC's analyst conference in early March - most of the features in vSphere are, according to Balkansky, in the hypervisor.

That's ESX Server 4.0 for servers and ESXi 4.0 for the embedded version that ships on flash drives inside servers. But you won't see VMware saying ESX Server much in today's announcement or in its marketing materials. And there is not one set of products called vCompute, another called vStorage, and yet another called vNetwork. These are just aspects of the ESX Server hypervisor, with some features truly bolted on from the outside.

A funny aside about names and marketing. For many years now, the sources of the names for VMware's GSX Server, the type 2 hypervisor that came to market first and put VMware on the server map, and ESX Server, the type 1 or bare-metal hypervisor that followed it to market and that accounts for most of VMware's revenues and profits these days, have been a mystery. As it turns out, VMware hired a consultant way back when, and this consultant came up with the names "Ground Swell" for the variant of VMware Workstation tweaked for servers and "Elastic Sky" for its bare-metal, more capable follow-on.

At the last minute, the marketing and product people chickened out and changed them to GSX and ESX, slapping "Server" on both monikers. As for vSphere, which sounds a bit too much like IBM's WebSphere middleware and its Lotusphere trade show, Balkansky says that people in the company voted on a whole bunch of names, and vSphere is the one they liked best.

Maritz already went through the reasoning behind vSphere in March and will no doubt go through it again at the launch event in Palo Alto. The feeds, speeds, packaging, and pricing are the real news today. The ESX Server 4.0 hypervisor goes a long way toward bringing the hypervisor into better sync with multicore processors and the kind of main memory and I/O bandwidth modern applications require, whether they run on virtual or physical servers.

With ESX Server 2.X, a single VM could span one or two processors and handle 4 GB of memory. Network I/O was under 300 Mb/sec and disk bandwidth was under 10,000 I/O operations per second (IOPS). The hypervisor was wickedly underpowered, to say the least. With ESX Server 3.X, a single VM could span up to four processor cores (or two physical cores if they have HyperThreading, which Intel has for its Xeon chips but which Advanced Micro Devices does not for its Opteron chips). That generation of hypervisor could allocate a maximum of 64 GB of memory to a single VM, network bandwidth grew to 9 Gb/sec, and disk IOPS went up by an order of magnitude to 100,000.

With ESX Server 4.0, VMware is boosting the CPU count in a single VM to eight (that's eight cores with HyperThreading off and four cores with HyperThreading on), and each VM can have up to 255 GB of memory allocated to it (not 256 GB, but 255 GB, according to Balkansky). Network bandwidth has risen by more than a factor of four to 40 Gb/sec, and a single hypervisor can cope with more than 200,000 IOPS of disk bandwidth. This is a massive increase in capacity and bandwidth.
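To make those generation-over-generation jumps easier to eyeball, here is a quick back-of-the-envelope sketch in Python. The figures are the ones quoted above, rounded as the article rounds them, not official VMware documentation:

```python
# Per-VM and per-host limits as quoted above, by ESX generation.
generations = {
    "ESX 2.x": {"vcpus_per_vm": 2, "vm_mem_gb": 4,   "net_gbps": 0.3, "disk_iops": 10_000},
    "ESX 3.x": {"vcpus_per_vm": 4, "vm_mem_gb": 64,  "net_gbps": 9,   "disk_iops": 100_000},
    "ESX 4.0": {"vcpus_per_vm": 8, "vm_mem_gb": 255, "net_gbps": 40,  "disk_iops": 200_000},
}

# Print the scaling factor for each metric between successive releases.
names = list(generations)
for prev, curr in zip(names, names[1:]):
    print(f"{prev} -> {curr}")
    for metric, old in generations[prev].items():
        new = generations[curr][metric]
        print(f"  {metric}: {old} -> {new} ({new / old:.1f}x)")
```

Run it and the article's claims fall out: disk IOPS grew 10x (one order of magnitude) from ESX 2.X to 3.X, and network bandwidth grew 4.4x (more than a factor of four) from ESX 3.X to 4.0.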

The 21st century software mainframe that Maritz was talking about is what happens when you plunk the vSphere software on a whole bunch of servers all clustered together. In one potential configuration of vSphere, customers can put together up to 32 64-core Xeon servers - 2,048 processor cores in all - supporting 1,280 VMs, with 32 TB of aggregate main memory and 16 PB of storage, delivering 3 million IOPS. VMware's Distributed Resource Scheduler automatically balances the VMs and their workloads, and a single instance of the vCenter management tools can be used to manage the whole shebang.

(Assuming a maximum of 20 VMs per core, such a "giant computer," as the current presentations say - 21st century software mainframe was more accurate and fun - might have 40,960 VMs across its 2,048 cores. Which is a huge number.)
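For anyone who wants to check that back-of-the-envelope math, here is a minimal sketch of the 32-host configuration described above:

```python
# The 32-host "giant computer" configuration quoted above.
hosts = 32
cores_per_host = 64
vms_per_core = 20            # the stated planning maximum

total_cores = hosts * cores_per_host         # 32 x 64 = 2,048 cores
theoretical_max_vms = total_cores * vms_per_core

print(f"{total_cores} cores, up to {theoretical_max_vms:,} VMs")
# -> 2048 cores, up to 40,960 VMs
```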

Another comparison I saw in VMware's specs seemed to span up to 512 two-socket servers - 4,096 processor cores in total, organized into 16 sub-clusters of 32 nodes each - with the same memory and I/O capacities. No matter how you build this "software mainframe," each ESX Server 4.0 hypervisor instance can span as many as 64 cores and 512 GB of main memory. If an x64 server has more resources than that, you have to plunk down multiple licenses of ESX Server.
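As a rough illustration of that per-instance ceiling, here is a sketch; the licenses_needed helper is hypothetical, not a VMware tool, and real ESX licensing terms had more wrinkles than this:

```python
import math

# Per-instance ceilings for a single ESX Server 4.0 hypervisor,
# as quoted above: 64 cores and 512 GB of main memory.
MAX_CORES = 64
MAX_MEM_GB = 512

def licenses_needed(cores: int, mem_gb: int) -> int:
    """Hypothetical helper: how many ESX Server 4.0 licenses a single
    x64 server would need under the 64-core/512 GB per-instance ceiling."""
    return max(math.ceil(cores / MAX_CORES), math.ceil(mem_gb / MAX_MEM_GB))

print(licenses_needed(64, 512))    # -> 1: fits in one hypervisor instance
print(licenses_needed(128, 1024))  # -> 2: needs two ESX Server licenses
```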

