VMware moves vSphere 5.0 to launch pad

'Next step forward in cloud infrastructure'

The virtual cat is out of the physical bag. Server virtualization juggernaut VMware has sent out invitations to a big shindig it is hosting in San Francisco on July 12, which is almost certainly going to be the launch of its ESXi 5.0 hypervisor and related vSphere 5.0 virtualization management stack and vCloud extensions.

All that VMware said in its announcement is that CEO Paul Maritz and CTO Steve Herrod would be on hand "to unveil the next major step forward in Cloud infrastructure". That's a capital cloud, as you can see, as in capital expenses for you and money in the bank for VMware.

The one big change that we know is coming with the vSphere 5.0 stack is that there will no longer be an ESX Server 5.0 hypervisor. The full ESX Server hypervisor includes the ESX Service Console, which was extracted from the ESX code base back in September 2007 to create the first embedded version of VMware's hypervisor, ESX Server 3i. By taking out the Service Console, VMware dropped the size of the hypervisor down from 2GB to 32MB.

This embedded version of the hypervisor was originally only intended to be sold on an OEM basis by server makers, who would park it on flash memory drives tucked up onto motherboards. But eventually, to compete with the freebie XenServer from Citrix Systems, VMware started giving ESXi away. The ESXi hypervisor has hooks out to external management mechanisms, which eventually evolved into the vSphere stack and the functions now in the vCenter management console, making the full-blown ESX Server unnecessary at this point. VMware will now be down to supporting only one server hypervisor instead of two.

Virtual rumors

Rumors about what else might be in the vSphere 5.0 stack surfaced after details were briefly posted on a Turkish forum, based on prebriefings given to VMware partners earlier this year. The post was quickly removed, but not before others saw it and the story leaked out.

According to these reports, the ESXi hypervisor is due to be fattened up, much as its KVM rival from Red Hat was last summer. The ESXi 5.0 hypervisor will reportedly be able to span up to 160 x64 processor cores (or 160 threads, if you have HyperThreading turned on in a system based on Xeon 7500 or E7 processors) and address up to 2TB of physical memory.

That hypervisor will then be able to carve up a single physical server into as many as 512 virtual machines, with a tiny bit of memory or CPU capacity allocated to those VMs if that is what customers want to do. A single VM running atop ESXi 5.0 will be able to grow as large as 32 cores (or virtual CPUs, if you are counting threads) and up to 1TB of virtual memory.

The existing ESXi 4.1 and ESX Server 4.1 hypervisors were able to span machines with up to 128 virtual CPUs (cores or threads, depending on whether you use HyperThreading) and 1TB of physical memory per host; the 4.1 hypervisors could support VMs with as many as eight virtual CPUs and 255GB (not 256GB) of virtual memory per VM, and could put up to 320 VMs on a single host.

Quadruple the cores

With Update 1 for ESXi and ESX Server 4.1, announced in February 2011, VMware has already boosted the CPU core count to 160, to match Intel's delivery of the ten-core "Westmere-EX" Xeon E7 processors on a 16-socket server, which no one has delivered. (And no, I am not counting Silicon Graphics' "UltraViolet" Altix UV 100 and 1000 series machines since they do not support ESX Server or ESXi as hypervisors; KVM is supported on these boxes, as you can see from the specs [pdf].) So the host is getting double the memory and the VMs are getting quadruple the cores and memory with ESXi 5.0; the number of VMs is boosted from 320 to 512, if the rumors are correct.
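For the record, here is a quick back-of-the-envelope tally of those scaling factors, using the 4.1 Update 1 limits and the rumored 5.0 figures cited above. This is just the arithmetic from the numbers in this story, not anything out of VMware's official configuration-maximums documentation.

# Rumored vSphere 5.0 limits versus the 4.1 Update 1 limits cited above
# (CPUs are logical CPUs, memory is in GB).
limits_41 = {"host_cpus": 160, "host_mem_gb": 1024, "vm_vcpus": 8, "vm_mem_gb": 255, "vms_per_host": 320}
limits_50 = {"host_cpus": 160, "host_mem_gb": 2048, "vm_vcpus": 32, "vm_mem_gb": 1024, "vms_per_host": 512}

for key in limits_41:
    old, new = limits_41[key], limits_50[key]
    print(f"{key:14s}: {old:5d} -> {new:5d}  ({new / old:.1f}x)")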

As part of the vSphere 5.0 stack, VMware is expected to move away from a Windows-based console and toward a Web-based console, which will finally make Linux shops stop complaining.

The new vSphere stack is also expected to include a feature called Storage Distributed Resource Scheduling, or Storage DRS, a companion to the DRS feature that has shipped with the past several ESX releases. With DRS, virtual machines are live migrated from server node to server node, and hypervisor to hypervisor, using the VMotion feature, based on policies set by administrators.

DRS can be used to consolidate workloads on the fewest possible machines, or conversely, to spread them out over more machines if there are performance bottlenecks. With Storage DRS, the underlying disk files that describe and encapsulate those VMs can be similarly migrated from one RAID disk group to another or across disk arrays to avoid storage bottlenecks or to conserve storage space, depending on the policies.
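To make the policy idea concrete, here is a minimal sketch of the sort of threshold-driven rebalancing DRS and Storage DRS automate. The host names and the 75 per cent trigger are invented for illustration; this is not VMware's actual placement algorithm, which weighs far more than a single utilization number.

# Minimal sketch of threshold-driven rebalancing in the spirit of DRS and Storage DRS.
# The names and the 75 per cent trigger are invented; this is not VMware's algorithm.
THRESHOLD = 0.75

hosts = {"esx01": 0.90, "esx02": 0.40, "esx03": 0.55}   # host (or datastore) -> utilization

def rebalance(resources, threshold=THRESHOLD):
    """Suggest a (source, target) migration for each resource over the threshold."""
    moves = []
    for src, load in sorted(resources.items(), key=lambda kv: kv[1], reverse=True):
        if load <= threshold:
            break
        # Shift load toward the least-loaded peer; for Storage DRS the "move"
        # would be a Storage VMotion of the VM's disk files instead.
        dst = min((name for name in resources if name != src), key=resources.get)
        moves.append((src, dst))
    return moves

print(rebalance(hosts))   # [('esx01', 'esx02')] -> candidates for VMotion / Storage VMotion

The policy inputs differ - spare CPU and memory on the compute side, performance bottlenecks and free space on the storage side - but the control loop is the same shape in outline.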

What happens when a VM starts live migrating at the same moment its underlying storage files start moving? Obviously, VMware will have thought of that one. ®
