
Red Hat turns the crank of KVM enterprise virt

Tuning up for Cisco's blades

Cloud infrastructure wannabe and Linux juggernaut Red Hat has announced the next rev of its Enterprise Virtualization commercial-grade KVM hypervisor, which it has qualified to scale further and which can now host desktop images as well as server images.

At the Red Hat Summit today in Boston, the company also previewed some of the virtualization features coming with the impending Enterprise Linux 6 release, including network virtualization and management work that its engineers have done in conjunction with server wannabe Cisco Systems' "California" Unified Computing System blade and converged networking systems.

First, the 2.2 update to the standalone Red Hat Enterprise Virtualization, or RHEV, hypervisor for x64 servers. As El Reg told you when RHEV 2.2 went into beta at the end of March, the number of virtual CPUs supported by a single virtual machine running atop RHEV has doubled to 16, and the virtual memory addressable by a VM has quadrupled to 256 GB, up from the limits in the RHEV 2.1 release.

At the time, Red Hat admitted that RHEV could scale memory as far as 1 TB per virtual machine, but its rigorous testing regime meant it would only certify 64 GB for RHEV 2.1 and now 256 GB for RHEV 2.2, announced today. The current KVM, and therefore RHEV, is based on the RHEL 5.5 release, and RHEV will get some very serious expansion on nearly every dimension when it moves to the RHEL 6 kernel later this year.
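To put those ceilings in concrete terms, here is a minimal sketch of a guest definition pushed through the libvirt Python bindings that underpin KVM management. The guest name, disk path, and connection URI are illustrative assumptions, not anything Red Hat ships:

    import libvirt

    # Domain XML sketch for a KVM guest at RHEV 2.2's new ceilings:
    # 16 vCPUs and 256 GB of guest memory (libvirt counts memory in KiB).
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>big-guest</name>
      <memory>268435456</memory>  <!-- 256 GB expressed in KiB -->
      <vcpu>16</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/libvirt/images/big-guest.img'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open('qemu:///system')  # connect to the local KVM host
    dom = conn.defineXML(DOMAIN_XML)       # register the guest definition
    dom.create()                           # and boot it
    conn.close()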

According to Navin Thadani, senior director of the virtualization business at Red Hat, in its first six months in the commercial server hypervisor racket, RHEV has been able to span 96 cores and 1 TB of main memory on host systems and support 16 vCPUs and 256 GB of virtual memory per VM. The management tools that come with RHEV (libvirt and other things in the stack) can cope with as many as 500 VMs per server host on machines with lots of memory and physical cores, and can manage up to 200 hosts in a cluster pool of server resources.
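For a sense of what that management plumbing exposes, the same Python bindings can report a host's physical capacity and its guest count. Again a hedged sketch, with the connection URI as an assumption:

    import libvirt

    conn = libvirt.open('qemu:///system')

    # getInfo() reports: CPU model, memory in MB, active logical CPUs,
    # clock in MHz, NUMA nodes, sockets, cores per socket, threads per core.
    model, mem_mb, cpus, mhz, nodes, sockets, cores, threads = conn.getInfo()
    print("Host spans %d logical CPUs and %d GB of memory" % (cpus, mem_mb // 1024))

    # Tally the guests this one host is carrying, running or merely defined.
    print("Guests: %d running, %d defined but idle"
          % (conn.numOfDomains(), conn.numOfDefinedDomains()))

    conn.close()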

Thadani compared this with the touchstone in server virtualization, VMware's vSphere 4.0 stack and its ESX Server 4.0 hypervisor. ESX Server 4.0 can only span 64 cores on a host and matches RHEV 2.2's 1 TB of addressable memory for the hypervisor. Guest partitions on ESX Server 4.0 can only have 8 vCPUs at the moment and address a maximum of 256 GB of virtual memory. ESX Server can deal with 320 VMs per host and the vSphere management stack can only manage 32 hosts in a cluster as a single management domain. Red Hat has the advantage on most metrics.

Flash forward to RHEV 2.3 and its RHEL 6.X kernels, due over the next six months. By this time next year, Thadani says, the goal is to support 4,096 cores and up to 64 TB of memory on a single host. Red Hat doesn't actually expect anyone to build such a machine, of course, at least not for a couple of years. But the benefit of being part of the Linux kernel, as the KVM hypervisor is, is that the hypervisor can take advantage of any scalability inherent in the kernel.

With VMware's ESX Server, which is its own animal, and the open source Xen hypervisor, which is a bolt-on for Linux, scalability has to be coded in purposefully. VMware's and Citrix Systems' techies are just as bright as those working on KVM, so they can keep pace, in theory. But not without redoing a lot of work that is already being done in the Linux community for the sake of Linux itself.

That future RHEV 2.3 hypervisor will eventually have a KVM that supports a maximum of 64 vCPUs for each virtual machine riding atop the hypervisor, and the ability to address up to 8 TB of virtual memory in each VM. Red Hat is aiming to be able to cram more than 2,000 VMs on a single x64 host and link together as many as 200 hosts into a single cluster of pooled resources. That works out to over 400,000 virtual machines in a cloud.

In the meantime, Red Hat software enthusiasts are going to have to get by with RHEV 2.2, which now has a few more features to make it worth getting started now rather than waiting for the RHEL 6/RHEV 2.3 updates. RHEV 2.2 includes an import/export tool for VMs stored in the Open Virtualization Format (OVF) as well as a V2V converter that transforms ESX Server and Xen (inside of RHEL 5.X) images into KVM images running inside RHEL 5.5 or RHEL 6 or in the standalone RHEV product.
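The plumbing here is presumably Red Hat's virt-v2v tool, and an invocation would look roughly like the sketch below. The host names, storage path, guest name, and even the exact flag set are assumptions that vary by version, so treat this as illustrative rather than gospel:

    import subprocess

    # Hedged sketch of a V2V conversion: pull a guest off an ESX Server
    # host and land it on a RHEV export storage domain. All names and
    # paths are examples; flags vary by virt-v2v version, so check the
    # manpage before copying any of this.
    subprocess.check_call([
        "virt-v2v",
        "-ic", "esx://esx.example.com/",   # source: an ESX Server host
        "-o", "rhev",                      # target: a RHEV data center
        "-os", "nfs.example.com:/export",  # RHEV export storage domain
        "legacy-esx-guest",                # the VM to convert
    ])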

Back in March, when RHEV 2.2 went into beta, the company said that Red Hat would eventually add the ability to convert Windows XP, Windows Server 2003, and Windows Server 2008 images from ESX Server or Xen to KVM formats. This smells like a RHEV 2.3 feature, but Red Hat didn't say.

The RHEV 2.2 update also includes Spice, the desktop virtualization protocol that Qumranet and then Red Hat have been perfecting for years. Spice creates virtual desktops that can be streamed out of centralized servers to thin clients or modest PCs while still giving users a local and peppy experience, thanks to the remote rendering technology inside the protocol. (This is one of the reasons why Red Hat paid $205m to buy Qumranet.) The desktop virtualization pieces, which were originally destined for a separate product but were merged into a single RHEV several months ago, also gain a connection broker and desktop pooling, features that were missing until now.
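In libvirt terms, pointing a guest's console at Spice rather than VNC comes down to one element in the domain definition. The fragment below is an illustrative assumption about the shape of that config, not something lifted from RHEV's internals:

    # Illustrative fragment: a guest whose console is served over Spice
    # carries a <graphics> element like this in its libvirt definition,
    # in place of the usual VNC one. The listen address and autoport
    # settings are examples; a Spice-capable client connects to the
    # port the host advertises.
    SPICE_GRAPHICS = "<graphics type='spice' autoport='yes' listen='0.0.0.0'/>"
    VNC_GRAPHICS   = "<graphics type='vnc' autoport='yes'/>"  # the alternative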

While Cisco didn't make a big deal about it at the time, what with VMware being such an important partner on its California blade servers that launched back in March 2009, Red Hat is also a strategic partner on the UCS boxes, both as an operating system running atop the VMware and Hyper-V hypervisors and as the supplier of the RHEV version of the KVM hypervisor. RHEL has been shipping on UCS machines since July 2009, when the blades first started coming out of the factories at Cisco, and KVM has been supported since it was delivered earlier this year.

Today, Cisco and Red Hat said that they have been working to tightly couple RHEV with Cisco's Virtual Network Link, or VN-Link, a kind of hypervisor abstraction layer for networks that hooks into the UCS M81KR virtual network interface card and allows the UCS switches and management servers to peer into the network traffic coming into and out of virtual machines.

By integrating VN-Link with the KVM hypervisor and the libvirt management tools inside of RHEV, and leveraging the VT-d I/O virtualization features in the latest Xeon chips (which are inside the UCS blade and now rack servers), administrators working from inside RHEV can set policies for VMs governing their network access and quality of service, and the Cisco iron will enforce them. All the code that lets KVM support VN-Link is open source, so other Linuxes can follow suit and likely will.
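What that looks like from the admin's chair is a guest NIC bound to a port profile that the UCS fabric enforces. The sketch below uses libvirt's 802.1Qbh virtual port syntax; the device name, profile id, and guest name are all illustrative assumptions:

    import libvirt

    # Hedged sketch: attach a VN-Link style NIC to a running guest. The
    # port profile ("web-tier") is defined on the Cisco side; libvirt just
    # names it, and the UCS fabric applies the access and QoS policy.
    VNLINK_NIC = """
    <interface type='direct'>
      <source dev='eth0' mode='private'/>
      <virtualport type='802.1Qbh'>
        <parameters profileid='web-tier'/>
      </virtualport>
      <model type='virtio'/>
    </interface>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('big-guest')  # hypothetical guest from earlier
    dom.attachDevice(VNLINK_NIC)          # policy enforcement moves to the fabric
    conn.close()

®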
