Citrix stretches XenServer 6.0 to cover bigger iron
Chubbier VMs for heftier apps
Citrix Systems doesn't make a lot of noise about server virtualization these days, now that the two founders of the Xen project have left to start Bromium. But the company, and the open source Xen project that it sponsors, continues to hammer out code to make Xen a credible alternative to VMware's ESXi, Microsoft's Hyper-V, and Red Hat's KVM.
On Friday, Citrix announced XenServer 6.0, its commercial-grade server virtualization hypervisor. Code-named "Boston" and based on the latest open source Xen 4.1 hypervisor, it went into beta in July. The Xen hypervisor itself is at the 4.1.1 release, and over the past 11 months 102 people from 25 organizations have made more than 400 commits to the Xen subsystem and driver stack.
As you can see from the Xen 4.1.1 release notes, one of the powerful new features in the hypervisor is the ability to create CPU pools into which VMs are placed, rather than pinning a particular VM to a specific CPU. Each pool also gets its own scheduler.
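For readers who want to see what that looks like in practice, a CPU pool in open source Xen is described by a small definition file and managed with the toolstack's cpupool commands. The pool name, scheduler choice, and CPU list below are illustrative assumptions, not anything from Citrix:

```
# hypothetical cpupool definition file, e.g. dbpool.cfg
name  = "dbpool"                  # made-up pool name
sched = "credit"                  # each pool runs its own scheduler instance
cpus  = ["4", "5", "6", "7"]      # physical CPUs dedicated to this pool
```

With the xl toolstack, `xl cpupool-create dbpool.cfg` brings the pool up and `xl cpupool-migrate <guest> dbpool` moves a running VM into it, after which that VM is scheduled only on the pool's CPUs, by the pool's own scheduler.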
The Xen 4.1.1 hypervisor can support machines with more than 255 CPUs as well as 1GB superpages (available with Xeon "Westmere" class processors), and it also allows applications to make use of the Advanced Vector Extension (AVX) floating point instructions in Xeon processors.
It also supports pass-through for discrete GPUs and GPU co-processors, which means a GPU can be de-virtualized and dedicated to a particular VM on a server, much as network cards can be pinned directly to a VM for performance reasons. This GPU pass-through support will be important for delivering virtual CAD, and perhaps even remote gaming, through virtual desktops.
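In open source Xen terms, that de-virtualization amounts to a one-line entry in the guest's domain config, handing the whole PCI device to a single VM. The PCI address here is a made-up placeholder for illustration:

```
# hypothetical domU config snippet: pass the GPU at this (made-up)
# PCI address through to the guest, hiding it from dom0 and other VMs
pci = ["0000:03:00.0"]
```

XenServer wraps this plumbing in its own tooling, but the underlying mechanism is the same PCI pass-through already used to pin network cards to VMs.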
XenServer 6.0 makes Xen 4.1 ready for biz
The raw hypervisor is not particularly useful all by its lonesome. To be consumed by corporations, it needs to be wrapped with lots of features and tech support. The commercial-grade XenServer 6.0, as you can see from its release notes, now supports hosts with as many as 64 logical CPUs.
That's either 64 cores in a machine that doesn't have HyperThreading (or doesn't turn it on), or 32 cores with HyperThreading turned on, plus 1TB of main memory and 16 physical Ethernet network interface cards.
The prior XenServer 5.6 hypervisor supported up to 64 logical CPUs and 16 NICs as well, but could only address 256GB of physical memory. Within the hypervisor, a XenServer 6.0 guest can now address 16 virtual CPUs and 128GB of virtual memory, which is double what XenServer 5.6 could do. Your mileage on that virtual memory may vary by guest OS.
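To make those ceilings concrete, here is a small, purely illustrative Python helper that checks a host spec against the XenServer 6.0 limits quoted above (64 logical CPUs, 1TB of RAM, 16 NICs); the function name and structure are ours, not Citrix's:

```python
# Illustrative only: encode the XenServer 6.0 host limits quoted in the
# article and check a proposed host configuration against them.
XS60_LIMITS = {
    "logical_cpus": 64,   # 64 cores, or 32 cores with HyperThreading
    "memory_gb": 1024,    # 1TB of physical memory
    "nics": 16,           # physical Ethernet NICs
}

def host_fits(cores, threads_per_core, memory_gb, nics, limits=XS60_LIMITS):
    """Return True if the host stays within the hypervisor's ceilings."""
    logical_cpus = cores * threads_per_core
    return (logical_cpus <= limits["logical_cpus"]
            and memory_gb <= limits["memory_gb"]
            and nics <= limits["nics"])

# A 32-core box with HyperThreading on hits exactly 64 logical CPUs:
print(host_fits(cores=32, threads_per_core=2, memory_gb=512, nics=8))   # True
# A 48-core box with HyperThreading lands at 96 logical CPUs, too many:
print(host_fits(cores=48, threads_per_core=2, memory_gb=512, nics=8))   # False
```

The same check with KVM's or ESXi's numbers plugged into the limits table shows why XenServer buyers might be a bit envious.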
This is neither the fattest hypervisor nor the beefiest guest VM out there in x86 server land, but any enbiggening is always appreciated by server buyers, and XenServer customers will be grateful, if somewhat envious of KVM and ESXi.
The KVM hypervisor embedded in Red Hat Enterprise Linux 6.1 can span 128 cores (or 256 threads if HyperThreading is on) and up to 2TB of physical memory, while KVM guest VMs can span 64 virtual CPUs and up to 2TB of memory.
This embedded hypervisor is now in beta as the freestanding Red Hat Enterprise Virtualization 3.0 hypervisor and is expected to ship later this year.
VMware's ESXi 5.0 hypervisor, launched in July, can span 160 cores and up to 2TB of physical memory, and an ESXi 5.0 guest can consume as many as 32 virtual CPUs and 1TB of virtual memory. Well, provided the physical server has as much or more in physical resources to back it up.
Open vSwitch, the open source virtual switch that was added as an option with XenServer 5.6 Feature Pack 1, is now the default switch in XenServer 6.0.
These virtual switches can be ganged up across hypervisors to create a distributed virtual switch akin to the one VMware created for its ESXi hypervisor. See also the Nexus 1000V, made by Cisco for its "California" Unified Computing System as a replacement for VMware's vSwitch, which speaks the NX-OS lingo that Cisco network admins like.
Citrix is all fired up about Open vSwitch because it supports the evolving OpenFlow protocol for software-defined, programmable network devices.
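As a rough sketch of why that matters: an Open vSwitch bridge can be pointed at an external OpenFlow controller with a couple of ovs-vsctl configuration commands, at which point the bridge's forwarding behavior is programmed from outside the hypervisor. The bridge name and controller address below are invented for illustration:

```shell
# Illustrative only; assumes Open vSwitch's ovs-vsctl tool and a
# made-up OpenFlow controller at 192.0.2.10.

# Create a bridge for guest traffic (name is hypothetical):
ovs-vsctl add-br xenbr0

# Hand control of that bridge's flow tables to the OpenFlow controller:
ovs-vsctl set-controller xenbr0 tcp:192.0.2.10:6633
```

That controller, not the individual host, then decides how packets flow, which is the "software-defined" part of the pitch.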
More important for customers just beginning with virtualization is the fact that XenServer features no longer require a Windows server in the loop.
Citrix has integrated its StorageLink feature, which allows the hypervisor and its VM to make use of the data replication, de-duplication, snapshot, and cloning features built into storage arrays.
Until now, because this feature was developed in partnership with Microsoft, StorageLink Gateway had to run with Windows in a XenServer partition. Similarly, StorageLink Gateway Site Recovery, which also had to run in a Windows VM, is being pulled back into the XenServer 6.0 hypervisor, and will now work with just about any iSCSI array, or any disk array with a host bus adapter on the server linked to the hypervisor.
The previous StorageLink Gateway will be supported on XenServer 5.X until September 2013, and Citrix warns that the integrated StorageLink features in the hypervisor will be focused on arrays that are popular with XenServer customers, shouting out EMC Clariion, Dell EqualLogic, and NetApp arrays by name.
Thanks to its partnership with Microsoft, Citrix has for some time had plug-ins for its XenCenter management console that allow it to manage Hyper-V hypervisors as well as the VMs running on them.
Now, Citrix is working with Microsoft to go the other way: the future System Center Virtual Machine Manager 2012, which is in beta now, can be used to provision XenServer 6.0 hypervisors and the VMs that run on them.
This XenServer plug-in will not work with XenServer 5.1, 5.5, or 5.6. Microsoft and Citrix also allow for the integration of XenServer 6.0 with System Center Operations Manager for hypervisor and VM monitoring and troubleshooting.
On the guest operating system front, XenServer 6.0 supports Red Hat Enterprise Linux 6.0, Canonical Ubuntu 10.04, and Debian Squeeze. RHEL 5.6, Oracle Enterprise Linux 5.6 and 6.0, CentOS 5.6, and SUSE Linux Enterprise Server 10 SP4 had some tweaks to fix issues. XenServer 6.0 has experimental support for Ubuntu 10.10, CentOS 6.0, and Solaris 10.
Pricing on XenServer remains the same. There is an open source XenServer that is free. Advanced Edition includes the XenCenter management console, XenConvert P2V tools, XenMotion live migration, the distributed virtual switch, high availability, and memory optimization; it costs $1,000 per server.
The Enterprise Edition adds the integrated StorageLink, dynamic workload balancing, live memory snapshots, and role-based administration; it costs $2,500 per server. The full-on Platinum Edition has a lab manager VM jukebox tossed in, plus provisioning services for physical servers and StorageLink Site Recovery disaster recovery; it costs $5,000 per server.
The XenServer roadmap ahead
The Self-Service Manager feature, which was available during the XenServer 6.0 beta through the late summer, didn't make the cut for production as Citrix contemplates how to weave in its Cloud.com acquisition from July. Cloud.com's CloudStack and Cloud Portal are the future of the Xen stack.
In a blog post by Bill Carovano, senior director of product management at Citrix, the company also announced a tentative schedule for future XenServer releases and added that it would be dropping the Service Pack and Feature Pack naming conventions for dot releases and sub-releases.
The current XenServer 6.0 is what Citrix calls a major release, and XenServer 6.0.1 is earmarked for an upcoming version of the NetScaler SDX, which is the multi-tenant virtual network acceleration appliance aimed at service providers and cloud operators. XenServer 6.0.2 is slated for the first quarter of 2012 as a maintenance and hotfix rollup, while XenServer 6.1, due in the middle of 2012, will be a new feature release.
XenServer 6.1.1 will ship with a future version of XenDesktop, with specific features for XenDesktop virtual desktop infrastructure and XenApp application virtualization. And finally, XenServer 6.5 will be the next major release, which will be built on the next iteration of the open source Xen hypervisor. ®