The Register guide to Windows Server 2012: Hyper-V
In February we published The Register Guide to Windows Server 2012 as a free ebook. (You can get it here via our ebooks page.)
To date there have been about 8,000 downloads. To give you a flavour of the book, co-written by Trevor Pott and Liam Proven, we are publishing our third and final extract, about Hyper-V - but to read it all you need to get downloading.
WS2012 includes the third release of Hyper-V. After nearly a decade of playing catch-up, Microsoft has a full-featured virtualisation stack in the field. Hyper-V 3.0 is a mature, stable hypervisor, whose feature set and pricing structure ought to seriously worry the competition.
Hosts, clusters and VMs all support industry-leading high-end specs. WS2012 also ships with a swath of high-availability features that offer serious enhancement to the basic hypervisor. Hyper-V 3.0 raises the bar for the size and number of virtual machines that a hypervisor can support in multiple areas.
Hyper-V 3.0 can support up to 320 logical processors and 4TB of RAM, increased from 64 logical processors and 1TB of RAM in Windows Server 2008 R2. A Hyper-V 3.0 virtual machine can have up to 64 virtual processors and a terabyte of RAM, up from four virtual CPUs (VCPUs) and 64GB of RAM in the previous version.
Clustering is enhanced: Hyper-V clusters support 64 nodes and 8,000 VMs in a cluster, up from 16 nodes and 1,000 VMs previously. Hyper-V 3.0 has a new virtual hard disk (VHD) format called VHDX, which allows up to 64TB per virtual disk. You can still use the old format, though, and WS2012 can convert virtual drives between VHD and VHDX - and VMware's VMDK format, too.
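Converting between the two formats is a single cmdlet in the Hyper-V PowerShell module. A minimal sketch - the paths are examples, and the disk must not be attached to a running VM:

```powershell
# Convert an existing VHD to the newer VHDX format.
# Paths are placeholders; shut down any VM using the disk first.
Convert-VHD -Path 'D:\VMs\web01.vhd' -DestinationPath 'D:\VMs\web01.vhdx'
```

The same cmdlet converts in the other direction, should you need to move a disk back to an older host.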
An important low-level feature is that VHDX supports 4KB alignment for large-sector drives, which can give major performance gains. Most older drives use 512-byte sectors - which are still supported - but mismatched sector sizes cause poor performance, an issue with previous-generation hypervisors.
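When creating a new virtual disk, the sector size can be set explicitly so the VHDX lines up with the underlying large-sector drive. A sketch, assuming the Hyper-V module is available; the path and size are examples:

```powershell
# Create a dynamically expanding VHDX aligned for a 4KB-sector drive
New-VHD -Path 'D:\VMs\data01.vhdx' -SizeBytes 2TB -Dynamic `
    -PhysicalSectorSizeBytes 4096
```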
Microsoft’s Technet has been running on Hyper-V since the Windows Server 2008 beta-test period, taking a million hits per day. Later, Microsoft moved MSDN onto it, which takes three and a half million hits a day, and later still, Microsoft.com itself, which has now been running on Hyper-V for more than four years.
Replica is a new disaster recovery (DR) -oriented feature that allows VMs to be replicated in the background between shared-nothing hosts, even across WAN links. A built-in feature of WS2012, Replica does asynchronous replication of live VMs with no extra services, products or connectivity; it doesn't even need shared storage.
The primary intention is that you use it to copy your VMs to an off-site server - over any available link including the public internet - so that if some disaster befalls your server, you'll have a snapshot of it available that will normally be no more than 15 minutes old.
Failover can be automatic, manual or scripted. Replica is completely controllable from PowerShell 3 or within System Center 2012. For instance, you can script failover in a disaster situation and run it with a single click, then move it back again afterwards the same way.
The initial copy of the VM can be made online, over the network, or offline; in other words, you copy your VM onto an external drive and take that to the remote site. For security, the copy can be encrypted with BitLocker To Go.
That's not all it does; Hyper-V Replica also supports multiple snapshots, reversion to earlier versions of the VM, offsite snapshot storage and more. All you need is two WS2012 boxes and some form of link between them.
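Replica's PowerShell surface is straightforward. A sketch of the basic workflow - the VM and server names are examples, and both hosts need the Hyper-V Replica firewall rules enabled:

```powershell
# Enable replication of a VM to an off-site replica server
Enable-VMReplication -VMName 'web01' -ReplicaServerName 'dr-host.example.com' `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# Kick off the initial copy over the network
Start-VMInitialReplication -VMName 'web01'

# In a disaster, run this on the replica server to bring up the copy
Start-VMFailover -VMName 'web01'
```

Certificate-based authentication over HTTPS is the alternative to Kerberos when replicating across untrusted networks such as the internet.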
Shared-nothing live migration
WS2008R2 supported live migration of VMs between hosts, but only in a cluster environment where the VMs' VHDs were stored on Cluster Shared Volumes.
This feature is much improved in Hyper-V 3.0; now, VHDs can be stored on ordinary file shares, hosted on another Windows server, a NAS or whatever you choose. This allows live migration between multiple host servers connected to that file server without moving the virtual disks.
Similarly, you can keep a VM on the same host but live-migrate its storage from one place to another - say from one storage area network (SAN) to another, or between a SAN and local storage. All disk writes are duplicated both to the host and the destination while the VHD is being copied across, then on completion, requests are seamlessly switched to the new location.
The only requirement is a 1Gb network interconnect, and the feature is supported even on the freeware Hyper-V Server.
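Both kinds of migration are one-liners in PowerShell. A sketch, with example VM, host and path names:

```powershell
# Shared-nothing live migration: move a running VM, storage and all,
# to another host with no cluster and no shared storage
Move-VM -Name 'web01' -DestinationHost 'hyperv02' -IncludeStorage `
    -DestinationStoragePath 'D:\VMs\web01'

# Or keep the VM on this host and live-migrate only its storage
Move-VMStorage -VMName 'web01' -DestinationStoragePath 'E:\VMs\web01'
```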
Hyper-V Extensible Switch
All hypervisors have some kind of internal network to connect virtual network ports, belonging to the VM, with the real network that the host machine is attached to. For instance, in desktop hypervisors such as Microsoft VirtualPC, you can typically choose whether the VM gets its own IP address on your network, has a Network Address Translation (NAT) masqueraded connection, or just a private internal link between host and guest with no external network link at all.
The Hyper-V Extensible Switch allows third-party vendors to extend and enhance the basic Hyper-V switch with features not supported out of the box. Cisco, for example, expands this into a full Layer 2 managed switch, the Cisco Nexus 1000V. NEC has developed the UNIVERGE PF1000 to bring the competing OpenFlow standard to the Microsoft virtual ecosystem.
These extensions to Hyper-V’s core switch are software-only switches which run inside their own virtual appliances, talking to the application programming interfaces (APIs) within Hyper-V. Using the Nexus 1000V as an example, you can manage the virtual switch with external Cisco management tools as if it were a hardware device.
In nearly all cases, the same vendor's switch extension can also be used on VMware vSphere should you wish, allowing you to maintain data centres with a heterogeneous hypervisor environment while easing the network management burden.
The end result is that you can do things like monitoring, traffic management, adaptor teaming and so on, all from your standard network management software.
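Switch extensions are managed like any other Hyper-V object from PowerShell. A sketch - the switch name and extension display name are examples, and the exact name depends on what the vendor installs:

```powershell
# List the extensions installed on a virtual switch
Get-VMSwitchExtension -VMSwitchName 'External'

# Enable a third-party forwarding extension by its display name
Enable-VMSwitchExtension -VMSwitchName 'External' -Name 'Cisco Nexus 1000V'
```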
How network virtualisation works is complex, but what it does is easy to understand: it enables you to virtualise VMs' IP addresses. In other words, these become independent of the network that the host server is attached to - you can move VMs around your network, even onto different networks, and their IP addresses go with them.
So, for example, IP addresses can be reassigned on the fly from a VM on one machine to another on a different host, regardless of what subnet they're on. Both network-internal IP addresses and live, publicly visible ones can be virtualised and redirected. WS2012 ensures that addresses are rewritten or encapsulated as necessary so that the right traffic goes to the right host.
It is vendor-independent. Single or multiple internal or external IP addresses can be added to multiple hosts on multiple physical machines.
Some Microsoft customers have many virtual local area networks (VLANs) - hundreds of them on a single network, in some cases even thousands. Virtualising the network into a virtual managed switch inside the hypervisor brings powerful new facilities and enables dramatic simplifications of host networking, letting you get rid of multiple DHCP ranges and VLANs.
Hyper-V 3.0's virtual desktop infrastructure support has been considerably enhanced, with several bits of new functionality aimed specifically at providing better, richer virtualised desktops and terminal server sessions.
VDI images can come in a few different flavours. Dedicated 1:1 personal virtual machines are the easiest to understand and deploy, but are the most resource-intensive and have the highest overhead. VDI pools can spawn multiple child VMs from a single master image using either a stateless model, or with a fully stateful user experience via the new User Profile Disk.
RemoteFX is now supported over the Wide Area Network (WAN), and incorporates both USB redirection and multitouch interface support; RemoteFX’s multitouch support includes up to 10 contact points for those who really want to get their fondle on. Hyper-V’s vGPUs have evolved. They now support DirectX 11 and will be available as “soft vGPUs” even in the absence of a physical GPU. If physical GPUs are present, Hyper-V will use them for vGPU offload.
The vGPUs are impressive; RemoteFX does a fantastic job of delivering multimedia across a WAN and certainly has no issues in a LAN environment. The only limitation - and it is a big one - is that OpenGL 2.0 and above are not accelerated. For that, we must still turn to Citrix’s HDX or Nvidia’s VGX (via VMware ESX). Running OpenGL users on hypervisors other than Hyper-V shouldn’t be a problem; System Center 2012 incorporates heterogeneous management to help you run a mixed environment efficiently.
Looking back to the 2008 R2 virtualisation stack, the contrast between that era’s VDI offerings and 2012’s is stark. In WS2012, VDI is a first-class citizen. The management tools are VDI aware, and technologies like RAM deduplication - which is of limited benefit for server virtualisation scenarios, but changes everything in VDI - were introduced.
If you are unsure which variant of Microsoft’s VDI offerings is right for you, there are wizards to help you decide between App-V, RDSH or virtualised Windows client instances. RDSH now includes Fair Share, a CPU-scheduling technology that prevents a single user instance from consuming all the resources on a shared server. Overall, Microsoft’s 2012 VDI is easier to deploy and manage centrally. Fewer steps are required, there’s no bizarre dependence on IPv6, and everything - client or server - is entirely addressable via PowerShell 3.
Active Directory now understands virtualisation and domain controllers have direct support for running inside VMs. The idea is that now you should have all your domain controllers virtualised as standard. WS2012’s AD is fully VM aware; you can now clone and roll back domain controllers (DCs) as required.
The key new functionality is the "VM Generation ID attribute" (GenID). Let’s say, for example, you create a new virtual domain controller (VDC) and set its name and IP address. Once you clone that VM and reboot it, the copy looks at its GenID, determines if it is a clone or a rollback and automatically configures itself appropriately. The cloning of virtual machines we’ve been warned against for the past decade or so - but practiced so liberally regardless - is now accepted, institutionalised and baked directly into the OS.