Original URL: http://www.theregister.co.uk/2011/11/02/desktop_virtualisation/

Virtualisation turns PCs into personal clouds

Desktops break free

By Timothy Prickett Morgan

Posted in Enterprise Tech, 2nd November 2011 16:00 GMT

If IT managers had had their way decades ago, we would never have been allowed our own personal computers.

The whole idea of giving end-users their own computing resources runs counter to the philosophy that data processing is a centralised function best left to professionals. PCs took off, and dragged IT departments into the client/server revolution, precisely because of this high-priest attitude.

But ultimately, the IT department gets the support calls when end-users adopt new technologies with or without its approval. That means the IT department is often a step behind, trying to rein in rogue users while at the same time adopting some of the new technologies they are deploying behind its back.

The data centre as we know it is being shredded by virtualisation and the adoption of cloud-style computing technologies. And it was inevitable that, with the proliferation of online applications and the smartphones and tablets favoured by many end-users for specific kinds of computing, the desktop as we know it would eventually be ripped to shreds too.

It is hard to say where desktop virtualisation started, but a tip of the hat is due to ClearCube. Established in 1997, the firm came up with the idea of a Pentium-based blade server that could be installed in racks in the data centre and stream a Windows PC image down a wire to a special box into which you could plug a keyboard, mouse and monitor.

The ClearCube system could provision a blade with personalised settings for each user, and all of the patching and management for the Windows instances was done back in the data centre.

This initial ClearCube product provided half of the idea for what came to be called virtual desktop infrastructure, or VDI. The other half would come from server virtualisation juggernaut VMware.

(Interestingly, ClearCube now sells server-based VDI solutions in partnership with VMware in addition to its PC and workstation blades and VDI software, which is called Sentral.)

VMware started its business virtualising PCs with its Workstation hypervisor, and it continues to have a keen interest in what happens on the other side of the data centre wires – you can't really say desktop any more with so many people working from laptops and other mobile devices, even within the confines of the office.

VMware was founded in 1998 and in 1999 came out with Workstation, a hosted or type 2 hypervisor that allowed a Windows or Linux PC to host several incompatible operating systems as guests on top of the host operating system.

A few years later VMware was selling GSX Server and ESX Server hypervisors to carve up a server into virtual slices for running multiple server images side by side.

It wasn't long before the idea of a virtualised blade PC hosted in the data centre was merged with server virtualisation to create what VMware called VDI, and the rest of the industry followed suit. VMware's VDI stack has matured considerably and is now called View.

The other big player in this space is Citrix Systems, with its XenDesktop stack and its history of serving up centrally hosted applications through Presentation Server (now known as XenApp).

VMware bought itself into application streaming with ThinApp. Microsoft bought Softricity for its App-V application streaming, and has its own Hyper-V hypervisor and its Microsoft Enterprise Desktop Virtualisation (MED-V) VDI broker.

And there are many variations on the theme, and point products, available from Kaviza (now part of Citrix), MokaFive, Pano Logic, Parallels, Cisco Systems, Virtual Computer, Red Hat, Liquidware Labs, RES Software and Wanova.

Stormy weather

VDI was a bit choppy at first. End-users complained about the poor quality of the video and audio streamed to them from the data centre, and the centralised PCs were subject to boot storms and patch storms: everybody tried to get a virtual PC at the same time every morning, and the systems tended to patch all the machines at once.
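To see why that 9am spike hurts, a little illustrative arithmetic helps. The user count and per-desktop I/O figures in this Python sketch are assumptions made up for the example, not measured numbers.

    # Illustrative arithmetic for a "boot storm": if every user logs in
    # at 9am, the shared storage must absorb all boot I/O at once.
    # All figures below are assumptions for the sketch.
    users          = 1000
    boot_iops_each = 50    # assumed I/O load of one Windows boot
    steady_iops    = 10    # assumed load of an idle, booted desktop

    storm_peak   = users * boot_iops_each   # everyone boots together
    steady_state = users * steady_iops
    print(f"peak {storm_peak} IOPS vs steady {steady_state} IOPS "
          f"({storm_peak / steady_state:.0f}x spike)")
    # -> peak 50000 IOPS vs steady 10000 IOPS (5x spike)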

Moreover, VDI required the live migration of PC images from machine to machine, along with other server-hypervisor features that depend on SAN storage. That made it complex and expensive, and in most cases unsuitable for small and medium businesses.

The problem is that VDI, in the strictest sense of sending hosted virtual desktop images down to a client device from the data centre, is far from suitable in all cases. And that is why a number of niche players have sprung up to offer their own twists on the VDI theme.

Meanwhile Citrix and VMware have begun to talk about merging virtual desktops and cloud applications into a kind of workspace that can be streamed down to a laptop, desktop, smartphone or tablet, regardless of what kind of computer the application needs to run on.

The new mantra is desktop virtualisation, not just VDI. The idea is to give end-users access to their data and applications anywhere and on any device they choose, while bringing centralised control to an increasingly diverse set of virtual machines and online applications.

It is a tall order, but it is the one that end-users are demanding. And if IT vendors don't support it and IT departments don't buy it, end-users are perfectly happy to throw together a hodge-podge of cloud-based apps using their own devices. They won’t wait for the IT department.

VMware and Citrix in particular have been trying to get out in front instead of leading from behind, while Intel, which has a stake in both the server and desktop arenas, is trying to shape how this "personal cloud", as Citrix is calling it, evolves.

IDV is not just VDI backwards

Intel, which supplies the vast majority of server and desktop microprocessors in the world, wants its Atom chips to challenge ARM processors in smartphones and tablets too. The vendor wants to own the end-user device and the in-house or cloudy server infrastructure that serves up apps to smartphones, tablets and PCs.

While Intel has certainly been eager to sell servers to host VDI setups, the three architectural principles of the Intelligent Desktop Virtualisation (IDV) architecture it is espousing run a little counter to the conventional VDI approach.

"You don't even have to get into exotic devices to see that something has to change," says Dinesh Rao, director of Intel's independent software vendor program.

"You have to understand what computer model works when, and you have to accept the fact that no single computer model will answer all of your issues.

“We are in a period of change. We are moving toward some stable end state but we haven't achieved that yet. So we have tried to outline the characteristics of that future state."

Intel's IDV scheme concedes that users don't want to be locked down to any one particular kind of virtualisation for operating systems and applications.

IDV has to span everything from traditional terminal services to shared virtual desktops, classic VDI, OS streaming, app streaming, type 2 (hosted) hypervisor containers on the client, all the way to putting a type 1 (bare-metal) hypervisor on the client.

The key difference with IDV is that Intel wants management to be centralised and computing to be executed locally. It reasons that the end-user experience will be best if the desktop virtualisation takes advantage of whatever computing, graphics and I/O each device has – and the device will change as end-users move from home to office and back again.

"Local execution should not be done as an accident but by design," says Rao.

This is not something any of the desktop virtualisation vendors have fully figured out yet. The "intelligent" part of local execution means that whatever setup a company has should be able to check what resources are available on the device and, for anything the device lacks, fall back to running the application back in the data centre.

So whatever you are using to control your access to data and applications should be smart enough to check the kind of device you have and stream whatever experience makes most sense.
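As a thought experiment, a minimal Python sketch of that kind of capability-aware brokering might look like the following. The endpoint fields, thresholds and execution-model names are all illustrative assumptions, not Intel's or any vendor's actual API.

    # Hypothetical sketch of IDV-style brokering: inspect the endpoint's
    # reported capabilities and pick the execution model that fits.
    from dataclasses import dataclass

    @dataclass
    class Endpoint:
        cpu_cores: int                # local compute on the device
        ram_gb: int                   # local memory
        has_client_hypervisor: bool   # type 1 or type 2 on the client
        supports_offline: bool        # can the user work disconnected?

    def choose_execution_model(ep: Endpoint) -> str:
        """Return where the desktop should execute for this endpoint."""
        # Capable device with a client hypervisor: run the image locally
        # and let the central server manage and sync the golden image.
        if ep.has_client_hypervisor and ep.cpu_cores >= 2 and ep.ram_gb >= 4:
            return "local-execution"   # IDV's preferred case
        # Modest device that stays connected: classic hosted VDI,
        # executing in the data centre and remoting the display.
        if not ep.supports_offline:
            return "hosted-vdi"
        # Otherwise stream the OS and apps down, running what we can locally.
        return "os-streaming"

    # Example: a typical corporate laptop gets local execution.
    laptop = Endpoint(cpu_cores=4, ram_gb=8,
                      has_client_hypervisor=True, supports_offline=True)
    print(choose_execution_model(laptop))   # -> local-execution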


The local execution is something that makes the most sense economically, of course. This is something that MokaFive and Virtual Computer have been banging the drum about for the past several years. MokaFive Suite 3.0, announced back in May, has a central management server that deploys a Windows PC image atop a modified VMware Player client hypervisor (type 2) or its own BareMetal hypervisor (type 1).

Virtual Computer has created its own client-side hypervisor, called NxTop, that does much the same thing, with the central management server creating and storing PC images and beaming them down to a hypervisor running on a PC client device instead of on those central servers.

Ask yourself this: is it cheaper to compute on a $500 PC or on a slice of the combination of a server, network switches and SANs sitting back in the data centre? We all know the answer.
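To put rough numbers on it, here is a back-of-the-envelope sketch. The $500 PC comes from the question above; every data-centre figure is an illustrative assumption, and real hardware prices and consolidation ratios vary widely.

    # Back-of-the-envelope per-seat cost comparison. Only the $500 PC
    # comes from the article; the rest are assumed figures.
    pc_cost = 500                 # commodity desktop, compute included

    server_cost   = 12_000        # virtualisation host (assumed)
    san_share     = 8_000         # slice of SAN and switches (assumed)
    users_per_box = 25            # assumed VDI consolidation ratio

    vdi_seat = (server_cost + san_share) / users_per_box
    print(f"VDI seat hardware: ${vdi_seat:.0f} vs local PC: ${pc_cost}")
    # -> VDI seat hardware: $800 vs local PC: $500
    # And the PC's CPU is already paid for, sitting idle on the desk.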

So when you can, you execute on the local device, not just because the experience is better but also because it costs less. For many users the endpoint device will change, and yet they still want access to their data and apps. And that means you need to manage across desktop virtualisation tools as well as within them.

The layered look

The second goal of the IDV architecture, called layered images, is also tough to achieve.

"If we want to access our desktops from any device anywhere, the fundamental thing to do is split out the operating system from the application, user data and user settings," says Rao.

"What we are really after with layering is dynamically assembled desktops. Whenever people talk about VDI, they always show this layer cake, with the OS, apps and user data separate.

“But in practice, nothing is quite that separated. If you install a Windows application it updates the registry, and the moment you do that my version of Windows is different from yours. Microsoft has ways of doing this, with folder redirects and profiles, but these techniques need to be used consistently."

Layering in desktop virtualisation saves memory and storage back in the data centre because if your OS and application layers are the same as mine, one copy can be served to both of us. So the virtual desktop density on each server can be higher.

Layering also means asking if the endpoint can run the image, sending it down the wire and then keeping it in sync with bi-directional synchronisation and de-duplication. (If you send a layered desktop image down to the endpoint and execute it locally, it means you can work offline – something that VDI does not do very well.)
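As a rough illustration of the idea, the Python sketch below assembles desktops from shared, content-addressed layers, so identical OS and application layers are stored only once however many users share them. All names and the layer format are hypothetical; real products layer at the block or file-system level, not as byte strings.

    # Minimal sketch of layered desktop assembly with de-duplication:
    # golden OS and app layers are stored once and shared, while each
    # user keeps only a small personal layer.
    import hashlib

    store = {}   # layer_id -> layer content (the de-duplicated store)

    def publish(content: bytes) -> str:
        # Content-addressing: identical layers hash to the same ID,
        # so the store keeps one copy no matter how many users share it.
        lid = hashlib.sha256(content).hexdigest()[:12]
        store.setdefault(lid, content)   # no-op if it already exists
        return lid

    def assemble_desktop(os_layer: str, app_layer: str, user_layer: str):
        # A desktop is just an ordered list of layer references; it is
        # materialised (and synced back) wherever it executes.
        return [store[os_layer], store[app_layer], store[user_layer]]

    golden_os  = publish(b"windows-7-sp1-golden-image")
    office_app = publish(b"office-2010-app-layer")
    # Two users share the OS and app layers; only their data differs.
    alice = assemble_desktop(golden_os, office_app, publish(b"alice-profile"))
    bob   = assemble_desktop(golden_os, office_app, publish(b"bob-profile"))
    print(len(store))   # -> 4 layers stored for 2 complete desktops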

"This technique is absolutely essential and has absolutely no compromises," says Rao.

"Everybody gets what they want. IT gets its goal, which is the golden image. Users are executing in a relatively local way, so offline operations are fine.

“A lot of VDI today is really dependent on LAN and WAN latencies, and if you are dependent on a remoting protocol – and these have got fantastically sophisticated – you are still vulnerable to the LAN and WAN.

“The way that VDI is implemented today, vendors kind of pick a fight with both physics and economics. The physics is packet loss, which people can see, and if you insist on doing everything in the data centre, then the most economical endpoint is not being used."

The last bit of the Intel architecture for future desktop virtualisation is device-native management. This is a techy way of saying that the endpoint device, of whatever kind, is an active participant in its own management along with the centralised management servers.

In some cases that means making use of the Trusted Execution Technology secure boot and other features of the vPro desktops, and in others it might mean using a client-side, bare-metal hypervisor with secure boot.

Intel is also porting its Active Management Technology feature from Xeon servers, which allows servers to be remotely administered even when operating systems and hypervisors crash, to its future Core processors for PCs. So expect this to play a part, too.

Shifting shapes

"People are rethinking what a desktop is," says Tyler Rohrer, co-founder at Liquidware Labs.

The company makes a tool called Stratosphere UX, which assesses performance and helps with troubleshooting in VDI installations.

"I think we were all a little spoiled by Moore's Law in the PC, with predictably dropping costs and predictably increasing performance,” says Rohrer.

“Most of the time we had predictable refresh cycles and improved productivity because the machines had more power than the applications required. And everything about the end-user was locked up in that one device.

“It has taken a spark like server virtualisation to get us to think we don't have to have one OS tied to a device, and that we can do other things. And the reason we are all interested is that we know, almost instinctively, like we did when PCs first came out, that some of this technology is really going to increase productivity."

At one end of the desktop virtualisation spectrum is Pano Logic, whose Pano System runs counter to Intel’s IDV architecture. Pano Logic has taken the original ClearCube idea to its logical conclusion by extending a virtual PC bus down an Ethernet wire to a box that has no state whatsoever because it has no CPU, no storage and no moving parts.

The virtual PC image runs atop VMware's ESXi, Microsoft's Hyper-V or Citrix's XenServer hypervisors and is managed by Pano Manager. And because it doesn't have any brains at the other end of the wire, it is perfect for security-conscious customers in banking, finance and government.

"If you steal the device, you can't even get a login," says Dana Loof, executive vice-president of marketing at Pano Logic.

But even more importantly, end-users working from Pano clients can't tell that they are not working from local PCs. In a recent side-by-side bakeoff, the company showed its Pano Zero Clients beating thin clients powered by VMware View and its PC-over-IP protocol, as well as Citrix XenDesktop clients and their HDX protocol.

While security concerns drove some customers to zero-client VDI, the high cost of classic VDI desktop hosting setups from Microsoft, VMware and Citrix has kept many others from adopting VDI to date.


Krishna Subramanian, vice-president in charge of the VDI-in-a-Box line that Citrix got through its acquisition of partner and rival Kaviza in May 2011, says Kaviza was attractive because its kMGR VDI stack, which originally ran atop VMware's ESXi hypervisor, was able to run on clusters of servers using local storage instead of SANs.

The kMGR setup also does not need the connection brokers, load balancers and management servers other VDI setups often require, which means Kaviza could radically reduce the cost of a VDI-based PC image.

"We made sure that a virtual desktop costs less than a real PC," says Subramanian, adding that depending on the configuration options a virtual PC costs $260 to $425.

"Customers get that there are soft dollar benefits down the road with VDI, but they must show return on investment for VDI projects in the first months after it is installed."

This is particularly acute, says Subramanian, when you realise that 70 per cent of the PCs in the corporate world are in small and medium businesses, and that 60 per cent of the cost of a VDI setup is not the software but the hardware back in the data centre that drives it.

"Even if you give the VDI software away, you haven't solved the problem,” she says.

Given this, you might expect Citrix soon to rebrand VDI-in-a-Box as XenDesktop SMB Edition, or even to go further, scaling up kMGR underneath XenDesktop proper and getting rid of the SAN requirement for XenDesktop altogether. Subramanian is mum on the subject.

Depending on who you ask, there are somewhere between 600 million and 800 million PCs in the corporate environment. And Raj Mallempati, director of product marketing for the desktop and application virtualisation group at VMware, does not think that enterprises will shift wholesale from real PCs to VDI-streamed PCs, any more than Citrix or Microsoft do.

Windows and the long tail

"The post-PC era has definitely arrived," says Mallempati. "But there is a long tail of Windows-based applications and they will use more traditional VDI. Over time, the penetration of View will increase, but it will still be a niche case."

That is why VMware has expanded from just PC image management to end-user application management with its Horizon App Manager, announced side by side with View 5.

Horizon App Manager is being pitched as a sort of iTunes store for enterprise applications. It will give users access to cloud-based applications as well as internal ones, and will eventually also be able to field requests for applications streamed from ThinApp or virtual desktops streamed from View.

And here's the important part: Horizon will be able to hook users working from devices such as iPads and iPhones into those Windows-based apps. That is exactly what Citrix is doing with its Receiver universal client.

Mallempati cites two statistics that illustrate why VMware changed its focus from VDI to the broader issues of end-user application access management that Horizon is aimed at.

First, this will be the first year in which half of all enterprise applications are developed to be independent of any particular operating system. (That's another way of saying that more than half of new applications are not being coded specifically for Windows.)

Moreover, the fourth quarter of 2010 was the first time that the aggregate of smartphone and tablet shipments was larger than the number of PC shipments.

"Over time, people will start transitioning away from Windows-based desktops," says Mallempati.

He adds that over the next two to three years, people will be mixing classic VDI and application streaming with software-as-a-service and mobile applications.

This is also why VMware has launched Project Octopus, an enterprise-grade file-sharing service, and AppBlast, which allows any browser that supports HTML5 to run native, non-browser applications.

It is also why Citrix bought ShareFile in October to answer VMware's Project Octopus effort. If you want data to follow applications around and be available on any device, it is probably better to store that data on a cloud than somewhere on a C drive.

Citrix is not as pessimistic about classic VDI as VMware is, and that is probably thanks to the SMB-focused folks from Kaviza.

"If you have a weak lamp and look into a dark room, you can't see how large that room is," says Subramanian.

She believes that anywhere from 30 to 40 per cent of the enterprise market could convert their PCs to VDI images.

"You have to start thinking of a PC as a kind of container for your applications and your data, not a machine," he says.

Now here's the funny bit. Managing the PCs, whether they are physical devices or virtualised, and the applications, whether internal or running on a cloud, is not nearly as tough as managing the human beings who access them.

No one has come up with a technology yet that people can't somehow crash or mess up. And they never will. ®