
Virtualisation turns PCs into personal clouds

Desktops break free

IDV is not just VDI backwards

Intel, which supplies the vast majority of server and desktop microprocessors in the world, wants its Atom chips to challenge ARM processors in smartphones and tablets too. The vendor wants to own the end-user device and the in-house or cloudy server infrastructure that serves up apps to smartphones, tablets and PCs.

While Intel has certainly been eager to sell servers to host VDI setups, the three architectural principles of the Intelligent Desktop Virtualisation (IDV) architecture it is espousing run a little counter to the conventional VDI approach.

"You don't even have to get into exotic devices to see that something has to change," says Dinesh Rao, director of Intel's independent software vendor program.

"You have to understand what computer model works when, and you have to accept the fact that no single computer model will answer all of your issues.

“We are in a period of change. We are moving toward some stable end state but we haven't achieved that yet. So we have tried to outline the characteristics of that future state."

Intel's IDV scheme concedes that users don't want to be locked down to any one particular kind of virtualisation for operating systems and applications.

IDV has to span everything from traditional terminal services to shared virtual desktops, classic VDI, OS streaming, app streaming, type 2 (hosted) hypervisor containers on the client, all the way to putting a type 1 (bare-metal) hypervisor on the client.

The key difference with IDV is that Intel wants management to be centralised and computing to be executed locally. It reasons that the end-user experience will be best if the desktop virtualisation takes advantage of whatever computing, graphics and I/O each device has – and the device will change as end-users move from home to office and back again.

"Local execution should not be done as an accident but by design," says Rao.

This is not something any of the desktop virtualisation vendors have fully figured out yet. The "intelligent" part of local execution means that whatever setup companies have should be able to check what local resources are available and compensate for whatever the user's device lacks by running the application back in the data centre.

So whatever you are using to control your access to data and applications should be smart enough to check the kind of device you have and stream whatever experience makes most sense.
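To make that brokering idea concrete, here is a minimal sketch in Python of how a management layer might weigh up an endpoint and pick a delivery model. Every name in it is a hypothetical illustration for this article, not any vendor's actual API.

```python
# A minimal sketch of the "intelligent" brokering idea described above.
# All names are hypothetical illustrations, not any vendor's real API:
# the broker inspects what the endpoint reports about itself and picks
# the delivery model that makes most sense for that device.

from dataclasses import dataclass

@dataclass
class Endpoint:
    cpu_cores: int
    ram_gb: int
    has_client_hypervisor: bool
    gpu_accelerated: bool

def choose_delivery(endpoint: Endpoint) -> str:
    """Return a delivery model based on what the device can do locally."""
    # Capable device with a client hypervisor: run the image locally.
    if endpoint.has_client_hypervisor and endpoint.ram_gb >= 4:
        return "local-execution"    # layered image sent down, executed on the PC
    # Some local horsepower but no hypervisor: stream the apps instead.
    if endpoint.cpu_cores >= 2 and endpoint.ram_gb >= 2:
        return "app-streaming"
    # Thin or constrained device: fall back to server-hosted VDI.
    return "hosted-vdi"

print(choose_delivery(Endpoint(cpu_cores=4, ram_gb=8,
                               has_client_hypervisor=True,
                               gpu_accelerated=True)))   # -> local-execution
```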


Local execution also makes the most sense economically, of course. This is something that MokaFive and Virtual Computer have been banging the drum about for the past several years. MokaFive Suite 3.0, announced back in May, has a central management server that deploys a Windows PC image atop a modified VMware Player client hypervisor (type 2) or its own BareMetal hypervisor (type 1).

Virtual Computer has created its own client-side hypervisor, called NxTop, that does much the same thing, with the central management server creating and storing PC images and beaming them down to a hypervisor running on a PC client device instead of on those central servers.

Ask yourself this: is it cheaper to compute on a $500 PC or on a slice of the servers, network switches and SANs sitting back in the data centre? We all know the answer.

So when you can, you execute on the local device, not just because the experience is better but also because it costs less. For many users, the endpoint device will change and yet they still want access to their data and apps. And that means you need to manage across desktop virtualisation tools as well as within them.

The layered look

The second goal of the IDV architecture, called layered images, is also tough to achieve.

"If we want to access our desktops from any device anywhere, the fundamental thing to do is split out the operating system from the application, user data and user settings," says Rao.

"What we are really after with layering is dynamically assembled desktops. Whenever people talk about VDI, they always show this layer cake, with the OS, apps and user data separate.

“But in practice, nothing is quite that separated. If you install a Windows application it updates the registry, and the moment you do that my version of Windows is different from yours. Microsoft has ways of doing this, with folder redirects and profiles, but these techniques need to be used consistently."

Layering in desktop virtualisation saves memory and storage back in the data centre because if your OS and app layers are the same as mine, one copy can be served to both of us. So the virtual desktop density on each server can be higher.

Layering also means asking if the endpoint can run the image, sending it down the wire and then keeping it in sync with bi-directional synchronisation and de-duplication. (If you send a layered desktop image down to the endpoint and execute it locally, it means you can work offline – something that VDI does not do very well.)

"This technique is absolutely essential and has absolutely no compromises," says Rao.

"Everybody gets what they want. IT gets its goal, which is the golden image. Users are executing in a relatively local way, so offline operations are fine.

“A lot of VDI today is really dependent on LAN and WAN latencies, and if you are dependent on a remoting protocol – and these have got fantastically sophisticated – you are still vulnerable to the LAN and WAN.

“The way that VDI is implemented today, vendors kind of pick a fight with both physics and economics. The physics is packet loss, which people can see, and if you insist on doing everything in the data centre, then the most economical endpoint is not being used."
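As a rough illustration of the layering and de-duplication idea described above, here is a minimal sketch in Python. The structures are entirely hypothetical (they are not MokaFive's or Virtual Computer's formats); the point is simply that one golden OS-and-apps layer is stored once and shared, while each user's data and settings live in a small personal layer on top.

```python
# A minimal sketch of layering plus de-duplication, with entirely
# hypothetical structures. One golden OS/app layer is held once centrally
# and shared; per-user layers stay small and sit on top of it.

import hashlib

def layer_id(content: bytes) -> str:
    """Identify a layer by its content hash so identical layers dedupe to one copy."""
    return hashlib.sha256(content).hexdigest()

store = {}  # central store: content hash -> layer bytes

def publish_layer(content: bytes) -> str:
    key = layer_id(content)
    store.setdefault(key, content)   # stored once, however many desktops use it
    return key

# One golden OS+apps layer shared by everyone; per-user layers stay tiny.
golden = publish_layer(b"windows-7-sp1 + office + av-agent")
alice_desktop = [golden, publish_layer(b"alice: profile, registry deltas, docs")]
bob_desktop   = [golden, publish_layer(b"bob: profile, registry deltas, docs")]

# Two desktops, but only three layers held centrally: the golden image is deduped.
print(len(store))   # -> 3
```

Because the golden layer is identified by its content, a thousand desktops built on it still hold only one copy of it centrally, which is where the memory and storage savings come from.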

The last bit of the Intel architecture for future desktop virtualisation is device-native management. This is a techy way of saying that the endpoint device, of whatever kind, is an active participant in its own management along with the centralised management servers.

In some cases that means making use of the Trusted Execution Technology secure boot and other features of the vPro desktops, and in others it might mean using a client-side, bare-metal hypervisor with secure boot.

Intel is also porting its Active Management Technology, which allows servers to be remotely administered even when operating systems and hypervisors crash, from Xeon servers to its future Core processors for PCs. So expect this to play a part, too.
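For a rough sense of what device-native management means in practice, here is a minimal sketch in Python, using hypothetical names only; it does not touch Intel's real vPro, TXT or AMT interfaces. The endpoint measures and reports its own state, and the central server answers with policy.

```python
# A minimal sketch of the device-native management idea, with hypothetical
# names only. The endpoint takes part in its own management: it measures its
# boot state, reports it, and applies whatever policy the central server returns.

from dataclasses import dataclass

@dataclass
class BootReport:
    device_id: str
    measured_boot_ok: bool     # e.g. the result of a verified/secure boot check
    image_version: str

def central_policy(report: BootReport) -> str:
    """Central server decides, but the device did the measuring and reporting."""
    if not report.measured_boot_ok:
        return "quarantine"            # refuse to attach corporate layers
    if report.image_version != "golden-2012.03":
        return "sync-golden-image"     # pull the latest layered image
    return "ok"

# The endpoint checks in on its own schedule and acts on the answer.
action = central_policy(BootReport("pc-0042", True, "golden-2012.01"))
print(action)   # -> sync-golden-image
```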
