
Everything you always wanted to know about VDI but were afraid to ask (no, it's not an STD)

All you need to make virtual desktops go

Bandwidth: Feel the thickness

You can have the greatest server farm in the world, perfectly specified to provide a beautiful user experience, but it all means nothing if you can't get those screens to users. If everyone is wired up with gigabit Ethernet, you're probably fine. You aren't, however, going to drag a 1080p video experience over a 3G mobile connection.

You also need to bear in mind that VDI almost always changes usage patterns. Whatever those patterns look like today, expect that a VDI deployment will ultimately see more people working remotely, be that telecommuting from home or pulling down their desktop at a hotel or business meeting. You need enough WAN bandwidth to meet not just today's needs, but tomorrow's.
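If you want to put a rough number on that, a back-of-envelope calculation along the following lines will do. The per-session bandwidth figures and the growth and headroom factors are illustrative assumptions, not vendor guidance; swap in measurements from your own pilot.

# Back-of-envelope WAN sizing for remote VDI sessions.
# All per-session bandwidth figures are illustrative assumptions;
# measure a pilot deployment before committing to a link size.

SESSION_PROFILES_KBPS = {
    "office": 150,       # assumed: light office work (email, Word)
    "multimedia": 2000,  # assumed: video playback, rich graphics
}

def wan_bandwidth_mbps(sessions_by_profile, growth_factor=1.5, headroom=1.2):
    """Estimate required WAN bandwidth in Mbps.

    sessions_by_profile: dict mapping profile name -> concurrent remote sessions
    growth_factor: allowance for more remote workers tomorrow than today
    headroom: burst headroom so the link is not sized at 100% utilisation
    """
    total_kbps = sum(SESSION_PROFILES_KBPS[profile] * count
                     for profile, count in sessions_by_profile.items())
    return total_kbps * growth_factor * headroom / 1000.0

today = {"office": 200, "multimedia": 20}
print(f"Suggested WAN link: {wan_bandwidth_mbps(today):.0f} Mbps")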

Consider also that by deploying VDI you may be changing the access patterns of your entire data center. Instead of a great deal of "north-south" network traffic, your traffic is now "east-west". (The directions are based on a traditional network diagram in which "north" is the physical user desktop and "south" is centralized storage.)

[Diagram: North, South, East and West: the cardinal directions of the datacenter]

After deploying VDI, a user's applications won't be coming through your edge switches with the same access patterns as before. Instead, you are going to have a bunch of servers chatting merrily among themselves, creating new network bottlenecks that need to be modeled before you deploy.

Your storage choice will also have an impact on your bandwidth consumption. Centralized storage will require either a converged network adapter or a dedicated network to shuttle data around, and hypervisor vendors recommend using a dedicated NIC for VM migration and replication traffic on each compute node.

Dedicated NICs are also a real-world requirement for building server SANs and for host-based write caching; count up the ports you'll need per host and make sure you've got enough switches to handle it all.
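As a sketch of that counting exercise, here is one way to tot it up; the NIC roles and counts per host are assumed for illustration, so substitute whatever your hypervisor and storage vendors actually recommend.

# Rough switch-port budget for a VDI compute cluster.
# NIC roles and counts per host are assumptions for illustration;
# follow your hypervisor and storage vendor's own guidance.
import math

NICS_PER_HOST = {
    "management": 1,
    "vm_traffic": 2,      # assumed: teamed pair for desktop traffic
    "migration": 1,       # dedicated NIC for VM migration/replication
    "storage_or_san": 2,  # assumed: server SAN / write-cache replication
}

def switch_ports_needed(hosts, ports_per_switch=48, spare_fraction=0.1):
    """Return (total ports, switches) for a given number of compute hosts."""
    per_host = sum(NICS_PER_HOST.values())
    total = math.ceil(hosts * per_host * (1 + spare_fraction))
    switches = math.ceil(total / ports_per_switch)
    return total, switches

ports, switches = switch_ports_needed(hosts=16)
print(f"{ports} ports across {switches} x 48-port switches")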

Server resources: How much brawn is in that bare metal?

The purpose of VDI is to allow users to run applications from a remote location. Typically, the promise is any³: any application on any device at any time. Running these applications consumes resources. A dozen VDI instances all running Microsoft Word aren't going to consume much CPU, but a dozen running AutoCAD will.

The resources consumed by your users' applications are not simply additive. You cannot profile the resources consumed by their desktops, add them together, and call it a day. There is overhead involved in virtualization, and that overhead can vary dramatically from deployment to deployment, as some systems have devices for offloading workloads and some do not.
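One way to express that non-linearity is to start from profiled per-desktop figures and apply an overhead multiplier rather than a straight sum. The multiplier below is a placeholder assumption; the real figure depends on your hypervisor, offload hardware and display protocol, and should come from a pilot, not from this sketch.

# Illustrative capacity estimate: profiled per-desktop demand plus a
# virtualization overhead factor. The overhead value is an assumption;
# measure it on your own hardware and hypervisor.

def desktops_per_host(host_cpu_ghz, per_desktop_ghz, overhead=1.25):
    """How many profiled desktops fit on a host once overhead is applied.

    host_cpu_ghz: total usable CPU (cores * clock) on the host
    per_desktop_ghz: average CPU a profiled physical desktop consumes
    overhead: multiplier for hypervisor, graphics and protocol overhead
    """
    effective_per_desktop = per_desktop_ghz * overhead
    return int(host_cpu_ghz // effective_per_desktop)

# Example: two 12-core 2.5GHz sockets, light office desktops at ~0.3GHz each
print(desktops_per_host(host_cpu_ghz=2 * 12 * 2.5, per_desktop_ghz=0.3))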

Graphics generation is an obvious concern here; the more graphics you have to generate, the more overhead there is. This overhead can get to the point that it seriously impinges upon your server resources, but CPU resources are by no means the only consideration.

[Diagram: Do your servers have the right mix of components for your VDI workload?]

Servers have limited system memory capacity and bandwidth. Even if you have enough RAM to handle all the VDI instances on a given server, you will not necessarily have enough RAM bandwidth to handle them all. RAM bandwidth is typically so high that most people never give it a second thought, but when you start running 100 desktops on a single server it can become a hidden bottleneck in a real hurry.

Hypervisors can overcommit memory, allowing (say) a server with 16GB of physical RAM to juggle 32 VMs each allocated 1GB of RAM, on the basis that each virtual machine will probably only use a few hundred MB of it. This can put a lot of pressure on system memory bandwidth.
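The arithmetic behind that example, sketched out; the working-set figure is an assumption standing in for "a few hundred MB".

# Worked example of the overcommit figures above:
# 16GB physical RAM, 32 VMs each allocated 1GB.

physical_gb = 16
vms = 32
allocated_per_vm_gb = 1.0
typical_working_set_gb = 0.4   # assumed: "a few hundred MB" actually in use

allocated_gb = vms * allocated_per_vm_gb
overcommit_ratio = allocated_gb / physical_gb          # 2.0x
expected_resident_gb = vms * typical_working_set_gb    # 12.8GB of 16GB

print(f"Overcommit ratio: {overcommit_ratio:.1f}x")
print(f"Expected resident memory: {expected_resident_gb:.1f}GB of {physical_gb}GB")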

Similarly, storage deduplication, server SANs, Diablo Technologies' SSD DIMMs and Atlantis Computing's ILIO all have the potential to bring a heavily loaded VDI server to the red line.

In a similar manner, it would seem ludicrous to expect a single-workload system to max out its PCIe bus. With VDI, however, it is increasingly easy to do. 40GbE NICs, PCIe SSDs, GPUs for graphics offloading and so forth can not only fill every PCIe slot in the system, they can cause very real performance bottlenecks.
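If you want to sanity-check a build before ordering it, a quick lane-count sketch helps; the per-card and per-host lane figures here are assumptions for illustration, so check your server's block diagram for the real numbers.

# Does the card list fit in the host's PCIe lane budget?
# Lane counts per card and per host are assumptions for illustration.

HOST_PCIE_LANES = 80   # assumed: dual-socket box, 40 lanes per CPU

cards = {
    "40GbE NIC": 8,
    "second 40GbE NIC": 8,
    "PCIe SSD": 8,
    "GPU for graphics offload": 16,
    "second GPU": 16,
}

used = sum(cards.values())
verdict = "fits" if used <= HOST_PCIE_LANES else "over budget"
print(f"{used} of {HOST_PCIE_LANES} lanes used; {verdict}")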

Storage: All those bytes have to go somewhere

Storage is all too often the least considered component of VDI. If you look at the "average" desktop, it doesn't do a lot during the day. Unless you are using your VDI instances to run a lot of rendering to the local virtual disk, the only real punishment you're going to see is during logon, logoff and updates. (Malware scanning used to be an issue but vShield takes care of that nicely.)

The problem with logon, logoff and update events is that they tend to occur all at the same time. Thus VDI storage has to be spectacularly overprovisioned (speed-wise) when compared to the daily grind. When it comes to figuring out how your gear will perform, even the best synthetic benchmark software [PDF] does not appropriately model storage demand. It is, from experience, the hardest element to pin down.

The "average" VDI user is expected to sustain about 10 IOPS throughout the day, but I have plenty of user groups that will gladly sit at 200 IOPS all day long, and some that will barely break an average of 4. More than any other area, storage is where "knowing your users and their workloads" matters.
