Original URL: http://www.theregister.co.uk/2011/03/14/dv_network_considerations/

Don't forget the network

Or your DV project will become a nightmare ordeal

By Danny Bradbury

Posted in Desktop Virtualisation, 14th March 2011 10:29 GMT

Desktop Virtualisation Two cups and a piece of string won’t cut it in a virtual world. If you are virtualising your desktops, your network must be able to cope with the additional traffic load, and be resilient enough to support users who require access to their desktops at all times. How can you ensure it measures up?

A poorly configured network can lead to poor response times and service drop-outs. It can also worsen the bootstorm problem, incurred when many users log in at once.

“The whole networking side is something that lots of people forget about until they’ve done the project,” warns Tony Lock, programme director at analyst Freeform Dynamics.

A virtual desktop infrastructure (VDI) configuration in which an entire virtual machine is hosted centrally for each user represents the worst-case scenario for any harried network manager. Nevertheless, says Michael Allen, director of IT service management solutions at Compuware, it offers some predictable parameters. Bandwidth requirements in a VDI implementation are relatively easy to define. Latency is the real issue.

“There are only so many keys that a user can type in a given second, while the keyboard and mouse use just a tiny bit of bandwidth up to the data centre,” says Allen. “And the only thing coming the other way is screen updates.”

“We work on the basis of 50k of bandwidth per active user,” says Scott Underwood, senior solutions specialist at IT and telecoms consulting firm Niu Solutions. “Really heavy graphics work could send it up.”
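That rule of thumb makes capacity sums straightforward. As a rough sketch, assuming Underwood's “50k” means 50kbit/s per active concurrent user (the figure of 25 per cent headroom here is an illustrative assumption, not from the article):

```python
# Back-of-envelope VDI link sizing, assuming ~50 kbit/s per active user.
# Heavy graphics work will push the per-user figure up considerably.
KBIT_PER_USER = 50

def required_mbit(active_users, headroom=1.25):
    """Estimate link capacity in Mbit/s, with 25% headroom for bursts."""
    return active_users * KBIT_PER_USER * headroom / 1000

print(required_mbit(200))  # 200 active users -> 12.5 Mbit/s
```

On these assumptions, a branch of 200 concurrent users fits comfortably on a 100Mbit/s uplink; the sums only get awkward once graphics-heavy sessions blow the per-user average.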

While bandwidth may be predictable, latency remains a challenge (and of course, a lack of the former will affect the latter). Users want a responsive machine, which means data must pass over the network fast enough so they don’t have to wait.

“Usually, if you experience latency of over 150ms, you’ll get calls to the helpdesk,” says Mark Edwards, technical director of network consulting firm Capital Networks. To be safe, aiming for a latency of 0.1 seconds (100ms) or under is best.
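Checking whether a site sits inside that budget needn't involve anything fancy. A minimal sketch: timing a TCP handshake to the connection broker gives a crude round-trip figure (the hostname below is hypothetical, and a real assessment would sample repeatedly across the working day):

```python
import socket
import time

LATENCY_BUDGET_MS = 150  # beyond this, expect calls to the helpdesk

def tcp_rtt_ms(host, port=443, timeout=2.0):
    """Crude round-trip estimate: time a TCP handshake to the host."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

# rtt = tcp_rtt_ms("vdi-broker.example.com")  # hypothetical broker address
# if rtt > LATENCY_BUDGET_MS:
#     print(f"Warning: {rtt:.0f}ms to the broker - users will notice")
```

A single handshake understates what interactive screen traffic experiences under load, so treat this as a floor, not a verdict.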

Latency is affected by the physical distance across the network, but that is not the only factor: other traffic travelling over the network to the data centre could force VDI traffic to queue up. Perhaps a remote backup spikes network traffic at a certain time of day, or voice over IP traffic creates problems. Requirements may also be seasonal. A retail network may look fine until that all-important fourth quarter when holiday sales pick up.

This makes proper baselining particularly important, and there may be a need for quality of service protection on the network. On IP networks, technology such as Cisco’s low-latency queuing is an option for guaranteeing bandwidth.
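Low-latency queuing works on traffic classes, and one way a VDI deployment can participate is by marking its packets with a DSCP value the switches are configured to trust. A minimal sketch, assuming a Linux host and a network that honours endpoint markings (many networks deliberately re-mark at the edge instead):

```python
import socket

# DSCP Expedited Forwarding (EF, decimal 46) is the class typically
# serviced first by low-latency queuing. It occupies the top six bits
# of the IP TOS byte, hence the two-bit left shift.
DSCP_EF = 46 << 2  # 184

def mark_for_llq(sock):
    """Mark a socket's outbound traffic for priority queuing.

    Only effective if the network trusts endpoint DSCP markings."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)
    return sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print(mark_for_llq(s))  # → 184
s.close()
```

In practice most shops classify at the switch or router rather than trusting endpoints, but the principle is the same: the display-protocol traffic gets a guaranteed slice of the pipe, and the remote backup waits its turn.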

Allen cites one client who complained of terrible performance on the network. On further analysis, he found that an IP security camera was streaming traffic to a proxy server sitting in Switzerland. A simple design flaw was choking the network. The moral: always look for the simplest fix first.

WAN connections can create both latency and bandwidth problems, given the higher cost of throughput. Lock recommends WAN optimisation measures, such as traffic compression to reduce network overhead. “You can do things like putting more of the compressed traffic together into larger packets so that you’re not pushing traditional smaller IP packets up and down the line,” suggests Lock.
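The aggregation Lock describes can be sketched in a few lines: coalesce many small payloads into fewer, larger units and compress the result before it hits the WAN link (the batch size and null-byte delimiter here are illustrative assumptions, not any particular optimiser's wire format):

```python
import zlib

def batch_and_compress(messages, max_batch=32):
    """Coalesce small payloads into fewer, compressed blobs -
    fewer packets on the wire, and compression gets more to chew on."""
    packets = []
    for i in range(0, len(messages), max_batch):
        # Join a batch with a null-byte delimiter, then compress it.
        blob = b"\x00".join(messages[i:i + max_batch])
        packets.append(zlib.compress(blob))
    return packets

# 100 small screen-update-like payloads become 4 compressed packets.
updates = [b"tile %d unchanged" % n for n in range(100)]
packets = batch_and_compress(updates)
print(len(packets))  # → 4
```

The trade-off is latency: holding small packets back to fill a batch adds delay, which is why real WAN optimisers tune batch windows carefully rather than maximising compression at any cost.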

What about resiliency? Some Reg readers have worried about the potential service effects of a network dropping out. “In many organisations with one PC per desk, if someone’s machine fails at a critical time – say accounts running the payroll – they can often walk to another PC near to them and carry on working,” said one. “It’s not the same in a virtualised world.”

Edwards argues that many networks are simply not robust enough, especially in smaller businesses. Ideally, the situation calls for two of everything, including dual-homed switches and hot standby redundancy protocols. “You might have a number of access switches in the closet, and each of them would be dual-connected into pairs of distribution switches,” he says. “So, if a switch failed in the access layer, it would affect no more than 24 to 48 clients and there would be spare switches. It’s a cost-benefit decision.” ®