Forecasting logon storms with desktop virtualisation

Planning for bad weather

You the Expert For an IT manager, desktop virtualisation is no bad thing, if only because it limits how badly a user can mess up his or her own settings. So if you are thinking that you could slim down your desktop hardware requirements and maybe keep track of everyone’s software upgrades more simply, are there any downsides it would pay to be aware of?

There is no getting away from it: this will have an impact on your network and your storage requirements. But how, and what can you do to plan for it?

Don’t ask us, we’re only journalists. That’s why we asked for your input. The author of the most interesting comment (either as voted for by you, or as determined unilaterally by us) would join our panel of genuine, still-in-original-packaging experts: Jim R Henrys of Intel Enterprise Solution Sales, and Andrew Buss, service director at our friendly neighbourhood analyst house Freeform Dynamics.

As ever, there was a lively debate. And that’s not a euphemism, honest. But we especially liked what Neil Spellings had to say, so we asked him to elaborate.

Neil Spellings, Virtualisation Consultant Virtual Desktop Infrastructure (VDI) is often the straw that breaks the camel’s back when it comes to storage infrastructure. Desktop workloads are vastly different to server workloads, and many companies who have reused existing storage for VDI find it can no longer cope.

What are some of the causes?

1. Boot and logon storms. Most people will log into the virtual desktop in the morning and log out again at the end of the day, so boot and logon I/O is concentrated into a short window that the storage has to absorb all at once (a rough sizing sketch follows this list).

2. AntiVirus. Most AntiVirus products are very storage-unfriendly, having been written in the days when desktops had virtually unlimited write I/O to a fast local hard disc.

3. Read/Write ratio. Most storage and RAID configurations are fast for reads, and slow for writes. VDI presents a challenge for storage because at logon and boot time the disc I/O is predominantly reads, whereas during “steady state” (i.e. the rest of the day) the I/O is predominantly writes.

4. Latency. Latency of storage I/O, especially when varying, has a much bigger user perception impact in a VDI environment. If your storage is fast one day, and slow the next, you’ll get complaints from users.

5. VDI isn’t P2V of the desktop. Many organisations just take their existing Windows 7 image, strip out the hardware-specific drivers, install the hypervisor tools and deploy it as their template VDI VM. Job done, right?

Wrong. There are many configuration and software pitfalls that will cause unnecessary I/O, distorting your benchmarking and thus storage I/O requirements.
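To make the storm and read/write points concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (pool size, per-desktop IOPS, storm multiplier, logon window) is an assumption for illustration only; substitute numbers from your own monitoring before drawing any conclusions.

```python
# Illustrative sizing sketch only: every figure below is an assumption,
# not a measurement. Profile your own images and users before sizing storage.

DESKTOPS = 500              # assumed pool size
STEADY_STATE_IOPS = 10      # assumed average IOPS per desktop during the day
LOGON_MULTIPLIER = 5        # assumed IOPS spike factor while booting/logging on
LOGON_WINDOW_SHARE = 0.4    # assumed fraction of users logging on concurrently

steady_load = DESKTOPS * STEADY_STATE_IOPS
storm_load = steady_load + (DESKTOPS * LOGON_WINDOW_SHARE
                            * STEADY_STATE_IOPS * (LOGON_MULTIPLIER - 1))

print(f"Steady-state load: {steady_load:,} IOPS")
print(f"Logon-storm peak:  {storm_load:,.0f} IOPS")
# With these assumptions the morning peak is roughly 2.5x the steady state,
# and that peak -- not the daily average -- is what the array must absorb.
```

The exact numbers matter far less than the shape of the result: whatever your per-desktop figures turn out to be, it is the concentrated peak rather than the average that your storage has to be sized for.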

OK so we know that VDI I/O requirements are going to be different, but are the only options upgrading your existing SAN or buying a new one? Of course, if you only speak to your storage vendor the answer will always be yes. But you do have alternatives.

It’s becoming increasingly common to utilise inexpensive local storage and leave your expensive SAN/NAS untouched.

Network-wise, the increased demand for storage bandwidth (if you aren't using local storage) may force you to upgrade existing Gigabit Ethernet cabling and switches to 10Gig Ethernet. And we know that isn’t going to be cheap.
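A similarly rough check, again using assumed per-desktop figures, shows why a single Gigabit link can run out of headroom once desktop storage traffic is centralised:

```python
# Rough feasibility check only; the per-desktop throughput figure is an assumption.

DESKTOPS = 500
MB_PER_SEC_PER_DESKTOP = 0.5   # assumed average storage throughput per desktop
GBE_USABLE_MB_PER_SEC = 110    # roughly what a single GbE link delivers after overhead

aggregate = DESKTOPS * MB_PER_SEC_PER_DESKTOP
verdict = "fits within" if aggregate < GBE_USABLE_MB_PER_SEC else "exceeds"
print(f"Aggregate storage throughput: {aggregate:.0f} MB/s ({verdict} a single GbE link)")
# At these assumed rates 500 desktops already saturate GbE at steady state;
# a boot or logon storm multiplies the problem.
```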

VDI is also 100 per cent network dependent, so having reliable WAN and Internet links is paramount. Latency can be an issue, but modern remoting protocols (HDX and RDP) are designed for this. Variable latency is a much bigger problem, as users will adapt to a consistent latency (even if it’s slower than before) but they will complain if it’s fast one minute and slow the next.

There's no "offline working" scenario with hosted desktops, so multiple-resilient links are a must if you have business critical offices connecting to centralised VDI infrastructure. And they don't come cheap, either.

Also, if you're delivering a "rich user experience" including videos on your VDI infrastructure over the WAN, then you might want to consider WAN acceleration and caching devices. How much you need to invest/upgrade will depend on the size of the organisation and the product sets you choose. It's a minefield, and can easily blow up in your face so be careful out there.

Jim R Henrys, Intel Enterprise Solution Sales There are numerous forms of desktop virtualization but the two that obviously have the biggest impact on networking and/or storage are virtual desktop infrastructure (VDI) and application virtualization (App Streaming) as both models require centralized infrastructure that’s typically housed in a data center.

Another way to consider asking the question is “From a capacity planning and sizing perspective what do I need to account for?”

There are many variables to consider when working out the potential impact, meaning that, unfortunately, there isn’t a straightforward answer. So where to begin? Well, my advice is that you must first determine what is required in terms of an acceptable user experience and use this reference point to determine what is expected from the underlying infrastructure. Ultimately, IT exists to serve the needs of the business, and the success of any virtualisation initiative will be determined by how well the users rate the service they receive.

So let’s begin by considering the network. Different usage models impact the network in varying degrees. For example, in the case of VDI, usages requiring graphical and multimedia capabilities will have the greatest impact. Hence, a user requiring a full Windows 7 desktop Aero experience with multimedia (including VoIP and video) is going to require much greater network bandwidth to achieve an acceptable experience than a much less sophisticated user running, say, a simple forms-based application. In this instance mitigating strategies need to be considered – for example the use of Multi Media Redirect (MMR) capabilities in VDI, which take advantage of client-side compute capabilities to process media locally and reduce the burden on the network.

Alternatively, in scenarios requiring media processing, Application Virtualization, which again makes use of client-side processing, can also be employed.

Secondly, from a networking perspective, you need to develop an understanding of when peak loads, versus average loads, will occur on the network. For example, many users may begin their working day at the same time thus giving rise to Boot Storms and Logon Storms.

Understanding and taking into account the different usage profiles is essential for your network architect to determine what is required in terms of network strategies such as cache sizes, cache locations and bandwidth in order to meet the throughput, performance and scalability levels needed.
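A toy aggregation along those lines might look like the following. The user mix and per-session bandwidths are assumptions, and real remoting-protocol figures vary enormously with content, codec and tuning:

```python
# Toy peak-bandwidth aggregation; the user mix and per-session figures are
# assumptions, and real remoting-protocol bandwidth depends heavily on content.

profiles = {
    # profile name: (assumed user count, assumed peak kbit/s per session)
    "task worker (forms apps)":      (300, 150),
    "knowledge worker (office/web)": (150, 500),
    "rich media (video/VoIP)":       (50, 2000),
}

for name, (users, kbps) in profiles.items():
    print(f"{name:32s} {users:4d} users x {kbps:5d} kbit/s")

peak_mbps = sum(users * kbps for users, kbps in profiles.values()) / 1000
print(f"Worst-case concurrent peak: {peak_mbps:.0f} Mbit/s")
```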

From a centralized storage perspective the need is to avoid I/O bottlenecks to ensure the user experience is not compromised. The key here is to understand the IOPS (I/O operations per second) characteristics of the working environment, and further, the ratio of disk reads to disk writes during the working day. This ratio will change during boot-up, logon, normal work rate and log-off times. For example, in a VDI environment, booting and logon of a Windows desktop will be high on disk reads, whereas a general working session can be high on disk writes, depending on the applications running.

Again, understanding the profiles of heavy, medium and light users can help the storage architect determine the right approach to take with regards to factors such as caching strategy and RAID level.
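The read/write ratio matters because every frontend write is amplified at the backend. Here is a minimal sketch of that standard conversion, using assumed workload figures; the point it illustrates is that a write-heavy steady state punishes parity RAID levels:

```python
# Frontend-to-backend IOPS conversion sketch; the workload figures and the
# choice of RAID levels here are assumptions for illustration only.

RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def backend_iops(frontend_iops: float, write_ratio: float, raid: str) -> float:
    """Reads pass through 1:1; each write costs several backend operations."""
    reads = frontend_iops * (1 - write_ratio)
    writes = frontend_iops * write_ratio * RAID_WRITE_PENALTY[raid]
    return reads + writes

# Assumed write-heavy steady-state VDI workload: 5,000 frontend IOPS, 80% writes
for raid in RAID_WRITE_PENALTY:
    print(f"{raid}: {backend_iops(5000, 0.8, raid):,.0f} backend IOPS")
```

With these assumptions RAID 10 needs around 9,000 backend IOPS while RAID 5 needs around 17,000, which is why write-heavy desktop workloads are often steered towards mirrored RAID or a caching tier rather than parity RAID.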

In summary, in determining the impact of client virtualisation on your network and storage infrastructure, I’d advise you begin by profiling and determining what constitutes an acceptable user experience. Use this information to determine the right virtualisation model for users. Take the time to read and understand the vendors’ sizing guides. And, above all, be sure to conduct a proof of concept or pilot that’s representative of the real-world operating environment to ensure you avoid any unwelcome surprises.

Andrew Buss, Service Director, Freeform Dynamics

Desktop virtualisation is all about centralising the various aspects of client computing such as processing, data transfers and storage. Taking a previously highly distributed architecture and condensing it has the potential for a massive impact on both the network and storage infrastructure.

Before any discussion can take place on the impact of desktop virtualisation on networking and storage, it is vital to recognise that there is no “one true virtual desktop” solution. Instead, there are several distinct types, such as shared server, dedicated blades or streamed desktops or applications. Each of these caters for a different need, and has a distinct effect on networking and storage. Trying to optimise for all of these can be as big a challenge as getting to grips with managing the “normal” desktop estate.

First up is the impact on networks, which tend to be in place for many years and have reached a level of performance and reliability that means they are largely invisible (even if the kneejerk response to many IT issues is to first blame the network). Networks have also developed into distinct architectures, with datacentre and campus environments catering for particular host types, applications and data flows.

Moving to a virtual desktop solution can have a major impact on the network. A classic case with VDI is that data flows that used to run between client PCs in the campus network and servers in the datacentre are now concentrated within the datacentre. This can require a refresh of datacentre networking and may render prior investment in the campus network obsolete.

Another issue is that desktop environments have become richer with uptake of content such as high definition sound and video. Far from cutting network traffic by moving to desktop virtualisation, this could begin to load up the network traffic again. And to throw more spanners in the works, with media content things like latency become important. Catering for this by implementing mechanisms like Quality of Service will up the cost and complexity of the network.

It’s often difficult to tell how things will evolve at the start of the project. Best practice in this area is still developing. There are a few predictive tools that can help with modelling to give a general steer, but often it is going to be about a phased rollout with monitoring and management to ensure things are going to plan. In many cases, scoping for a “worst case” scenario would be advisable given the difficulties and expense of upgrading the network should it become necessary. This makes highlighting the importance of the network and securing funding vital for long-term success.

Next up is storage, which is often a major hidden cost behind desktop virtualisation. This is critical, because many desktop managers are not all that familiar with backend storage platforms.

In many desktop virtualisation solutions, there is a doubling up on storage. Disks remain in the client machines, but storage is also required for the backend infrastructure. Server storage is usually a lot more expensive than client storage, so unless kept under control this can quickly cause the cost of desktop virtualisation projects to spiral.

One of the traps to avoid falling into is assuming that it’s going to be possible to use an existing storage platform to handle the desktop virtualisation load and therefore avoid the investment and risk of acquiring a new storage platform purely for desktop virtualisation.

Much of the feedback we’ve had from people who’ve tried this has been that it ends up being a major problem. It may be possible to use shared storage, but it will require significant testing to confirm. In most cases, there will be a need to budget for implementing a dedicated virtual desktop storage platform.

The hit caused by having to invest in server-grade storage can be steep, so it pays to have a plan of action to keep this to a minimum. It doesn’t make sense to centralise client computing and then have separate data stores for each PC, so good control of the build and image process is needed.

Rather than storing thousands of individual images that eat up space, a few master images can be used, with differences applied dynamically to make up each specific client image. Also, de-duplication technology and compression can reduce the space consumed still further.
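As a rough sense of scale, here is a hedged comparison of full per-user images against a handful of master images plus per-user deltas; all of the sizes are assumptions, and deduplication or compression would shrink both figures further:

```python
# Illustrative storage comparison; all sizes are assumptions and ignore the
# additional savings from deduplication and compression mentioned above.

DESKTOPS = 500
FULL_IMAGE_GB = 40   # assumed Windows image size per desktop
DELTA_GB = 3         # assumed per-user differencing / write-cache disk
MASTERS = 3          # assumed number of master (golden) images

full_clones_tb = DESKTOPS * FULL_IMAGE_GB / 1024
master_plus_deltas_tb = (MASTERS * FULL_IMAGE_GB + DESKTOPS * DELTA_GB) / 1024

print(f"Full clones:            {full_clones_tb:.1f} TB")
print(f"Master images + deltas: {master_plus_deltas_tb:.1f} TB")
```

Under these assumptions the master-image approach consumes well under a tenth of the space of full clones, which is the difference between an affordable storage bill and a runaway one.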

Above all though, investment in management tools and processes is critical. With desktop virtualisation, the workspace that users need to be productive becomes a service. Few users will accept a desktop virtualisation solution that is slow and unreliable, and this means that it needs to be proactively monitored to maintain service assurance. ®
