The magic storage formula for successful VDI? Just add SSDs

They're cheap, they're plentiful... why not?

Let them eat cache

Let's say that I put two 400GiB Diablo Memory Channel Storage (MCS) SSDs into each server and RAID 1-ed them. Assuming I had some remotely decent caching software I could then set aside 256GiB for read caching and 128GiB for write caching, and partition off 16GiB as additional write blocks for the wear-levelling algorithm.

The end result would be a truly impressive VDI server. Crank it up to a pair of 800GiB sticks and you have a 200-user server.
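A back-of-the-envelope sketch of that carve-up, in Python. The per-user hot-set figure is an assumption picked purely for illustration – swap in your own numbers – but it lands in the same ballpark as the 200-user claim above.

```python
# Rough carve-up of a mirrored MCS SSD for VDI caching.
# All figures are illustrative assumptions, not vendor guidance.

def carve_up(ssd_gib, read_frac=0.64, write_frac=0.32, spare_frac=0.04):
    """Split usable (post-RAID-1) SSD capacity into cache regions."""
    assert abs(read_frac + write_frac + spare_frac - 1.0) < 1e-9
    return {
        "read_cache_gib": ssd_gib * read_frac,    # 400GiB -> 256GiB
        "write_cache_gib": ssd_gib * write_frac,  # 400GiB -> 128GiB
        "wear_spare_gib": ssd_gib * spare_frac,   # 400GiB -> 16GiB
    }

def users_supported(read_cache_gib, hot_set_per_user_gib=2.5):
    """Assume each desktop needs ~2.5GiB of hot read data cached (a guess)."""
    return int(read_cache_gib // hot_set_per_user_gib)

if __name__ == "__main__":
    for ssd in (400, 800):   # usable GiB after mirroring a pair of sticks
        c = carve_up(ssd)
        print(ssd, c, "users ~", users_supported(c["read_cache_gib"]))
```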

Now picture for a moment that you run tiered caching.

Put the 8GiB that represents 95 per cent of all reads your desktop users will make into RAM. Put the rest of the reads onto the SSD. Cache your writes either at the array or at the server – depending on how much of what kind of SSD and what kind of caching software you have – and you have a VDI setup fast enough to make the gods themselves weep tiers* of pure IOPS.
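If you want to see the shape of that in code, here is a minimal two-tier read cache sketch – capacities are counted in blocks rather than GiB and a plain dictionary stands in for the disk array, purely to show how blocks get promoted from disk to SSD to RAM:

```python
from collections import OrderedDict

class LRUTier:
    """A fixed-capacity LRU map standing in for one cache tier."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)
        return self.entries[key]

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # evict the coldest block

class TieredReadCache:
    """RAM tier for the hottest blocks, SSD tier behind it, disk last."""
    def __init__(self, ram_blocks, ssd_blocks, backing_store):
        self.ram = LRUTier(ram_blocks)
        self.ssd = LRUTier(ssd_blocks)
        self.backing = backing_store             # e.g. the spinning-disk array

    def read(self, block_id):
        data = self.ram.get(block_id)
        if data is not None:
            return data                          # served straight from RAM
        data = self.ssd.get(block_id)
        if data is None:
            data = self.backing[block_id]        # miss: fetch from slow disk
            self.ssd.put(block_id, data)         # warm the SSD tier
        self.ram.put(block_id, data)             # promote into RAM
        return data

disk = {n: f"block-{n}" for n in range(1000)}    # stand-in for the array
cache = TieredReadCache(ram_blocks=8, ssd_blocks=64, backing_store=disk)
cache.read(42); cache.read(42)                   # second read comes from RAM
```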

What should also become fairly apparent is that virtually any of the modern software-defined storage offerings that have risen to prominence in the past five years should do a great job at VDI, assuming you use them properly.

SAN storage gateways such as DataCore's SANSymphony-V or Permabit's SANblox can do things like inline deduplication and compression, and (depending on the product) even add a cache layer to your existing SANs. It is probably the only way peddlers of traditional magnetic disk arrays will be able to offer remotely decent VDI storage in the short term.
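The reason inline dedupe is such a big deal for VDI is that hundreds of near-identical desktops hammer the same golden-image blocks. A toy content-addressed store – nothing like the real products, just the principle – shows why:

```python
import hashlib

class DedupingBlockStore:
    """Toy inline deduplication: identical blocks are stored once."""
    def __init__(self):
        self.blocks = {}    # fingerprint -> block data
        self.volume = {}    # logical block address -> fingerprint

    def write(self, lba, data):
        fp = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(fp, data)   # store genuinely new content only
        self.volume[lba] = fp              # the LBA just points at it

    def read(self, lba):
        return self.blocks[self.volume[lba]]

    def physical_blocks(self):
        return len(self.blocks)

# A hundred cloned desktops writing the same golden-image block
store = DedupingBlockStore()
golden = b"\x00" * 4096
for lba in range(100):
    store.write(lba, golden)
print(store.physical_blocks())   # 1 physical copy backs 100 logical writes
```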

If you prefer not to use SAN gateways – or your SAN vendor is a douche and decides that use of a SAN gateway will invalidate your warranty – you can probably get away with server-side caching solutions.

They should all be extremely helpful for VDI – even if they are just read caching – so take the time to run your pilots and proofs of concept, and make rational choices about the cost of the caching software. Some of the options out there fetch eye-watering prices.

Server-side caching is so viable in the real world that the flash manufacturers are buying up caching companies. Consider SanDisk's acquisition of FlashSoft and Fusion-io. SanDisk spent $1.1bn on Fusion-io – not something it would do if it didn't think caching was worthwhile.

The software rounds out what was an already impressive enterprise flash portfolio and is a good indication of the sort of consolidation the flash market is currently undergoing.

By the same token, most server SANs should do a great job. Certainly my experiments with Maxta thus far have proved the model valid.

At their core, server SANs are object stores that distribute storage across multiple servers. The vast majority of them are hybrid setups: you feed them traditional spinning disks as well as SSDs and poof! fast storage with room for cold bulk data and no need for a SAN.
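A crude sketch of the placement idea, with made-up node names and a simple hash standing in for whatever placement and tiering logic a real server SAN actually uses:

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical hosts
REPLICAS = 2                                       # keep two copies of each object

def placement(object_id, nodes=NODES, replicas=REPLICAS):
    """Pick which servers hold an object by hashing its ID onto the node list."""
    digest = int.from_bytes(hashlib.sha256(object_id.encode()).digest(), "big")
    start = digest % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

def tier_for(object_id, hot_objects):
    """Hybrid tiering: hot objects land on SSD, cold bulk data on spinning disk."""
    return "ssd" if object_id in hot_objects else "hdd"

hot = {"desktop-042/boot.vmdk"}
for obj in ("desktop-042/boot.vmdk", "archive/2012-backup.vmdk"):
    print(obj, placement(obj), tier_for(obj, hot))
```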

It is never quite so easy to pigeonhole companies in this space, of course. PernixData, for example, offers software that is just as complex as a server SAN in implementation, but doesn't actually store your data. It is just a big, complex write cache, but it absolutely does make things faster.

Fusion-io is another one straddling the line between server SAN and server-side caching, though it is far more application-focused than its rivals. Fusion-io is very interested in the I/O that it is caching, whereas most other software-defined storage plays – including server SANs – don't care.

Array vendors are not out of the game either. Hybrid vendors, such as Tintri, Tegile and Nimble, can all get the VDI job done, while all-flash vendors such as Pure, Kaminario and Skyera not only do the job but have a legitimate claim to winning the marketing turf wars about who can cram the most VDI instances into the smallest physical space.

Traditional array vendors such as EMC, NetApp, Dell and HP are all cramming SSDs into their systems as fast as they can, and as long as they aren't serving you a traditional magnetic disk-only array you will probably be able to make VDI work just fine.

The wrong fit

The bit nobody wants to talk about is that there is no one-size-fits-all. Buying a Tintri array doesn't mean you can run $Australia's worth of VDI on it.

Seagate's research on the average user won’t apply to your stable of high-intensity AutoCAD users. For that matter, storage is not the only bottleneck in VDI, and if you solve it you will just run across another.

Throwing SSDs at the VDI problem works, and works well, but you need to make sure you have enough flash to get the job done. And to do that, you need to profile your users.
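Profiling needn't be fancy. Something as simple as the sketch below – every number in it is an invented placeholder you would replace with figures from your own pilot – will tell you roughly how much flash and how many IOPS you are shopping for:

```python
# Illustrative user profiles; the real numbers must come from your own pilots.
PROFILES = {
    #                      (steady IOPS, working set GiB)
    "task worker":         (10,  1.5),
    "knowledge worker":    (25,  3.0),
    "AutoCAD power user":  (80, 12.0),
}

def flash_needed(head_count):
    """Sum steady-state IOPS and hot working set across the user mix."""
    iops = sum(PROFILES[kind][0] * n for kind, n in head_count.items())
    hot_gib = sum(PROFILES[kind][1] * n for kind, n in head_count.items())
    return iops, hot_gib

iops, hot_gib = flash_needed({"task worker": 120,
                              "knowledge worker": 60,
                              "AutoCAD power user": 20})
print(f"~{iops} steady IOPS, ~{hot_gib:.0f}GiB of hot data to keep on flash")
```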

You need to think long and hard about where the flash belongs, and that is more a factor of how the rest of your network is designed than anything else.

Talk to most VDI experts and they will tell you to make your VDI infrastructure separate from the rest of your network. This is because it is easier to stick to the hard maths and not have your other workloads infringe upon your VDI setup. They like certainty and easy consulting gigs; who wouldn't?

In the real world, however, it is never that simple. Storage will see mixed usage, be that array-based or server SAN-based. All-flash arrays make great sense here, as do some of the heavy-hitter hybrid plays.

Remember too that VDI usage really peaks only during logon and logoff. In mixed-usage environments it could make just as much sense to use server-side caching, even with a hybrid array present.

Depending on the amount of SSD in the hybrid array and the shape of your other workloads, the hybrid array could be spending most of its time caching server workloads instead of VDI ones, leaving your VDI logon and logoff traffic to go straight to the slow magnetic disks.
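To put a rough number on how spiky the logon storm is – every figure below is an assumption to be replaced with your own measurements:

```python
def logon_storm_iops(users, iops_per_logon=300, logon_seconds=30,
                     window_minutes=15):
    """Very rough peak-load estimate for a morning logon storm.

    Assumes logons are spread evenly across the window; a real storm
    clusters far more tightly, so treat this as a floor, not a ceiling.
    """
    concurrent = users * (logon_seconds / (window_minutes * 60))
    return int(concurrent * iops_per_logon)

print(logon_storm_iops(200))   # ~2,000 IOPS from logons alone on a 200-user box
```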

In today's storage world, SSDs are almost always the answer, but to figure out how many and what kind you need you have to ask knowledgeable questions.

We have the advantage today that SSDs are cheap and plentiful, and storage vendors are desperate. The storage wars are winding down, so reaching out to them for proof-of-concept tests is likely to get a positive response.

If you are considering VDI, now is the time to look beyond the preferred vendor list and see exactly how much bang you can get for your buck in the storage world. ®

*Geddit? Aw, come on, it's one crummy joke. It's not like it's punishment or anything…
