
On integrating flash arrays with server-side flash

Cache storage gets faster and faster

If you're buying flash storage today, you're doing it for speed. After all, you're not doing it to save money and you're definitely not rich enough to be doing it because you want to be green and save a few kilowatt-hours on your power bill.

With spinning disk, the disks themselves were probably the bottleneck in your SAN-based storage arrays. With flash, though, the drives are so fast that the surrounding storage infrastructure (the array controllers and the interconnect) becomes the weakest link: it's now slower than both the drives and the servers.

Hence there's a growing temptation to move at least part of the storage into the physical servers to put it close to the applications; it's then either a cache (in front of the shared storage to which data is written later) or a full first-tier storage medium (with the shared storage as second-tier storage).
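To make the cache-versus-tier distinction concrete, here's a minimal conceptual sketch in Python (the names are mine, not any vendor's): server-side flash acting as a write-back cache, serving reads locally where it can and destaging writes to the shared array later.

```python
# Conceptual sketch only: server-side flash as a write-back cache in front of
# shared storage. Class and method names are illustrative, not a vendor API.

class WriteBackFlashCache:
    def __init__(self, shared_storage):
        self.shared_storage = shared_storage   # stand-in for the SAN array
        self.local_flash = {}                  # fast, server-local copies
        self.dirty = set()                     # blocks not yet flushed

    def read(self, block):
        # Serve from local flash when possible; fall back to shared storage.
        if block in self.local_flash:
            return self.local_flash[block]
        data = self.shared_storage[block]
        self.local_flash[block] = data         # populate the cache on a miss
        return data

    def write(self, block, data):
        # Writes complete against local flash; the array catches up later.
        self.local_flash[block] = data
        self.dirty.add(block)

    def flush(self):
        # Destage dirty blocks to the shared array in the background.
        for block in self.dirty:
            self.shared_storage[block] = self.local_flash[block]
        self.dirty.clear()

san = {0: b"old"}                              # pretend shared array
cache = WriteBackFlashCache(san)
cache.write(0, b"new")                         # lands in local flash at once
cache.flush()                                  # written back to the array later
```

Run the same flash as a first tier instead and the flush step disappears: the local copy becomes the authoritative one, with the shared array holding only the colder data.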

The advantage of having the storage on the servers is obvious: it makes the applications super-fast, and if your particular OS has flash-specific optimisations you can take advantage of them when the storage is local (which you couldn't do with shared storage).

But of course the downside is that you end up with a load of wasted storage, because you have to over-specify the capacity you buy in every server. On the other side of the coin, the advantage of fast, shared flash storage is that all your servers benefit from better performance than they used to have, so long as you spend some money on making the interconnect perform well; the downside is that by improving the average case you starve the niche applications that rely on milking every last drop of performance from the flash.

The last paragraph could, of course, have been written about pretty well every storage type that's ever existed: it's the old compromise of speed against cost (with a big emphasis on cost – flash is currently a couple of orders of magnitude more expensive than traditional disk). There's one subtle difference, though, with today's technology.

Think about it. When you buy servers, what do you buy? And what do you put on them?

I do my best to avoid buying servers and running a server operating system on them (by which I mean Linux, Windows, AIX or some other stand-alone OS). If I buy a server it will generally become part of a VMware ESXi installation (other hypervisors are available – I just happen to prefer VMware) which means that at any point in time it'll be hosting dozens of virtual machines. So if it's got storage on board, that storage can be shared by all the VMs on that host.

That's not very resilient, of course, but that's not a great problem because I prefer not to buy individual servers: I tend towards chassis-based systems into which you stuff server and storage blades (each blade being the equivalent of a traditional stand-alone, single-box server). This gives you the potential to share the chassis' on-board storage between the various server blades in the box.

Of course a single chassis is still a potential point of failure, but the trade-off is worth it: the chassis also provides shared access to high-speed peripherals such as SAS-based storage adaptors and 10Gbit/s iSCSI links (which, if you're feeling so inclined, you can trunk with EtherChannel for added oomph; ten-gig links are already the connection of choice for many flash-based array vendors). And having all this in one shared environment is less expensive and easier to manage than a load of separate server boxes.
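For what it's worth, an EtherChannel-style bundle doesn't spray packets round-robin: each flow is hashed onto one member link, which preserves per-flow ordering and spreads the load only across many flows. A toy Python illustration of the idea (the addresses and link count are invented, and real switches do the hashing in hardware):

```python
import zlib

def pick_link(src_ip: str, dst_ip: str, links: int) -> int:
    # Hash the flow's endpoints onto one member link, EtherChannel-style.
    return zlib.crc32(f"{src_ip}->{dst_ip}".encode()) % links

# Three hosts talking to one array over a two-link bundle.
flows = [("10.0.0.11", "10.0.0.50"),
         ("10.0.0.12", "10.0.0.50"),
         ("10.0.0.13", "10.0.0.50")]

for src, dst in flows:
    print(f"{src} -> {dst} stays on link {pick_link(src, dst, links=2)}")
```

The corollary when sizing the links is that any single iSCSI session is capped at one member's bandwidth; the trunk pays off when plenty of hosts are hitting the array at once.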

So where will flash storage go? Will it gravitate to the server because you need all that speed next to the applications and the hardware guys have given you fab new ways to hook the storage in directly with the processors and memory? Or will it stick in the shared storage because it's the only place you can afford to put it?

The answer is that it'll do both, and that the caching algorithms and storage subsystems of the operating systems and virtualisation engines will continue to become cleverer and cleverer (just as they've done for years anyway).

Let's look at server-based storage first. If you're running hypervisor-based hosts (i.e. you're in a virtualised server world) the vendors are already banking on there being some solid-state storage sitting there in a directly accessible form.

In vSphere 5.5, for example, VMware has a funky new concept called vSphere Flash Read Cache, which pools a host's flash devices into a single usable entity (a virtual flash resource) from which per-VM read caches are carved. And even if you're running a traditional single-server setup on a physical box, we're seeing more and more optimisation techniques and SSD-specific drivers that can exploit the speed of SSDs whilst minimising wear from excessive writes.
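The shape of that pooling idea, stripped right down, is below. This is conceptual Python, not VMware's API; the device names, sizes and reservations are invented purely for illustration.

```python
# Conceptual sketch: several local flash devices pooled into one resource,
# with each VM disk reserving a slice to use as read cache. Not a vendor API.

class FlashPool:
    def __init__(self, devices):
        # devices: mapping of device name -> capacity in GB
        self.capacity_gb = sum(devices.values())
        self.allocated_gb = 0
        self.reservations = {}

    def reserve(self, vm_disk: str, size_gb: int) -> bool:
        # Carve a per-disk cache reservation out of the pooled capacity.
        if self.allocated_gb + size_gb > self.capacity_gb:
            return False                       # pool exhausted
        self.reservations[vm_disk] = size_gb
        self.allocated_gb += size_gb
        return True

pool = FlashPool({"ssd0": 400, "ssd1": 400})   # two local SSDs, one resource
pool.reserve("vm-sql01/disk1.vmdk", 64)
pool.reserve("vm-web02/disk1.vmdk", 16)
print(pool.allocated_gb, "GB of", pool.capacity_gb, "GB reserved")
```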

Alive and kicking

And on array-based storage, the multi-tier hierarchy is alive and kicking and will happily continue to exist for as long as spinning disk and SSD both continue to exist. Which, if you're wondering, will be the case for many years to come. Array-based storage gives you the ability to expand pretty much to your heart's content: if you run out of space in your array you simply bolt on another shelf and stuff in a bunch more disks.
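Under the covers, that tiering boils down to watching which blocks are busy and periodically shuffling them between flash and spinning disk. A bare-bones Python sketch of the idea (the threshold is invented and no vendor's algorithm is implied):

```python
# Bare-bones picture of array-side tiering: count accesses per block, promote
# the hot ones to flash on a schedule and demote anything that has gone cold.

from collections import Counter

class TieredArray:
    def __init__(self, promote_after=10):
        self.flash_tier = set()            # block IDs currently on SSD
        self.disk_tier = set()             # block IDs currently on HDD
        self.heat = Counter()              # accesses since the last rebalance
        self.promote_after = promote_after

    def touch(self, block):
        # Record an access; brand-new blocks land on spinning disk first.
        self.heat[block] += 1
        if block not in self.flash_tier:
            self.disk_tier.add(block)

    def rebalance(self):
        # Promote busy blocks, demote idle ones, then reset the counters.
        for block in list(self.flash_tier | self.disk_tier):
            hits = self.heat[block]
            if hits >= self.promote_after:
                self.disk_tier.discard(block)
                self.flash_tier.add(block)
            elif hits == 0 and block in self.flash_tier:
                self.flash_tier.discard(block)
                self.disk_tier.add(block)
        self.heat.clear()
```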

And the worry-mongers out there who keep banging on that the interconnects aren't fast enough clearly haven't noticed that the network technology guys keep turning out faster and faster versions of their protocols: so if the interconnect can't keep up now, it will be able to soon.

In short, then: flash-enhanced servers, with high-speed storage hooked directly into the processing and RAM hardware, will continue to exist. But just as with traditional storage, they'll remain confined to niche applications or serve as cache storage in multi-tiered setups. The rest of our applications will continue to exploit the economies, resilience and convenience of shared storage.

Quite frankly it's the only sensible thing to do – and because of this the vendors are rather conveniently and consistently making it a faster and faster thing to do too. ®

Dave Cartwright has worked in most aspects of IT in his 20-year-or-so career, though the things he claims to be quite good at are strategy, architecture, integration and making broken stuff become unbroken. His main pastime is shouting at people who try to install technology without considering whether it actually fits the business or the requirement. Dave is a Chartered Engineer, Chartered IT Professional and Fellow of the BCS, and lives with his family on a small rock in the English Channel.
