
Real talk: Why are you hanging on to that non-performant disk?

Tiers stream... down your face. When you lose something you cannot replace

Analysis Generations of change have produced layers of storage that are a challenge to manage.

When I was a boy, storage was easy. You had servers with internal hard drives that had capacities of tens of megabytes, and that was it. It was inefficient – unused space on one server couldn’t be used by another – and it was expensive, since the hard drives themselves cost a small fortune (or for that matter a big one).

Fortunately, things evolved – particularly with the advent of Fibre Channel in the late 1990s, which made shared storage a reality. But it had its downsides: SAN kit was expensive (particularly the SAN switches) and Fibre Channel networking was non-trivial to comprehend.

The costs were made manageable in two ways. First, SANs brought value by providing resilience against drive failure and by minimising wasted space. Second, vendors’ R&D produced cunning compression and de-duplication technology that optimised drive usage.
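
To get a feel for why compression earns its keep, here’s a minimal Python sketch using the standard library’s zlib module – the repetitive log-style payload is invented purely for illustration, but it shows how well duplicate-heavy data squashes down:

```python
import zlib

# Log-style data is highly repetitive, so it compresses well.
# This sample payload is made up purely for illustration.
record = b"2017-06-01 12:00:00 INFO storage-node-01 heartbeat OK\n"
payload = record * 10_000

compressed = zlib.compress(payload, 6)
print(f"raw: {len(payload):,} bytes")
print(f"compressed: {len(compressed):,} bytes")
print(f"ratio: {len(payload) / len(compressed):.0f}:1")
```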

And yet, storage continued to be perceived as the most expensive part of IT infrastructure. It wasn’t just the purchase cost: provisioning of facilities, support staff and so on, plus hidden items like power and cooling – all of it had to be paid for.

These costs, combined with the effort of swapping out existing equipment, have created another issue: the rise of hierarchical storage models. Rather than replace old or slower storage with new, faster systems, it’s common to deploy the new kit for high-performance apps and demote the older stuff to tasks like file-and-print that don’t need the performance. This can help you eke a few more years’ use out of the older infrastructure, but it leaves you with a multi-layer setup that becomes harder to manage as it ages and the number of layers increases.
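
The demotion policy itself is simple enough to sketch. Here’s a toy Python version – the /mnt/fast and /mnt/slow mount points are hypothetical placeholders, it assumes file access times are being tracked (not a given on noatime mounts), and real hierarchical storage products do this inside the array rather than in a script:

```python
import os
import shutil
import time

FAST_TIER = "/mnt/fast"    # hypothetical mount points
SLOW_TIER = "/mnt/slow"
MAX_IDLE = 90 * 24 * 3600  # demote anything untouched for 90 days

now = time.time()
for name in os.listdir(FAST_TIER):
    path = os.path.join(FAST_TIER, name)
    if not os.path.isfile(path):
        continue
    idle = now - os.stat(path).st_atime  # seconds since last access
    if idle > MAX_IDLE:
        shutil.move(path, os.path.join(SLOW_TIER, name))
        print(f"demoted {name} after {idle / 86400:.0f} idle days")
```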

The upshot has been a world where storage has become relatively complex: hardware that only the storage specialists really understand, and that tends to live on past its official lifetime.

On top of this has come cloud, which has complicated things further: instead of buying more and more hardware we can now spin up cloud-based storage and use it as a further tier for high-volume, lower-performance offsite storage. More layers, more complexity, making management a challenge.

It’s time, then, to do something about the resulting complexity of our storage. Here’s how.

RIP Fibre Channel?

First of all, storage-oriented networking. Fibre Channel has kept up nicely with business requirements over the years: from 1Gbit/s in 1997 to 16Gbit/s by 2011 and onward to 128Gbit/s. But compare this with Ethernet, and the latter has stayed ahead: 100Gbit/s has been with us for a while now and 400Gbit/s is on the cards for 2018. IP-based Ethernet networks are more widely understood by network managers, and with protocols such as iSCSI and FCoE there’s no reason why you can’t use them for your storage infrastructure.
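
To put those line rates in context, here’s a back-of-envelope Python sketch of how long a bulk data move would take at each speed – the 10TB volume size is an arbitrary assumption, and it ignores protocol overhead and whether the disks at either end can actually keep the pipe full:

```python
# Rough transfer times for a 10TB volume at the line rates above.
# Ignores protocol overhead and assumes the storage can saturate
# the link, which real kit often can't.
VOLUME_BITS = 10 * 10**12 * 8  # 10TB in bits

for name, gbit in [("16Gb FC", 16), ("100GbE", 100), ("400GbE", 400)]:
    hours = VOLUME_BITS / (gbit * 10**9) / 3600
    print(f"{name:>8}: {hours:.1f} hours")
```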

Maximising what you store

Centralising your storage brings opportunities to de-duplicate data. While the all-eggs-in-one-basket problem is a tangible one, this is dealt with by the built-in replication functions of all the major platforms – and generally looks after itself without needing someone clever to tend it constantly. The more copies of the same thing you’re trying to store in a single place, the more space you can save thanks to de-duplication.
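
The core trick is content addressing: hash each block, and store any given block only once. Here’s a minimal Python sketch with fixed 4KB chunks and SHA-256 – real arrays use variable-length chunking and far cleverer indexing, but the principle is the same:

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """Store each distinct chunk once, keyed by its SHA-256 digest."""
    store = {}   # digest -> chunk (written to disk once)
    recipe = []  # ordered digests needed to rebuild the data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

# Ten copies of the same two-chunk pattern dedupe down to two chunks.
data = (b"A" * 4096 + b"B" * 4096) * 10
store, recipe = dedupe(data)
print(f"logical chunks: {len(recipe)}, unique chunks stored: {len(store)}")
assert b"".join(store[d] for d in recipe) == data  # lossless rebuild
```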

I once asked a VMware trainer about de-duplication and data compression on the storage layer of an ESXi setup: should we use VMware’s implementation or the storage vendor’s? The answer: both – if the storage will do a lot for you then great, and if you get a bonus extra bit of saving from the hypervisor layer then that’s nice too.

Farewell to tiers?

Tiered storage is an interesting beast: by tiering your storage – in an on-premises setup, anyway – you’re basically compromising performance. Imagine phoning your supplier: “Hi, I’d like to buy some really slow storage, please, to augment my nice fast stuff.” “Certainly sir,” they’d reply. “How bad do you want it?” You wouldn’t buy slow storage, so why compromise performance by hanging on to old, non-performant disk?

Yes, SSD storage is still considerably more expensive than spinning disk. But it’s not so expensive as to be unaffordable. If you can afford a new car you’re not going to buy a beaten-up second-hand vehicle, because you want to maximise the value you get from the funds you have.

And in an SSD world, one of the things you want to maximise is the balancing of load across the estate. Solid-state disks have a finite lifetime because each cell has a limit on the number of times it can be written before it stops working properly. So, if you have three sets of SSDs that you bought at different times, you’re unlikely to use the traditional spinning-disk concept of slow, medium and fast layers; instead you’ll have fast, faster and really fast.

If you have multiple selections of storage that can all serve the necessary level of input/output operations per second (IOPS), why not have a single tier, so the controllers can share the load and prolong all the disks’ lives accordingly? After all, unless you have some mightily high-powered apps you’re not going to hit anything like the bottlenecks of a single drive or single shelf that you’d have experienced with traditional disks. Instead you can just tell the storage layer to deliver the appropriate level of IOPS to each presented volume and leave it to figure out how to deliver the correct quality-of-service level for each volume.
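
How an array enforces that per-volume promise is proprietary firmware, but the contract itself is easy to picture. Here’s a toy token-bucket limiter in Python – the volume names and IOPS figures are invented, and it runs in-process rather than in a controller:

```python
import time

class IopsLimiter:
    """Toy token bucket capping a volume at `iops` operations/sec."""

    def __init__(self, iops: float):
        self.rate = iops
        self.tokens = iops  # allow up to a one-second burst
        self.last = time.monotonic()

    def acquire(self):
        """Block until one I/O token is available, then consume it."""
        now = time.monotonic()
        self.tokens = min(self.rate,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            time.sleep((1 - self.tokens) / self.rate)  # wait for refill
            self.last = time.monotonic()
            self.tokens = 0  # the newly accrued token is consumed here
        else:
            self.tokens -= 1

# Hypothetical volumes with different QoS promises.
fileshare = IopsLimiter(iops=500)
database = IopsLimiter(iops=20_000)
fileshare.acquire()  # call before each I/O to stay within the cap
```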

Hello to the cloud

There is, however, one aspect of tiering – or at least replication between logically distinct volumes – that will remain. And that’s where you want to use the cloud for virtual storage. Even here, though, the vendors are doing the work for you: you’re spoilt for choice if you want to buy an appliance that’ll enable your on-premises storage and your cloud-based volumes to interact with each other, and with your servers and apps, as if they were best buddies sat next to each other. And it’s absolutely no surprise that the cloud providers are implementing and acquiring technologies that make this interaction simpler for the system administrator.
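
Strip the appliance away and the raw shape of “cloud as a cold tier” is just object storage. Here’s a sketch using AWS S3 via the boto3 library as one example – the bucket and file names are hypothetical, and credentials are assumed to come from the usual AWS configuration chain:

```python
import boto3  # pip install boto3

s3 = boto3.client("s3")

# Push cold data off-site; bucket and paths are hypothetical.
s3.upload_file(
    Filename="/mnt/slow/archive-2016.tar",
    Bucket="example-cold-tier",
    Key="archive/archive-2016.tar",
)

# Recall is the same API in reverse.
s3.download_file("example-cold-tier",
                 "archive/archive-2016.tar",
                 "/tmp/archive-2016.tar")
```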

Under the hood

So are you going to leap up and change your organisation’s storage infrastructure tomorrow for something simplified, unified, homogeneous and – therefore – easier to manage?

Of course not. What you can do, though, is evolve over time, each time you need to make a change. If you buy new storage, make it SSD – and try to move sooner rather than later to an SSD-only on-premises world so you can get the most from the hardware’s performance and de-dupe/compression features. If you’re left with two tiers (SSD and traditional disk), consider a cloud alternative for the latter, and use storage appliances to make your life easy.

If you currently use Fibre Channel and are buying new storage, ask why you’re sticking with Fibre Channel – and when you can’t think of a reason, embrace Ethernet-connected storage instead.

This might look like yet more layering and thus complexity but, fortunately, vendors are making management easier. Compression, de-duping, quality of service, replication, encryption-at-rest – you don’t have to worry about those. Enabling something like encryption has evolved into ticking an “Enable encryption” box and hitting “OK”.
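
For a sense of what that tickbox hides, here’s a miniature Python sketch using the third-party cryptography library’s Fernet recipe – real arrays do this with self-encrypting drives or controller-level AES, and keep the key in a KMS or HSM rather than a variable:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # real kit keeps this in a KMS/HSM
box = Fernet(key)

block = b"customer data heading for disk"
stored = box.encrypt(block)  # what actually lands on the media
assert box.decrypt(stored) == block  # transparent on the way back
```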

Tiered and layered storage won’t go away overnight. Fortunately, the means to make management simpler and more efficient are slipping in under the hood. And that’s a good thing. ®
