Flash is dead ... but where are the tiers?

Storage tiering needs to be separate from arrays

Storagebod

Flash is dead: it's an interim technology with no future. Yet it continues to be a hot topic and technology. I suppose I ought to qualify that: flash will be dead in the next five to 10 years, and I'm talking about the use of flash in the data centre.

Flash is the most significant improvement in storage performance since the introduction of the RAMAC in 1956. Disks really haven't improved that much, and although we have had various kickers which have allowed us to improve capacity, at the end of the day they are mechanical devices and are limited.

15k RPM disks are pretty much as fast as you are going to get, and although there have been attempts to build faster spinning stuff, reliability, power and heat have really curtailed these developments.

But we now have a storage device which is much faster and has very different characteristics to disk, and this introduces a different dynamic to the market. At first, the major vendors tried to treat flash as just another type of disk. Then various start-ups began to question that and suggested it would be better to design a new array from the ground up and treat flash as something new.

But what if they are both wrong?

Storage tiering has always been something to which people pay lip service, but no one has ever really done it with a great deal of success. When all you had was spinning rust, the benefits were less realisable - it was hard work and vendors did not make it easy. They certainly wanted to encourage you to use their more expensive tier 1 disk, and moving data around was hard.

But then flash came along, with an eye-watering price point. The vendors wanted to sell you flash, but even they understood that this was a hard sell at the sort of prices they wanted to charge.

So, storage tiering became hot again - and now we have the traditional arrays with flash in and the ability to automatically move data around the array. This appears to work with varying degrees of success but there are architectural issues which mean you never get the complete performance benefit of flash.

And then we have the start-ups who are designing devices which are flash-only - tuned for optimal performance and with none of the compromises which hamper the more traditional vendors. Unfortunately, this means building silos of fast storage, and everything ends up sitting on this still-expensive resource. When challenged about this, the general response from the start-ups is that tiering is too hard and you should just stick everything on their arrays. Well, obviously they would say that.

This is why I say flash is an interim technology and will be replaced in the next five to 10 years with something faster and better. It seems likely that spinning rust will hang around for longer and we are heading to a world where we have storage devices with radically different performance characteristics. With the growing data explosion, putting everything on a single tier is becoming less feasible and sensible.

We need a tiering technology that sits outside of the actual arrays, so that the arrays can be built optimally to support whatever storage technology comes along. Where would such a technology live? Hypervisor? Operating System? Appliance? File-System? Application?

I would prefer to see it live in the application, with applications handling the lifecycle of their data correctly, but that'll never happen. So it'll probably have to live in the infrastructure layer, where ideally it would handle a heterogeneous, multi-vendor storage environment - and it may well break traditional storage concepts such as the logical unit number (LUN) and other sacred cows.
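To make the idea concrete, here is a minimal sketch of what such an array-independent policy layer might look like. Everything in it - the tier names, the thresholds, the move_extent callback - is a hypothetical assumption for illustration, not any vendor's actual API; the point is only that the policy needs access statistics and a way to ask the underlying arrays to move data, and never needs to care whether the backing media is flash, disk or whatever comes next.

import time
from dataclasses import dataclass, field

TIERS = ["fast", "capacity", "archive"]  # fastest to slowest; names are illustrative

@dataclass
class Extent:
    extent_id: str
    tier: str = "capacity"
    hits: int = 0                                   # accesses in the current window
    last_access: float = field(default_factory=time.time)

    def record_access(self) -> None:
        self.hits += 1
        self.last_access = time.time()

def choose_tier(extent: Extent, now: float) -> str:
    """Classify an extent by access temperature, ignoring the backing media."""
    idle = now - extent.last_access
    if extent.hits >= 100 and idle < 3600:          # hot: busy within the last hour
        return "fast"
    if idle > 30 * 24 * 3600:                       # cold: untouched for a month
        return "archive"
    return "capacity"

def rebalance(extents, move_extent, now=None) -> None:
    """One pass of the policy: move misplaced extents, then reset the counters."""
    now = now or time.time()
    for ext in extents:
        target = choose_tier(ext, now)
        if target != ext.tier:
            move_extent(ext.extent_id, ext.tier, target)  # array-specific mover, supplied by each back end
            ext.tier = target
        ext.hits = 0                                 # start a fresh observation window

if __name__ == "__main__":
    extents = [Extent("vol1/0001"), Extent("vol1/0002")]
    for _ in range(150):
        extents[0].record_access()                   # make the first extent hot
    rebalance(extents, lambda eid, src, dst: print(f"move {eid}: {src} -> {dst}"))

Swap the thresholds, add more tiers or plug in a different mover per array, and the policy itself doesn't change - which is exactly why it can sit outside the boxes.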

But in order to support a storage environment that is going to look very different, or at least should, we need someone to come along and start again. There are various stop-gap solutions in the storage virtualisation space, but these still enforce many of the traditional tropes of today's storage.

I can see many vendors reading this and muttering: "Hierarchical storage management? It’s just too hard!" Yes, it is hard, but we can only ignore it for so long. Flash was an opportunity to do something, mostly squandered now, but you’ve got five years or so to fix it.

The way I look at it, that’s two refresh cycles - and it’s going to become an RFP question soon. ®
