Flash is dead ... but where are the tiers?

Storage tiering needs to be separate from arrays

Storagebod: Flash is dead: it's an interim technology with no future. And yet it continues to be a hot topic and a hot technology. I suppose I ought to qualify that: flash will be dead in the next five to 10 years, and I'm talking about the use of flash in the data centre.

Flash is the most significant improvement in storage performance since the introduction of the RAMAC in 1956. Disks really haven't improved that much, and although various kickers have allowed us to improve capacity, at the end of the day they are mechanical devices with mechanical limits.

15k RPM disks are pretty much as fast as you are going to get, and although there have been attempts to build faster-spinning drives, reliability, power and heat have curtailed those developments.

But we now have a storage device which is much faster and has very different characteristics to disk, and as such, this introduces a different dynamic to the market. At first, the major vendors tried to treat flash as just another type of disk. Then various start-ups began to question that and suggested that it would be better to design a new array from the ground up and treat flash as something new.

But what if they are both wrong?

Storage tiering has always been something to which people pay lip service, but no one has ever really done it with a great deal of success. And when you had spinning rust, the benefits were less realisable - it was hard work and vendors did not make it easy. They certainly wanted to encourage you to use their more expensive tier 1 disk, and moving data around was hard.

But then flash came along, with an eye-watering price point. The vendors wanted to sell you flash, but even they understood that this was a hard sell at the sort of prices they wanted to charge.

So, storage tiering became hot again - and now we have the traditional arrays with flash in and the ability to automatically move data around the array. This appears to work with varying degrees of success but there are architectural issues which mean you never get the complete performance benefit of flash.
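The auto-tiering in these hybrid arrays generally works by tracking I/O "heat" per extent and periodically promoting the hottest extents to a small flash tier. A minimal sketch of the idea (names and thresholds are illustrative, not any vendor's actual algorithm):

```python
# Hypothetical sketch of extent-based auto-tiering: count I/Os per extent
# over a measurement window, then promote the hottest extents to a
# fixed-size flash tier. Everything else stays on spinning disk.
from collections import Counter

class AutoTier:
    def __init__(self, flash_capacity=2):
        self.flash_capacity = flash_capacity  # flash tier size, in extents
        self.heat = Counter()                 # I/O count per extent this window
        self.flash = set()                    # extents currently on flash

    def record_io(self, extent):
        self.heat[extent] += 1

    def rebalance(self):
        # Promote the hottest extents; clear the counters for a fresh window.
        hottest = [e for e, _ in self.heat.most_common(self.flash_capacity)]
        self.flash = set(hottest)
        self.heat.clear()
        return self.flash

tier = AutoTier()
for extent in ["a", "a", "a", "b", "b", "c"]:
    tier.record_io(extent)
print(tier.rebalance())  # the two hottest extents, "a" and "b", land on flash
```

The architectural catch mentioned above shows up even in this toy: promotions only happen at rebalance time, so a suddenly hot extent serves its first burst of I/O from disk.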

And then we have the start-ups who are designing devices which are flash-only - tuned for optimal performance and free of the compromises which hamper the more traditional vendors. Unfortunately, this means building silos of fast storage, and everything ends up sitting on this still-expensive resource. When challenged about this, the general response you get from the start-ups is that tiering is too hard - just stick everything on their arrays. Well, obviously they would say that.

This is why I say flash is an interim technology and will be replaced in the next five to 10 years with something faster and better. It seems likely that spinning rust will hang around for longer and we are heading to a world where we have storage devices with radically different performance characteristics. With the growing data explosion, putting everything on a single tier is becoming less feasible and sensible.

We need a tiering technology that sits outside of the actual arrays, so that the arrays can be built optimally to support whatever storage technology comes along. Where would such a technology live? Hypervisor? Operating System? Appliance? File-System? Application?

I would prefer to see it live in the application and have applications handle the life of their data correctly, but that’ll never happen. So it’ll probably have to live in the infrastructure layer and ideally it would handle a heterogeneous multi-vendor storage environment, where it may well break the traditional storage concepts of a logical unit number (LUN) and other sacred cows.
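One way an infrastructure-layer tier might look, sketched very roughly: a policy engine that maps each object's observed access rate to one of several heterogeneous backends, independent of any single array. All names and thresholds here are hypothetical, purely to illustrate the shape of the thing:

```python
# Hypothetical sketch of array-independent tiering: a policy table maps an
# object's access rate (IOPS) to a backend from different vendors. The
# backend names and thresholds are invented for illustration.
POLICY = [
    (100.0, "vendor-a-flash"),    # >= 100 IOPS: fastest tier
    (10.0,  "vendor-b-disk"),     # >= 10 IOPS: mid tier
    (0.0,   "vendor-c-archive"),  # everything else: cold tier
]

def place(iops: float) -> str:
    """Return the backend that should hold an object with this access rate."""
    for threshold, backend in POLICY:
        if iops >= threshold:
            return backend
    return POLICY[-1][1]  # negative rates fall through to the cold tier

print(place(250.0))  # vendor-a-flash
print(place(3.5))    # vendor-c-archive
```

Note that nothing in the policy knows about LUNs or any one array's internals, which is exactly why such a layer could break those sacred cows.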

But in order to support a storage environment that is going to look very different, or at least should look very different, we need someone to come along and start again. There are various stop-gap solutions in the storage virtualisation space, but these still enforce many of the traditional tropes of today's storage.

I can see many vendors reading this and muttering: "Hierarchical storage management? It’s just too hard!" Yes, it is hard, but we can only ignore it for so long. Flash was an opportunity to do something, mostly squandered now, but you’ve got five years or so to fix it.

The way I look at it, that’s two refresh cycles - and it’s going to become an RFP question soon. ®
