Flashman and the Mountain of Disk
Data, data everywhere and only a tiny drop of SSD
Flash, flash and more flash seems to be the order of the day from all vendors; whether that is flash today or flash tomorrow; whether it’s half-baked, yesterday’s left-overs rehashed or an amuse-bouche; 2013 is all about the flash.
Large and small, the vendors all have a story to tell, but flash still makes up a tiny amount of the total capacity shipped and is a drop in the ocean even on a revenue basis. There doesn’t even seem to be a huge amount of consensus as to how it should be deployed: is it a local cache, a networked cache or an all-flash array? Yes to all of the above seems to be the answer. Storage is making the shift from mechanical to solid state and finally should be able to keep up with the servers of today. Well, at least until we change to optical computing or something new.
As with the shift from mechanical computing machines, the whole market is in flux and I don’t see anyone who is definitely going to win. What I see is a whole lot of confusion; a focus on stuff and a focus on hype. Data still finds itself in siloed pools and until the data management problem is solved, with data flowing between compute environments to be re-purposed and re-used simply and effectively, computing in general will continue to be hindered.
In future, data will continue to duplicate and replicate. I see people selling the power efficiency of flash, but the overall estate will be even less power efficient, because it is highly likely that all those mechanical disk arrays will remain in place and only the active data-set will live on flash. Despite the wishful thinking of some senior sales-guys, few people are going to rip out their existing disk-estate and replace it entirely with flash any time soon.
So, I may be able to replace a few disks, but data growth currently means that any such saving is far outstripped by the rack-loads of SATA that the average enterprise is having to put in place.
Whilst I continue to read articles full of hyperbole about speeds and feeds, of features rather than usage models, I simply see a faster disk adding new complexity to an already overly complex and fragile environment.
So let's see some proper data-management tools and progress in that area! Please? ®