Don't let the SAN go down on me: Is the storage array on its way OUT?
You can't hide behind the cache anymore
Storagebod
With EMC saying it plans to put VMAX into the "capacity tier"* and suggesting that the traditional SAN can no longer deliver the performance required, are we finally beginning to look at the death of the storage array?
The storage array as a shared monolithic device came about almost directly as the result of distributed computing; the necessity for a one-to-many device was not really there when the data-centre was dominated by the mainframe. And yet as computing has become ever more distributed, the storage array has begun to struggle more and more to keep up.
Magnetic spinning platters of rust have barely increased in speed in a decade or more, even as their capacity has grown ever bigger. Storage arrays have become denser and denser from a capacity point of view, yet real-world performance simply has not kept pace. Ever-larger caches have helped to hide some of this, and SSDs have helped too – but to what degree?
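To put the capacity-versus-performance divergence in rough numbers: a 7,200rpm drive delivers somewhere in the region of 80 random IOPS whether it holds half a terabyte or four. The figures below are illustrative datasheet-style assumptions, not from the article, but a quick sketch shows why doubling drive size keeps halving the IOPS available per terabyte stored:

```python
# Illustrative sketch: random IOPS per spindle is roughly flat
# (~80 for a 7.2k drive is an assumed, typical figure), so
# IOPS per TB collapses as drives get bigger.
IOPS_PER_SPINDLE = 80  # assumed figure for a 7.2k SATA drive

for capacity_tb in (0.5, 1.0, 2.0, 4.0):
    iops_per_tb = IOPS_PER_SPINDLE / capacity_tb
    print(f"{capacity_tb:>4} TB drive: {iops_per_tb:6.1f} IOPS/TB")
# A 4 TB drive offers one-eighth the IOPS per TB of a 500 GB drive.
```

The same spindle count that once comfortably served a workload now sits behind eight times the data, which is exactly the gap that cache and SSD tiers are being asked to paper over.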
It also has not helped that the plumbing for most SANs is Fibre Channel – esoteric, expensive and ornery – so the image of the storage array is not a good one.
Throw in ever-increasing compute power and incessant demands for more data processing, coupled with a corporate-scale attitude to data-hoarding that would make even the most OCD among us look relatively normal.
Then add in the potential for storage arrays to become less reliable and more vulnerable to real data loss as RAID becomes less and less of a viable data-protection methodology at scale.
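The RAID-at-scale problem is easy to sketch with back-of-the-envelope arithmetic. Assuming a typical datasheet unrecoverable read error (URE) rate of one per 10^14 bits (my assumption, not a figure from the article), a RAID 5 rebuild that must read every surviving drive in full starts to look like a coin flip once drives reach multi-terabyte sizes:

```python
# Hedged illustration: probability of hitting at least one
# unrecoverable read error (URE) during a RAID 5 rebuild.
# URE rate of 1e-14 per bit and 4 TB drives are assumed,
# datasheet-typical figures, not from the article.
import math

def rebuild_failure_prob(surviving_drives: int, capacity_tb: float,
                         ure_per_bit: float = 1e-14) -> float:
    """Chance of >=1 URE while reading every surviving drive in full."""
    bits_read = surviving_drives * capacity_tb * 1e12 * 8
    # Poisson approximation of 1 - (1 - p)^bits for tiny p
    return 1 - math.exp(-ure_per_bit * bits_read)

# Seven-drive RAID 5 of 4 TB disks: a rebuild reads the six survivors
print(round(rebuild_failure_prob(6, 4.0), 2))  # -> 0.85
```

An 85 per cent chance of stumbling over an unreadable sector mid-rebuild is why dual-parity schemes and distributed protection such as XIV's look increasingly sensible as capacities climb.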
Cost and complexity, combined with a sense of unease about the future, mean that storage must change. So what are we seeing?
A rebirth in DAS? Or perhaps simply a new iteration of DAS?
From PernixData to ScaleIO to clustered filesystems such as GPFS, the heart of the new DAS is the shared-nothing cluster. Ex-Fusion-io man David Flynn appears to be doing something to pool storage attached to servers, and you can bet that there will be a flash part to all this.
We are going to have a multitude of products, interoperability issues like never before, and implementation and management headaches... Do you implement one of these products or many? What happens if you have to move data around between these various implementations? Will they present as a file system today? Are they looking to replace current file systems? I know many sysadmins who will cry if you try to take the VERITAS File System away from them.
What does data protection look like? I must say that IBM's XIV data-protection methods which were scorned by many (me included) look very prescient at the moment (still no software XIV though? What gives, Big Blue?).
And then there is the application-specific nature of much of this storage. So many startups are focused on VMware and providing storage in clever ways to vSphere... when VMware's storage roadmap looks so rich and so squarely aimed at taking that market, is this wise?
The noise and clamour from the small and often quite frankly under-funded startups is becoming deafening – although I’ve yet to see a compelling product which I’d bet my business on. The whole thing feels very much like the early days of the storage array – it’s kind of fun really. ®
* Underneath the "performance tier" aka "the computer-affinity tier" in the EMC conception of the storage world – for an explanation of this EMC jargon, see here.