Location, location: Storage controller functions get moving
Shove it in the server, or bung it in the array?
Opinion Server virtualisation luvvies are looking askance at expensive storage arrays and saying: "Pah! Run the storage controller functions as a system app in a virtual server and use JBODs. That's the way to use commodity hardware."
This is the approach of stealthy startup Zerto and also Xiotech. Move the array controller functions up the stack and have the virtualised server processor do the heavy lifting while the storage array reads and writes data and looks after drive failures and other low-level storage stuff.
We could note that IBM's SVC and the virtualising storage controllers offered by NetApp (V-Series) and HDS (VSP) are a halfway house, with the storage controller function running in a separate functional box between the servers and the actual storage arrays. The fact that IBM is coalescing the SVC into storage arrays (witness the Storwize V7000) doesn't affect this point.
Zerto's CEO presents this as a move from hardware to software. I think that can be disputed, as this is not an argument about which processor architecture runs the storage array functions – we are all x86 now – but about where the processor cycles are located: server or storage controller?
Which is the best location for running storage functions such as replication, backup, encryption and array clustering? Let's add thin provisioning and deduplication: the storage world is our oyster here.
We can say that moving the array controller functions into the server worsens communications between the server and the storage. Instead of the relatively few high-level requests and data flows a traditional storage array controller deals in, we get lots of low-level requests and data flows when the array controller function runs in the server as just another virtual machine, using up the server's communication resources.
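The traffic difference can be sketched with a toy model. A RAID-5 small write costs four disk operations (read old data, read old parity, write new data, write new parity) – a standard figure, though the function and numbers here are illustrative assumptions, not anything from the vendors mentioned:

```python
# Toy model of the I/O-amplification argument.
# With a traditional array, the server sends one high-level write
# request and the array performs the disk I/Os internally. With the
# controller running as a VM in the server, every underlying disk I/O
# crosses the server's own storage links.
# The factor of 4 is the classic RAID-5 small-write penalty; the
# function name and scenario are illustrative assumptions.

RAID5_SMALL_WRITE_DISK_IOS = 4  # read data, read parity, write data, write parity

def wire_requests(logical_writes: int, controller_in_server: bool) -> int:
    """Requests crossing the server's external links for a burst of
    small random writes, under this toy model."""
    if controller_in_server:
        # The controller VM drives each disk in the JBOD directly.
        return logical_writes * RAID5_SMALL_WRITE_DISK_IOS
    # The array controller hides the disk I/Os: one request per write.
    return logical_writes

if __name__ == "__main__":
    burst = 1000
    print(wire_requests(burst, controller_in_server=False))  # 1000
    print(wire_requests(burst, controller_in_server=True))   # 4000
```

A crude model, but it shows why the same workload generates several times more wire traffic once the controller sits in the server rather than in front of the disks.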
For a storage start-up company it is less expensive to develop your product if the controlling code runs as a virtual machine (VM) on a server. Then you do not have to source your own controller hardware and worry about how that links to storage enclosures in the array. You can buy enclosures that are just bunches of disks (JBODs) and have them controlled by a storage controller VM that is integrated with the hypervisor and plays second fiddle to its storage management functions.
This is less costly to develop than the alternative and can be presented as being in tune with virtualisation and with the notion that commodity hardware will kill proprietary hardware.
Yes, well, it already has. Apart from a few high-end proprietary-hardware holdouts around the edges, virtually all storage arrays run on Intel now. So let's not pretend this is a move from hardware to software; it isn't. And let's not pretend it's a move from proprietary hardware to commodity x86; that's already happened.
It is an argument about how you price storage controller value. The location of the x86 processor that runs that code is really not germane to this. Deliver the best storage controller software in the world and, whether it runs as a VM in the server, as controlling code in a front-end processor (SVC, VSP, V-Series), or as code in the array controller, you can still charge big bucks for it.
By having it run as a VM looking after JBODs, the overall storage array cost can be cut – and you can present your way as the low-cost way. But that will have nothing to do with any inherent advantage of running as a VM; you will simply have a lower-cost development model. However you choose to interface to VMware, and however compliant you choose to be with VMware storage management, you can do this with storage controller functions running as a VM, in a front-end box or in the array. It simply doesn't matter.
We're like storage realtors: the argument at heart is about location, location, location – where it is cheapest to live with the best access to the things we need. There is no inherent superiority of one location over another. It all depends upon where you are starting from, where you want to go, and how much money you have.
What the Zerto CEO is arguing for has already been done: look at the HP LeftHand virtual storage appliance. It's fine to reinvent the wheel with a better wheel, but let's not pretend it is a new form of locomotion when it is, really, just another wheel. ®