Location, location: Storage controller functions get moving

Shove it in the server, or bung it in the array?

Opinion Server virtualisation luvvies are looking askance at expensive storage arrays and saying: "Pah! Run the storage controller functions as a system app in a virtual server and use JBODs. That's the way to use commodity hardware."

This is the approach of stealthy startup ZeRTO and also Xiotech. Move the array controller functions up the stack and have the virtualised server processor do the heavy lifting, while the storage array reads and writes data and looks after drive failures and other low-level storage chores.

We could note that IBM's SVC and the virtualising storage controllers offered by NetApp (V-Series) and HDS (VSP) are a halfway house, with the storage controller function running in a separate functional box between the servers and the actual storage arrays. The fact that IBM is coalescing the SVC into storage arrays (witness the Storwize V7000) doesn't affect this point.

ZeRTO's CEO presents this as a move from hardware to software. I think that can be disputed, as this is not an argument about which processor architecture runs the storage array functions – we are all X86 now – but about where the processor cycles are located: server or storage controller?

Which is the best location for running storage functions such as replication, backup, encryption and array clustering? Let's add thin provisioning and deduplication: the storage world is our oyster here.
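Deduplication, for one, is processor-hungry wherever it runs. A minimal sketch of the block-hashing idea behind it – fixed-size chunks and SHA-256 fingerprints are illustrative assumptions here; real products typically use variable-size chunking – looks like this:

```python
import hashlib

CHUNK = 4096  # assumed fixed chunk size; real arrays often chunk variably

def dedup(data: bytes):
    """Split data into chunks, store each unique chunk once, and
    return the store plus the ordered fingerprint list ("recipe")
    needed to reconstruct the original stream."""
    store = {}    # fingerprint -> chunk bytes (each unique chunk kept once)
    recipe = []   # ordered fingerprints for reassembly
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)   # only first occurrence is stored
        recipe.append(fp)
    return store, recipe

def restore(store, recipe):
    return b"".join(store[fp] for fp in recipe)

# Repeated content dedupes: four chunks referenced, only two stored.
data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
store, recipe = dedup(data)
print(len(recipe), "chunks referenced,", len(store), "stored")
assert restore(store, recipe) == data
```

Every chunk must be hashed and looked up, which is exactly the sort of cycle-burning work whose location – server VM or array controller – is the question at hand.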

We can say that communications between the server and the storage array worsen when the array controller functions move into the server. Instead of the relatively few high-level requests and data flows seen with a traditional storage array controller, we get lots of low-level requests and data flows, with the array controller function running as just another virtual machine and eating into the server's communication resources.
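The traffic argument can be made concrete with a toy model. All the numbers below – stripe size, write size, parity overhead – are illustrative assumptions, not measurements; the point is simply that one high-level request fans out into many low-level ones when the controller sits server-side:

```python
# Toy model of request amplification: illustrative numbers only.
# A traditional array controller accepts one high-level request
# (e.g. "write this 1 MB object, RAID-protected") over the SAN link
# and does the fan-out internally. A server-side controller VM must
# issue every low-level drive I/O itself, so each one crosses the
# server's storage link instead.

STRIPE_KB = 64         # assumed RAID stripe unit
DATA_KB = 1024         # a 1 MB logical write
PARITY_FACTOR = 1.25   # assumed RAID-5-style overhead (4 data + 1 parity)

def messages_array_controller():
    # One request leaves the server; the array handles the rest.
    return 1

def messages_server_vm():
    # The controller VM issues each stripe-unit write (data + parity)
    # across the server's link to the JBODs.
    return int(DATA_KB * PARITY_FACTOR / STRIPE_KB)

print("array controller:", messages_array_controller(), "message(s) on the server link")
print("server-side VM: ", messages_server_vm(), "message(s) on the server link")
```

Under these made-up numbers, one message becomes twenty; whatever the real figures, the fan-out now consumes server-side bandwidth and cycles.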

For a storage start-up company it is less expensive to develop your product if the controlling code runs as a virtual machine (VM) on a server. Then you do not have to source your own controller hardware and worry about how that links to storage enclosures in the array. You can buy enclosures that are just bunches of disks (JBODs) and have them controlled by a storage controller VM that is integrated with the hypervisor and plays second fiddle to its storage management functions.

This is less costly to develop than the alternative and can be presented as being in tune with virtualisation and with the notion that commodity hardware will kill proprietary hardware.

Yes, well, it already has. Apart from a few high-end holdouts with proprietary hardware around the edges, virtually all storage arrays run on Intel now. So let's not pretend this is a move from hardware to software; it isn't. And let's not pretend it's a move from proprietary hardware to commodity X86; that's already happened.

It is an argument about how you price storage controller value. The location of the X86 processor that runs the code is not really germane. Deliver the best storage controller software in the world and, whether it runs as a VM in the server, as controlling code in a front-end processor (SVC, VSP, V-Series), or as code in the array controller, you can still charge big bucks for it.

By having it run as a VM looking after JBODs, the overall storage array cost can be cut – and you can present your way as the low-cost way. But that will have nothing to do with any inherent advantage of running as a VM; you will simply have a lower-cost development model. However you choose to interface to VMware, and however compliant you choose to be with VMware storage management, you can do it with storage controller functions running as a VM, in a front-end box, or in the array. It simply doesn't matter.

We're like storage realtors: the argument at heart is about location, location, location – where is it cheapest to live with the best access to the things we need to do? There is no inherent superiority of one location over another. It all depends on where you are starting from, where you want to go, and how much money you have.

What the ZeRTO CEO is arguing for has already been done: look at the HP LeftHand virtual storage appliance. It's fine to reinvent the wheel with a better wheel, but let's not pretend it is a new form of locomotion when it is, really, just another wheel. ®
