Killing the storage array controller
You don't need low-stack intelligence
Comment Stealthy startup ZeRTO's CEO, Ziv Kedem, may have opened his technology kimono in a comment on a story about where storage controller functions belong.
The lengthy comment cited Cisco's Nexus 1000V virtual switch. It said: "When you are hypervisor resident, you can support things like VM vMotion and storage vMotion, without requiring any reconfiguration or complex management tasks."
What does this mean? The 1000V operates inside the VMware ESX hypervisor, rather than as a standard virtual machine (VM). Bearing that in mind, let's take a fresh look at ZeRTO's intentions. Its CEO blogged about how hardware boxes are turning into software.
He asked: "When you run on top of a hypervisor and your storage is virtualised anyway (e.g. VMFS), does it really make sense to run all these services and operations inside the storage? Is there any benefit for replication, backup, encryption, clustering, etc. to stay outside the hypervisor, ‘looking up’ to the virtualisation and application layers? I think not."
He then cited the Nexus virtual switch. Put these three things together and we have ZeRTO creating software technology to run "replication, backup, encryption, clustering, etc." inside VMware's ESX hypervisor.
Why is this a good idea? Kedem writes: "Check out VMware's own SIOC (Storage I/O Control) functionality. It guarantees responsiveness for different VMs, even if they are running on the same disk.
"This functionality cannot reside in the storage array (or storage virtualisation appliance, even if it is a VM) since the array cannot differentiate different I/Os from different VMs. This is the reason VMware provided SIOC to solve this problem for their customers."
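The point about SIOC is that the hypervisor knows which VM issued each I/O, so it can enforce per-VM priorities, while the array downstream sees only anonymous block requests. Here is a toy sketch of that idea (not VMware's implementation; the shares-weighted scheduler, VM names and numbers are illustrative assumptions):

```python
# Toy shares-weighted I/O scheduler: each I/O is tagged with the VM
# that issued it, something only the hypervisor can do. Higher shares
# make a VM's "virtual time" advance more slowly, so its I/Os are
# dispatched ahead of lower-share VMs. Not VMware's actual algorithm.
import heapq

class SharesScheduler:
    def __init__(self, shares):
        self.shares = shares                       # vm name -> share weight
        self.vtime = {vm: 0.0 for vm in shares}    # per-VM virtual time
        self.queue = []                            # (vtime, seq, vm, io)
        self.seq = 0                               # tie-breaker

    def submit(self, vm, io):
        # Each I/O advances the VM's virtual time by 1/shares.
        self.vtime[vm] += 1.0 / self.shares[vm]
        heapq.heappush(self.queue, (self.vtime[vm], self.seq, vm, io))
        self.seq += 1

    def dispatch(self):
        # Serve the I/O with the lowest virtual finish time first.
        _, _, vm, io = heapq.heappop(self.queue)
        return vm, io

# A high-priority VM (2000 shares) and a low-priority one (500 shares).
sched = SharesScheduler({"web-vm": 2000, "batch-vm": 500})
for i in range(4):
    sched.submit("web-vm", f"w{i}")
    sched.submit("batch-vm", f"b{i}")

# web-vm's I/Os jump ahead of batch-vm's despite interleaved arrival.
order = [sched.dispatch()[0] for _ in range(8)]
```

A storage array cannot build this table because, by the time requests reach it, the VM identity has been lost; that is the asymmetry Kedem is pointing at.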
At first glance this idea would seem to chime with Xiotech's belief that storage arrays should concentrate on fast and reliable basic storage operations with upper-level storage controller functions moved up the stack to a server. Up to a point that is true, but the pure application of the ZeRTO idea is that the storage array is a JBOD (Just a Bunch Of Disks) with no inherent ability to cope with disk failures and recover from them.
Xiotech prefers the idea of a "storage brick" aggregating disks together and providing very much faster and very much more reliable access to and storage of data than a JBOD can provide. That needs "low-stack" intelligence as well as upper-stack intelligence. It's a where-do-you-draw-the-line argument.
Of course, with all storage controller functions in the hypervisor, the commodity-based JBOD storage will be cheap, with Xiotech likely to point out that you get what you pay for. Hardware is a commodity but data most assuredly is not, and the hardware needs organising in a way that safeguards the data without sucking up server core processing cycles.
It seems likely that ZeRTO will push the idea of having your expensive storage array controller functionality cake while eating your cheap JBODs. No more need to shell out for EMC, HDS, HP, IBM or NetApp replication, mirroring, encryption, clustering, snapshots, deduplication, etc. that come with their arrays. These are software functions carried out by a hypervisor plug-in.
The net cost of the plug-in and JBODs will be far less than the cost of the storage arrays that ZeRTO, in our reading of its intentions, wants to replace. ®
High end brain death
This concept has been around since the start of storage and has had a number of incarnations, such as the former Sun Microsystems trying to convince people that all they need is ZFS for their storage.
So a couple of points come to mind right away. First, yes, CPU cycles nowadays are way cheaper, but they still aren't free, so why would you want to tie up a core or more of your high-end server to do what is effectively grunt work?
But secondly, and most importantly, what are you going to do if your server goes down, or goes insane and decides to fill in all the zeroes with crayon? What an external storage system brings to the party is the ability to quickly and easily connect to multiple servers, either directly or as part of a Storage Area Network. In a SAN you can take a volume and move its ownership from one server to another with a few clicks of your mouse, a line or two of commands, or as part of an automated script.
This is the basic reason why PCI-based RAID controllers, as great and fast as they can be, are not recommended for every situation. You simply get higher availability when you use an external storage solution.
No Information, but intriguing
This article contains almost NO information and takes three pages to state a paragraph's worth of output. But OK, it's STILL interesting.
You'll pry my RAID5 out of my cold, dead hands.
If enterprises only wanted JBODs, EMC and company wouldn't be in that business at all. JBOD means that if and when one of your HDDs goes down, everything goes to the shitter. All the RAID5 stuff means that you only need to swap out the damaged disk for a new one, and the whole thing will reconstruct itself.
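The rebuild the commenter describes rests on XOR parity: the parity block is the XOR of the data blocks in a stripe, so any single missing block is the XOR of the survivors. A minimal sketch, with a simplified single-stripe layout (real RAID5 rotates parity across disks and works in much larger blocks):

```python
# Minimal illustration of RAID5's XOR parity and why one failed disk
# can be rebuilt from the survivors. Layout is simplified: one stripe,
# three data disks plus one parity disk.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks on three disks; parity is their XOR.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Disk 1 fails; its block is the XOR of the remaining data and parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]
```

The same arithmetic is why a second failure during a rebuild is fatal: with two blocks missing, the XOR no longer pins down either one.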
These claims from a VM vendor look like someone proposing that drivers do away with seat belts. Bad advice!