VMware 'to work with just five storage companies'
Is EMC's stepchild golden, or red-headed?
VMware is planning logical storage containers that do away with logical unit numbers (LUNs) and NFS mount points - and could stifle storage developments outside a group of five suppliers.
VMware's plans were disclosed at a VMworld 2011 presentation (VSP3205) and described by Wikibon analyst David Floyer. They have also been discussed by EMC's Scott Lowe in a blog about the VSP3205 session.
Lowe is chief technical officer for EMC's vSpecialist team. His overall view of the presentation is that it explains that VMware wants VM and storage admins to talk the same language and do away with file and block access differences in favour of a unified VMware storage interface.
Traditionally, applications request disk or tape storage resources based on logical unit numbers (LUNs), which identify storage devices accessed via SCSI, iSCSI, Fibre Channel or similar protocols. A LUN can refer to a logical disk or volume on a SAN, which the SAN drive array controllers map to real physical disks.
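To make the indirection concrete, here is a toy sketch - not any vendor's implementation, all names invented for illustration - of how a host addresses a (LUN, logical block) pair that an array controller then resolves to a physical disk and offset:

```python
# Toy model of LUN indirection: the host addresses (LUN id, logical block
# address); the array controller resolves that to a physical disk and a
# block offset. All names here are illustrative, not any vendor's API.

class ArrayController:
    def __init__(self, disks_per_lun):
        # lun id -> list of backing physical disk ids (simple striping)
        self.luns = disks_per_lun

    def resolve(self, lun_id, lba, stripe_blocks=128):
        """Map a logical block address to (physical disk, block offset)."""
        disks = self.luns[lun_id]
        stripe = lba // stripe_blocks       # which stripe the block sits in
        disk = disks[stripe % len(disks)]   # stripes round-robin over disks
        offset = (stripe // len(disks)) * stripe_blocks + lba % stripe_blocks
        return disk, offset

ctrl = ArrayController({0: ["disk-a", "disk-b"]})
print(ctrl.resolve(0, 200))  # -> ('disk-b', 72)
```

The app sees only LUN 0; which spindle actually holds a given block is the controller's business. It is exactly this layer that VMware's proposal would hide behind a storage container.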
Virtualised apps also use networked file access services via NFS mount points, and the storage container idea gets rid of these as well as LUNs. However, Floyer concentrates on the LUN aspect of things.
According to Floyer, what VMware is proposing is that an app, running in a virtual machine, would address a logical storage container, or VM volume, containing the app's data, metadata about it, and any policies referring to that data. The storage container would have logical channels (an I/O demultiplexer) connected to the host server's external storage ports and thence to external arrays.
An app would talk to its storage container and not to a LUN. There would be API access to the storage container constructs, so that external VM-aware arrays storing VM volumes could spread a storage container across one or more storage drives, storage tiers, storage arrays and cache infrastructures. The aim is to minimise storage I/O latencies, provide protection against device failures, maximise bandwidth delivered to apps, and provide the best storage cost-effectiveness.
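What policy-driven placement of a VM volume might look like can be sketched as follows - purely hypothetical names and fields, since VMware has not published these APIs and the partners have not committed to them:

```python
# Hypothetical sketch of a VM volume that carries its own placement policy,
# which a VM-aware array consults instead of exposing a LUN. Every name and
# field here is invented for illustration; the real vStorage APIs are
# unpublished preview technology.

from dataclasses import dataclass, field

@dataclass
class VMVolume:
    vm_name: str
    policy: dict = field(default_factory=lambda: {
        "tier": "capacity", "replicas": 1})

class VMAwareArray:
    def __init__(self, tiers):
        self.tiers = tiers        # tier name -> list of device names
        self.placements = {}      # vm name -> devices holding its volume

    def place(self, vol):
        """Spread the volume's replicas across devices in its chosen tier."""
        devices = self.tiers[vol.policy["tier"]]
        chosen = [devices[i % len(devices)]
                  for i in range(vol.policy["replicas"])]
        self.placements[vol.vm_name] = chosen
        return chosen

array = VMAwareArray({"performance": ["ssd-1", "ssd-2"],
                      "capacity": ["hdd-1", "hdd-2", "hdd-3"]})
vol = VMVolume("web-01", policy={"tier": "performance", "replicas": 2})
print(array.place(vol))  # -> ['ssd-1', 'ssd-2']
```

The point of the sketch is the inversion of responsibility: the policy travels with the VM volume, and the array decides placement - rather than an admin pre-carving LUNs and mapping VMs onto them.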
In effect, the storage container is an external storage controller which abstracts physical storage controllers and connects with them via an extended set of vStorage APIs. Suppliers of such controllers will have to enable them to work with Microsoft and other hypervisors, as well as continuing normal file and block access for non-virtualised applications.
Who is going to get API access to the storage containers? Floyer says VMware is going to work with EMC (which owns 80 per cent of VMware), Dell, Hitachi, IBM and NetApp, and provide the APIs "to help these traditional storage vendors add value, for example by optimising the placement of storage on the disks." There's an obvious missing major storage supplier from that list – HP – plus a whole host of less established players and start-ups.
EMC blogger Scott Lowe leaves Hitachi Data Systems out of the supplier group he mentions, but includes HP.
Floyer uses the term "cartel" for the group of EMC and other suppliers, saying: "The inclusion of the other members of the cartel (and specific exclusion of all others) is justified by reducing the engineering overhead of considering other ideas."
He wants VMware customers, and its shareholders, to tell the company they expect it to help them use the best storage technologies and not baked-in legacy technologies that will benefit the five partner companies more than their customers.
A VMware spokesperson did not agree with the cartel idea, saying: "In terms of the partners we’ve been working with on these APIs, they are Dell, EMC, HP, IBM and NetApp ... – we generally work with this group of storage vendors as “design partners” for these types of initiatives as they represent a diverse set of customers, deployment situations and other technical and logistical factors that greatly help with the vSphere Storage API design process.
"Note that we’re still in early days on this and none of the partners above have yet committed to support the APIs – and while it is our intent to make the APIs open, currently that is not the case given that what was demo’d during this VMworld session is still preview technology." ®
Storage commentator Jon Toigo criticised VMware for arbitrarily changing SCSI commands, and for this:
Their engineers proudly proclaimed at VMworld that they are planning essentially to move array controller functionality, including RAID and other functions that need to be done close to disk, into their software stack. Customers should just deploy JBODs and let the hypervisor do the rest. I wonder what EMC thinks of that bit of wisdom from its golden stepchild.
I’d like to address the statement about HDS being left out of the next-generation API supplier group. HDS is an Elite level partner in the VMware Technology Alliance Partner program and as such, we work very closely with VMware on future technology development. In fact, HDS was the first company to fully certify virtualized storage with VMware VAAI earlier this year.
HDS did not participate in the demos shown during the session VSP3205, titled “Tech Preview: vStorage APIs for VM and Application Granular Data Management” during VMworld because HDS does not publicly demonstrate technology based upon pre-GA code of ours or our partners. The demos shown during this session were prototypes based upon VMware code that will not be released for at least a year or possibly until the next version of VMware vSphere 6. VMware made a caveat that the vendors who participated in the demonstration have not even committed to supporting the APIs.
Our decision not to participate in the demo during the session doesn’t mean we won’t support future VMware APIs; in fact, the opposite is true. Hitachi Data Systems and VMware are engaged at all levels of interoperability testing and certifications, support and engineering to assure timely and broad qualification and certifications for our mutual customers. HDS will continue to release VMware integrated solutions based upon mature VMware technology in line with VMware general availability releases of technology.
Disclosure: VP of Alliances, Hitachi Data Systems
EMC's CLARiiON line was Data General's storage arm before EMC bought it. This is looking like the old glass-house days, which is why we took computing out of the glass house with the PC. Things are coming full circle.
Disclosure - EMCer here.
Chris - the VM Volume "advanced prototype" (shown in VSP3205 at VMworld) was a technology preview of this idea, and yeah, it's an important idea, and a disruptive idea.
Anyone who has managed a moderate to large virtualisation deployment knows that the "datastore" construct (on block or NAS storage) is not ideal, as the properties of that datastore tend to be shared by ALL the things in it. It would be better if the level of granularity was a VM, but WITHOUT the management scale problem. That's what was shown.
Today, the storage industry (and of course, I personally think that EMC does this more than anyone, and can prove it) is doing all sorts of things to be more integrated (vCenter plugins, making the arrays "aware" of VM objects through bottom-up vCenter API integration, VASA, VAAI, etc) - but unless something changes, we're stuck with this core problem: VMs are the target object, but LUNs and filesystems kind of "get in the way".
I'm sure that VMware will run it like all the storage programs they have run. The APIs are open, and available to all - but of course, the early work tends to focus on the technology partners supporting the largest number of customers.
More customers use EMC storage with VMware than any other type, and EMC invests more resources and R&D (both by a long shot) - so it's no surprise that the demonstration in the session featured EMC storage so prominently. Pulling off something like that is NOT easy, and a lot of people put a lot of work into it.
For what it's worth - VMware is simply CHANGING what is important to customers and valuable from storage. Certain data services are moving up (policy-driven placement of VMs), certain ones are pushing down (offload of core data movement), and "intelligent pool" models (auto-tiering, dedupe) become more valuable as they map to simpler policy-driven storage use models.
While this was just a technology preview, if it comes to pass, vendors who are able to deliver strong VM Volume implementations, with VM-level policy and automation, will become even more valuable.
Just my 2 cents.