Inside EMC's software-defined ViPR storage tech
ViPRware or the real deal?
EMC spent much of this year's EMC World talking up ViPR, its new system for managing, automating, and controlling storage infrastructure.
Tucci and the gang took flak for the premature nature of the announcement – ViPR won't launch until late 2013, and it will be late 2014 before it gains the features that truly differentiate it from just another management console.
The company also proved unwilling to disclose further information on the subject, but after probing various EMC employees, here's what we know.
There is no support for vaunted "commodity hardware" at launch, nor for third-party vendors other than NetApp. A slide published by EMC marketing veep Chuck Hollis shows the ViPR controller using "EMC-provided plugins" to interact with IBM, Hitachi, and HP systems as well, but when we asked EMC for further information, a spokeswoman said "the slide is an illustration of support that is possible with ViPR, architecturally speaking. Which platforms will be rolled out and when will be a result of customer demand."
The technology for commodity hardware support is apparently not yet finished. "When you have commodity hardware you still need a persistent layer – that's the world we're working on right now," EMC engineering veep Surya Varanasi told The Register. Much of ViPR's engineering team came from the group that helped build Azure's storage backend – a component that has been widely praised and sometimes ranks better than Amazon on responsiveness.
Right now it supports VMAX, VNX (file and block), Isilon NFS and CIFS, and third-party arrays from NetApp. When it goes into general availability, it will support VPLEX, RecoverPoint, and objects via Isilon.
ViPR relies on the capabilities of the arrays under management to expose these features to the admin, so although it provides a converged management layer, the range of options on offer – data protection, access policies, et cetera – will depend entirely on what services the array has and can expose to ViPR.
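That pass-through model can be sketched as follows. This is a hypothetical toy, not EMC code – the class and capability names are invented – but it shows the key property: the management layer adds no features of its own, and only surfaces what each array reports it can do.

```python
# Hypothetical sketch of capability-driven converged management.
# Names are invented for illustration; this is not the ViPR API.

class Array:
    """A managed array advertising the services it natively supports."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

class ConvergedController:
    """Exposes only what each registered array can actually do."""
    def __init__(self):
        self.arrays = {}

    def register(self, array):
        self.arrays[array.name] = array

    def available_services(self, array_name):
        # The controller adds nothing of its own: the options on
        # offer depend entirely on what the array exposes.
        return sorted(self.arrays[array_name].capabilities)

    def request(self, array_name, service):
        array = self.arrays[array_name]
        if service not in array.capabilities:
            raise NotImplementedError(
                f"{array.name} does not expose '{service}'")
        return f"{service} configured on {array.name}"

controller = ConvergedController()
controller.register(Array("vmax-01", {"snapshots", "replication"}))
controller.register(Array("nas-02", {"snapshots"}))

print(controller.available_services("nas-02"))      # ['snapshots']
print(controller.request("vmax-01", "replication"))
```

Asking the controller for a service the array lacks fails cleanly – which is why, in this model, the feature set an admin sees varies array by array.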
ViPR has been designed to work with cloud environments such as Amazon-style clouds (via its support for the S3 API) and those built around OpenStack. However, it is unclear whether it integrates with EMC spin-off Pivotal and can form a clever infrastructure underlay for Pivotal's platform-as-a-service, Cloud Foundry.
"ViPR and Pivotal are a natural fit, and we’re working together to explore integration possibilities. At this time, we don't have more detail to share, but are happy to connect with you when we do," EMC corporate communications chap David Oro told us.
As with many "new" things in this industry, the ViPR approach has many forebears, including EMC's own "Project Bourne" scheme. It also has similarities to HP's own 3PAR systems, which essentially allow management of heterogeneous HP storage albeit within a single vendor environment. It also appears to take influence from iWave, which was a storage automation startup that EMC acquired in January.
ViPR seeks to separate the control plane from the data plane, and to expose the services brought about by this separation to users through a proprietary management layer. "This is a heavy IP environment," EMC chief technology officer John Roese told us.
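The general idea behind that split can be sketched in a few lines. Again, this is an invented toy model, not ViPR's implementation: the control plane provisions volumes and tracks metadata, while reads and writes flow directly between the host and the array backend, bypassing the controller entirely.

```python
# Toy model of control-plane / data-plane separation.
# Invented names; illustrates the architectural idea only.

class ArrayBackend:
    """Data plane: the array itself services I/O directly."""
    def __init__(self):
        self.volumes = {}

    def write(self, volume, data):
        self.volumes[volume].append(data)

    def read(self, volume):
        return list(self.volumes[volume])

class ControlPlane:
    """Control plane: provisions and tracks volumes, never touches I/O."""
    def __init__(self):
        self.catalog = {}  # volume name -> backend

    def provision(self, name, backend):
        backend.volumes[name] = []
        self.catalog[name] = backend
        return backend     # host talks to the backend from here on

backend = ArrayBackend()
cp = ControlPlane()
vol = cp.provision("vol1", backend)
vol.write("vol1", b"block-0")   # I/O bypasses the controller
print(vol.read("vol1"))
```

The point of the separation is that the controller can sit in the management path for every array without becoming a bottleneck in the I/O path.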
"The important piece about [ViPR] is it fundamentally needs to be thought of as to some extent the center of the hourglass," Roese said. "Are we saying it's entirely open inside the bowels of ViPR? No."
When you compare the approach EMC has taken to commodity infrastructure via ViPR with that taken by the consumer internet startups turned infrastructure operators – Amazon, Google, Facebook, various members of the OpenStack coalition – it seems that ViPR's converged management features come at the expense of true openness. EMC needs to make money, after all, and with ViPR it hopes to move up from the cheapening storage gear and into more profitable management climes. But as yet, many features compelling enough to make organizations contemplate the sweet embrace of ViPR are lacking. ®