We grill another storage startup that's meshing about with NVMe
Says virtual NVMe-based SAN beats shared NVMe drive array
Interview: Storage startup Excelero is supportive of NVMe drives and of NVMe over fabrics-style networking. It has a unique way of using NVMe drives to create a virtual SAN accessed by RDMA. An upcoming NASA Ames case study will describe how its NVMesh technology works in more detail.
We asked Excelero CTO Yaniv Romem some questions to find out more about how this startup views NVMe and NVMe over fabrics technology.
El Reg: Will simply moving from SAS/SATA SSDs to NVMe drives bottleneck existing array controllers?
Yaniv Romem: Most existing controllers are already bottlenecked on CPU by attempting to provide a wide variety of storage services indiscriminately for all data. Moving to NVMe drives will provide only minor benefits, as it does not tackle the real bottleneck.
El Reg: Must we wait for next-generation controllers with much faster processing?
Yaniv Romem: Faster processing within the current architecture will achieve little. Re-architecting the way storage services are delivered is critical, and discriminating between data types and their associated compute resources is key. Service implementation should be spread from a central, highly available controller to a more robust and efficient distributed layout involving hosts, target servers and the drives themselves, enabling efficient deployment on COTS hardware. Excelero NVMesh is leading this paradigm shift.
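The shift Romem describes, moving the data path out of a central controller and onto the hosts, can be sketched roughly as follows. The class names, round-robin striping scheme and in-memory targets here are illustrative assumptions for the sketch, not Excelero's actual implementation or API.

```python
# Host-side (client) volume logic: the host, not a central controller,
# computes which target drive holds each logical block.
# Names (Target, StripedVolume) are hypothetical, not Excelero's API.

class Target:
    """Stands in for one remote NVMe drive reached over RDMA."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks[lba]


class StripedVolume:
    """Host-side logical volume that stripes blocks across targets,
    so no controller CPU sits in the data path."""
    def __init__(self, targets):
        self.targets = targets

    def _locate(self, lba):
        # Round-robin striping: block placement is computed on the host.
        return self.targets[lba % len(self.targets)], lba // len(self.targets)

    def write(self, lba, data):
        target, local_lba = self._locate(lba)
        target.write(local_lba, data)

    def read(self, lba):
        target, local_lba = self._locate(lba)
        return target.read(local_lba)


vol = StripedVolume([Target("nvme0"), Target("nvme1"), Target("nvme2")])
for lba in range(6):
    vol.write(lba, f"block-{lba}".encode())
assert vol.read(4) == b"block-4"
# Blocks 0..5 land round-robin: two blocks on each of the three targets.
assert all(len(t.blocks) == 2 for t in vol.targets)
```

The point of the sketch is that the placement decision (`_locate`) runs on the host, so adding hosts adds data-path compute instead of funnelling it through one controller.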
Excelero NVMesh diagram
El Reg: Will we need affordable dual-port NVMe drives so array controllers can provide HA?
Yaniv Romem: Dual-port NVMe drives are expensive and inherently inefficient. Using them in dual-motherboard appliances is reminiscent of Tandem-style hardware redundancy for high availability. Over the past decade, service availability has been achieved via software evolution without reliance on hardware. For storage, this equates to replacing complex duality-based hardware with appropriate software on COTS hardware.
El Reg: What does affordable mean?
Yaniv Romem: For dual-port NVMe drives to be affordable, the per-GB cost of the system in which they are embedded would have to be practically the same as that of a COTS system with standard drives and motherboards without complex PCI bridging. Otherwise, the software can do it.
El Reg: Are customers ready to retrofit NVMeF array-accessing servers with new HBAs and, for RoCE, to deploy DCB switches and deal with end-to-end congestion management?
Yaniv Romem: With the rollout of Purley-based hardware in 2017, customers will have all the required networking gear in place by default. The deployment benefits of a single fabric will push software vendors to make NVMeF-accessible storage on that common fabric as robust and manageable as Fibre Channel-based solutions, while providing significant performance gains. Easier-to-deploy congestion-management techniques now being rolled out (e.g. ECN-based; see Resilient RoCE) will ensure a painless transition of storage onto the common network.
El Reg: Do they need routability with RoCE?
Yaniv Romem: Yes. NVMeF-accessed storage will be utilised across subnets and will therefore require routability from the underlying network protocols.
El Reg: Could we cache inside the existing array controllers to augment existing RAM buffers and so drive up array performance?
Yaniv Romem: Host-side caching is typically more effective than controller caching, especially in a converged environment. As the price ratio between non-volatile media and DRAM continues to drop and data sets grow, caching becomes less and less effective on the controller.
El Reg: With flash DIMMs say? Or XPoint DIMMs in the future?
Yaniv Romem: 3D XPoint will provide a high speed tier for persistent metadata and other data that is latency-sensitive and frequently updated, both in DIMM and NVMe form factors. Caching is more effective on the host side.
El Reg: Does having an NVMe over fabrics connection to an array which is not using NVMe drives make sense?
Yaniv Romem: In the short term, ubiquitous NVMeF access to all storage elements makes sense, especially for software solutions that can use hardware already deployed.
In the longer term, as NVMe drives reach price parity with non-NVMe flash and eventually HDDs, this becomes moot.
El Reg: When will NVMeF arrays filled with NVMe drives and offering enterprise data services be ready? What is necessary for them to be ready?
Yaniv Romem: For certain markets looking for a turnkey all-in-one solution, they should be available in the near term. For data centres where efficiency is paramount, COTS hardware will be deployed with storage hardware disaggregated, converged, or in a hybrid mode. These data centres will continue to shy away from arrays, instead leveraging software layers to provide the required functionality.
Today, Excelero provides enterprise logical volumes, multi-pathing and data protection in what looks like an array (a 2U 24-drive system) or in a distributed converged fashion. Scaling has been shown to be practically linear in these architectures (e.g. > 99.5 per cent), making 100 per cent distributed converged systems a natural choice.
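The "> 99.5 per cent" figure corresponds to a simple scaling-efficiency calculation: measured aggregate throughput divided by ideal linear scaling. The node counts and IOPS numbers below are made-up illustrations; only the formula reflects the claim.

```python
def scaling_efficiency(throughput_1, throughput_n, n):
    """Efficiency = measured aggregate throughput / ideal linear scaling,
    where ideal = n nodes each delivering the single-node throughput."""
    return throughput_n / (n * throughput_1)

# Hypothetical numbers: one node delivers 1.00M IOPS;
# sixteen nodes together deliver 15.94M IOPS.
eff = scaling_efficiency(1.00, 15.94, 16)
assert eff > 0.995  # i.e. above the 99.5 per cent figure quoted above
```

Perfectly linear scaling would give an efficiency of exactly 1.0; anything above 0.995 means adding nodes costs less than half a per cent of their theoretical contribution.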
We believe our customers and OEM partners can make the best choices of hardware, tailored to their needs. Our decision to offer a software-only solution means we refrain from competition with our OEM partners and they can differentiate with hardware and with custom integrations that take advantage of unique features in their offerings.
Excelero offers the expected shared NVMe storage array (the 2U 24-drive system mentioned above) and the unexpected virtual NVMe drive SAN used in NVMesh. It says scaling is “practically linear” in both cases, making “100 per cent distributed converged systems a natural choice.”
As far as we know, that does differentiate Excelero from other shared NVMe storage providers, none of whom to our knowledge offer a virtual SAN approach. ®