
NetApp, storage class memory and hyperconvergence

I'm Darrell, this is my brother Darrell and this is my other brother Darrell

Architecting IT blog There have been rumours that NetApp plans to move into the hyper-convergence market, followed up by The Reg's Chris Mellor here.

If the rumours are indeed true, the intention shouldn’t be a surprise in some respects. As the storage market fragments, the incumbents have to adapt to the needs of customers, and hyper-converged systems are, by any measure, popular.

Storage pure play

It’s debatable whether any company can continue to be a storage “pure play” in the current storage market. I discussed this subject in a short video with Calvin Zito and Mark Peters (ESG) earlier this year. Historically, storage was based around one or two major products, whereas today there are many (think primary storage, secondary storage, archive, backup/dedupe, SDS, enterprise, midrange, all-flash, hybrid, object, etc.), and that means having a portfolio of products to fit all requirements. More interesting, however, is the move away from centralised storage to hyper-converged and converged solutions. Here we see a range of players that we’ve discussed many times, like Nutanix, SimpliVity, VMware (with Virtual SAN), HPE, Atlantis Computing, HyperGrid and many more.

The trend away from traditional arrays is clearly visible in the revenues of vendors that still derive the majority of their business from this segment of the market. You can see the figures in a recent blog post showing how the storage array business is declining while revenues from storage in general are rising slightly.

Hyper-converged revenues

The potential of hyper-convergence can be seen in the revenues of the freshly IPO'd Nutanix. Check out Chris Mellor’s prior post, which shows rapidly rising revenue and flattening losses, here. VMware has been pushing hard with Virtual SAN and, via EMC, with VxRail; SimpliVity featured highly in Forrester’s recent hyper-convergence report (link, registration required). Gartner predicts hyper-convergence will be a $2bn market this year and $5bn by 2019, so there’s obviously revenue to go for, some of which has come from storage.

NetApp and SCM

Getting back to Chris’s article, there’s an indication that NetApp (which would be late to the hyper-converged market by some margin) could be looking to leverage Storage Class Memory (SCM) as a way to leapfrog the competition. SCM is a class of storage products that put persistent storage onto the memory bus of the server (see a primer on Diablo Technologies here). SCM devices, like Diablo’s Memory Channel Storage, provide the capability to store data persistently across reboots on memory cards that fit into the DIMM sockets of the server, so-called NVDIMMs or non-volatile DIMMs. Other SCM products could include battery-backed memory and 3D XPoint, when we see it.
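To make that concrete, here’s a minimal C sketch of what persistence on the memory bus looks like from software, assuming a Linux box that exposes an NVDIMM as the hypothetical device /dev/pmem0; production code would use something like the libpmem library and proper cache flushing rather than a bare msync.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Hypothetical pmem device path; real systems vary. */
    int fd = open("/dev/pmem0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 4096;
    /* Map the NVDIMM straight into the address space: loads and stores
       now hit persistent media over the memory bus, no block I/O stack. */
    char *pmem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(pmem, "survives a reboot");   /* an ordinary CPU store */
    msync(pmem, len, MS_SYNC);           /* force the update out to media */

    munmap(pmem, len);
    close(fd);
    return 0;
}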

The benefit of being on the memory (rather than I/O) bus of the server is I/O performance and, in particular, massively reduced latency. Data is also byte-addressable, rather than being stored and retrieved in blocks as it would be for an I/O bus device (although individual products may not directly provide byte-addressability, instead emulating it). Low latency provides the capability to run both virtual machine and container instances at a significantly higher rate than typically achieved today, and was the premise of PernixData’s FVP product (although that used flash and volatile memory).
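The block-versus-byte difference is easy to see in code. This hedged sketch updates a single flag byte both ways: the block-device path is a full sector read-modify-write, while the memory-mapped SCM path is one CPU store (function names are illustrative, and error handling is omitted for brevity).

#define _XOPEN_SOURCE 700
#include <unistd.h>

/* Block device: changing one byte costs a whole-sector round trip. */
void set_flag_block(int fd, off_t off) {
    char sector[512];
    (void)pread(fd, sector, sizeof sector, off & ~511L);   /* read the sector   */
    sector[off & 511] = 1;                                 /* flip one byte     */
    (void)pwrite(fd, sector, sizeof sector, off & ~511L);  /* write it all back */
}

/* Memory-mapped SCM: the same change is a single store on the memory bus. */
void set_flag_pmem(volatile char *pmem, off_t off) {
    pmem[off] = 1;
}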

SCM means large volumes of I/O can be served from memory, and potentially stored in memory, with less need to create multiple copies to protect against controller or server failure. Exactly how this is done remains to be seen, but there are obvious benefits in not having to continually commit to relatively slow external disk.
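As a rough illustration of why the protection copies could shrink, here’s a hedged sketch of two write-acknowledgement paths; the helper names (mirror_to_partner, persist) are hypothetical stand-ins for a controller interconnect and a CPU cache flush, not any vendor’s API.

#include <stdbool.h>
#include <string.h>

static char partner_cache[4096];   /* stands in for the partner controller */

/* Interconnect hop in real hardware; simulated here with a memcpy. */
static bool mirror_to_partner(const void *buf, size_t len) {
    memcpy(partner_cache, buf, len);
    return true;
}

/* Real code would flush CPU caches here so the media holds the data. */
static void persist(void *p, size_t len) {
    (void)p; (void)len;
}

/* Volatile DRAM cache: a synchronous second copy is needed before the
   host write can safely be acknowledged. */
bool ack_write_volatile(char *cache, const void *buf, size_t len) {
    memcpy(cache, buf, len);
    return mirror_to_partner(buf, len);
}

/* SCM: a flushed local store already survives power loss, so the
   synchronous mirror hop could be relaxed or skipped. */
bool ack_write_scm(char *scm, const void *buf, size_t len) {
    memcpy(scm, buf, len);
    persist(scm, len);
    return true;
}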

What could prove interesting is how NetApp chooses to integrate the idea of hyper-convergence into its existing product line. The acquisition of SolidFire could well be the catalyst for this move. One question not yet answered, though, is which hypervisor might be used. Would NetApp choose to build its own hypervisor, using KVM, and integrate that into (or alongside) the SolidFire ElementOS? Will the solution be a mix of storage and compute nodes in a loosely coupled, hyper-converged solution? How would this business affect NetApp’s relationship with Cisco and FlexPod?

The Architect’s view

NetApp continues to transform its business as it moves away from a single platform to deliver services to a wider audience. A hyper-converged solution was always on the drawing board; the question was when, not if. At the most recent NetApp/SolidFire analysts’ day in June this year, George Kurian closed the session by stressing that NetApp was a data management company, further re-emphasising a focus on the Data Fabric. Hyper-convergence fits this strategy, even if it is a little more “left field” than the traditional storage platforms of old. However, if NetApp wants to compete, this is a market it needs to be in.
