I'll admit, NetApp's NVMe fabric-accessed array sure has SAS, but it could be zippier

Jet plane, meet bike

By Chris Mellor

Analysis NetApp's E570 array supports NVMe over fabrics yet it does not use NVMe drives, potentially slowing data access.

The E-Series all-flash E570 was launched in September, boasting sub-100µs latency through its support of NVMe-over-Fabrics access.

It uses RDMA over InfiniBand to front its 24 flash drives, which lets accessing host servers bypass latency-adding storage software stack code and have data written to or read from their memory directly.

Part of the logic for using NVMe in this way is that accessing flash drives (SSDs) with SATA and SAS, which are basically disk drive access protocols, is sub-optimal: they are slow and add latency to SSD data access.

But putting NVMe-accessed drives in an array exposed the latency contributed by SAN network protocols such as iSCSI and Fibre Channel. So the NVMe over Fabrics protocol was devised, supporting up to 65,535 queues with up to 65,536 commands per queue. It provides remote direct memory access (RDMA) and bypasses the network protocol stacks at either end of the link. In the E570's case, Mellanox ConnectX InfiniBand adapters are used.
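For a sense of scale, the parallelism gap between NVMe and the disk-era protocols can be sketched with some quick arithmetic. The NVMe queue limits are the specification's maximums; the SATA figure is the commonly cited Native Command Queuing depth, not anything NetApp has stated:

```python
# Outstanding-command capacity: NVMe vs a disk-era protocol.
# NVMe spec maximums: 65,535 I/O queues, each up to 65,536 commands deep.
nvme_queues, nvme_depth = 65_535, 65_536
nvme_outstanding = nvme_queues * nvme_depth

# SATA NCQ: a single queue, 32 commands deep (commonly cited figure).
sata_outstanding = 1 * 32

print(f"NVMe : {nvme_outstanding:,} commands in flight")
print(f"SATA : {sata_outstanding:,} commands in flight")
print(f"Ratio: {nvme_outstanding // sata_outstanding:,}x")
```

The raw command counts are theoretical ceilings rather than anything an array will sustain, but they show why a protocol designed for spinning disks leaves flash parallelism on the table.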

Existing NVMe-over-Fabrics storage access latencies range from 30µs (Mangstor NX array write) and 50µs (E8 write) to 100-110µs (E8/Mangstor array read). The E570's 100µs latency is pretty damn good considering that it uses SAS SSDs, with a SCSI access stack, behind an NVMe-to-SAS bridge.
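Lining up the figures quoted above gives a rough ceiling on what the SCSI stack and bridge might be costing. The comparison is loose, since the quoted numbers mix reads and writes, but it frames the opportunity:

```python
# Latency figures as cited in this piece (microseconds).
latencies_us = {
    "Mangstor NX write": 30,
    "E8 write": 50,
    "E8/Mangstor read": 110,
    "NetApp E570 (SAS SSDs, NVMe-to-SAS bridge)": 100,
}

# Gap between the E570 and the fastest cited end-to-end NVMe write;
# some of this is plausibly the SCSI stack and bridge overhead.
gap = latencies_us["NetApp E570 (SAS SSDs, NVMe-to-SAS bridge)"] \
      - latencies_us["E8 write"]
print(f"E570 gap vs best cited NVMe-oF write: up to ~{gap}µs")
```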

We imagine latency could be cut down a notch if it used NVMe flash drives, and daresay a future E-Series array could do exactly that.

At the NetApp Insight event in Berlin, a Brocade stand showed NVMe-over-Fibre Channel access to a NetApp array, and that also did not have an end-to-end NVMe access scheme. Instead the array controller terminated the NVMe over fabrics connection and then despatched the incoming request to a specific drive or drives.

Once again, we can envisage that were NetApp to implement end-to-end NVMe, with last-mile NVMe access to the flash drives as it were, access latency could be cut even further.

It seems, though, that were such end-to-end NVMe access implemented, the array controller software would not know what changes had been made to the data on the drives, and so could not trigger data services based on content changes. The implications of that could be far-reaching. ®
