I'll admit, NetApp's NVMe fabric-accessed array sure has SAS, but it could be zippier

Jet plane, meet bike

By Chris Mellor

Analysis NetApp's E570 array supports NVMe over fabrics yet it does not use NVMe drives, potentially slowing data access.

The E-Series all-flash E570 was launched in September, claiming sub-100µs latency through its support of NVMe-over-Fabrics access.

It used RDMA over InfiniBand in front of its 24 flash drives, letting accessing host servers bypass latency-adding storage software stack code and have data written to or read from their memory directly.
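To picture the idea, here is a minimal, purely conceptual Python sketch: the host pre-registers a buffer, the read request names that buffer, and the array side places the data straight into it with no host-side block stack or data copy on the return path. Every class and method name here is invented for illustration; this is not a real RDMA or NVMe-over-Fabrics API.

```python
# Toy model of an NVMe-over-Fabrics read via RDMA direct data placement.
# All names are illustrative only; real code would use verbs/NVMe-oF libraries.

class HostMemory:
    """Host RAM with a pre-registered buffer the RNIC may write into."""
    def __init__(self, size):
        self.buffer = bytearray(size)   # stand-in for a registered memory region
        self.rkey = 0x1234              # stand-in for an RDMA remote key

class Target:
    """Array-side view: owns the flash media, answers read requests."""
    def __init__(self):
        self.media = {0: b"block-zero-data".ljust(4096, b"\0")}

    def handle_read_capsule(self, lba, host_mem, rkey, offset):
        # The request names a remote buffer (address plus rkey); the target
        # writes the data directly into host memory, so the host's block
        # stack never handles the payload on the read path.
        assert rkey == host_mem.rkey
        data = self.media[lba]
        host_mem.buffer[offset:offset + len(data)] = data

host = HostMemory(4096)
array = Target()
array.handle_read_capsule(lba=0, host_mem=host, rkey=host.rkey, offset=0)
print(bytes(host.buffer[:15]))   # b'block-zero-data'
```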

Part of the logic for using NVMe in this way was that accessing flash drives (SSDs) with SATA and SAS, which are basically disk drive access protocols, is sub-optimal: they are slow and add latency to data access from an SSD.

But putting NVMe-accessed flash drives in an array exposed the latency contributed by SAN network protocols such as iSCSI or Fibre Channel. So the NVMe-over-Fabrics protocol was devised, with 65,000 queues and 64,000 commands per queue. It provides remote direct memory access (RDMA) and bypasses the network protocol stacks at either end of the link. In the E570's case, Mellanox ConnectX InfiniBand adapters are used.
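A rough back-of-the-envelope Python sketch, using Little's law with assumed figures, shows why that queue model matters: at a fixed per-command latency, achievable IOPS scales with how many commands can be kept in flight, and NVMe's many deep queues allow far more outstanding commands than a single legacy SCSI-style queue. The latency and queue-depth numbers below are assumptions for illustration, not measurements.

```python
# Little's law upper bound: throughput = outstanding commands / latency.
# Latency and queue-depth figures are illustrative assumptions only.

def max_iops(outstanding_cmds, latency_s):
    """Upper bound on IOPS with this many commands kept in flight."""
    return outstanding_cmds / latency_s

latency = 100e-6  # assume 100 microseconds per I/O

# A single legacy SCSI-style queue, say 64 commands deep.
print(f"SCSI-ish, 1 x 64:  {max_iops(64, latency):,.0f} IOPS")

# NVMe-oF allows roughly 65,000 queues of up to 64,000 commands each;
# even a handful of modestly deep queues stops the queue being the bottleneck.
print(f"NVMe, 8 x 1,024:   {max_iops(8 * 1024, latency):,.0f} IOPS (bound; media saturates first)")
```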

Existing NVMe-over-Fabrics storage access latencies run from around 30µs (Mangstor NX array write) and 50µs (E8 write) on writes to 100-110µs on reads (E8 and Mangstor arrays respectively). The E570's 100µs latency is pretty damn good considering that it uses SAS SSDs, with a SCSI access stack, and has an NVMe-to-SAS bridge.

We imagine that it could cut the latency down a notch if it used NVMe flash drives, and we daresay a future E-Series array could do exactly that.
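To make that concrete, here is an illustrative latency budget in Python. The component figures are assumptions, not NetApp's numbers; the only anchor from the article is the roughly 100µs overall figure for the SAS-backed case, and the point is simply that removing the NVMe-to-SAS bridge and the SCSI drive stack trims the total.

```python
# Illustrative per-read latency budget, in microseconds.
# Component values are assumptions chosen to land near the article's ~100us
# figure for the SAS-backed case; they are not measured NetApp numbers.

sas_backed = {
    "host RDMA fabric hop": 10,
    "controller firmware":  20,
    "NVMe-to-SAS bridge":   15,
    "SAS/SCSI drive stack": 15,
    "NAND read":            40,
}

end_to_end_nvme = {
    "host RDMA fabric hop": 10,
    "controller firmware":  20,
    "NVMe drive access":     5,   # bridge and SCSI stack removed
    "NAND read":            40,
}

def total(budget):
    return sum(budget.values())

print(f"SAS-backed array:  ~{total(sas_backed)} us")
print(f"End-to-end NVMe:   ~{total(end_to_end_nvme)} us")
print(f"Potential saving:  ~{total(sas_backed) - total(end_to_end_nvme)} us")
```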

At the NetApp Insight event in Berlin, a Brocade stand showed NVMe-over-Fibre Channel access to a NetApp array, and that also did not have an end-to-end NVMe access scheme. Instead, the array controller terminated the NVMe-over-Fabrics connection and then despatched the incoming request to a specific drive or drives.

Once again, we can envisage that, were NetApp to implement end-to-end NVMe, with last-mile NVMe access to the flash drives as it were, access latency could be cut even further.

It seems, though, that were such end-to-end NVMe access implemented, the array controller software would not know what changes had been made to the data contents of drives in the array, and so could not trigger data services based on data content changes. The implications of that could be far-reaching. ®
