DSSD bridges access latency gap with NVMe fabric flash magic
Networked flash array with local flash speed
The DSSD outline scheme has a layer of I/O modules, each connecting to all the Flash Modules in the array. A pair of CMs (Control Modules?) link to each other and to the I/O modules. This reminds me a little of the backplane you find in monolithic arrays such as VMAX, which enables every engine to talk to every disk.
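As a rough sketch of that topology (module counts and names are my own illustrative assumptions, not DSSD's), the "every I/O module sees every Flash Module" property looks like a full mesh:

```python
# Illustrative model of the described layout: each I/O module connects to
# every Flash Module; two Control Modules link to each other and to all
# I/O modules. Counts below are assumed for illustration only.
IO_MODULES = [f"IOM{i}" for i in range(8)]
FLASH_MODULES = [f"FM{i}" for i in range(36)]
CONTROL_MODULES = ["CM0", "CM1"]

# Full mesh from I/O modules to flash, much like a monolithic array backplane.
links = {iom: set(FLASH_MODULES) for iom in IO_MODULES}
for cm in CONTROL_MODULES:
    links[cm] = set(IO_MODULES) | (set(CONTROL_MODULES) - {cm})

# Every Flash Module is one hop from every I/O module.
assert all(fm in links[iom] for iom in IO_MODULES for fm in FLASH_MODULES)
```

The point of the mesh is that no flash module is "owned" by one controller path, so any request can take any route to the media.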
The performance is shaping up to be close to that of an in-memory data copy operation:
Shapiro believes we should get ready for initial NVMe over Fabrics implementations in 2016. He envisages NVMe for mobile devices such as notebooks, tablets and smartphones.
Huffman describes a 64-node, hyper-converged system set-up with a 500TB all-flash virtual SAN array. It powered 6,400 VMware virtual machines and achieved 6.7 million random read IOPS, 2.4 million random write IOPS, 70GB/sec sequential reads and 33GB/sec sequential writes.
This configuration had 2,304 Xeon processors, meaning each node had 36 of them.
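A quick back-of-the-envelope check of those figures (using only the numbers quoted above):

```python
# Sanity-check the per-node arithmetic from Huffman's 64-node figures.
nodes = 64
xeons = 2304
read_iops = 6_700_000
write_iops = 2_400_000

xeons_per_node = xeons // nodes          # 2,304 / 64 = 36, as stated
read_iops_per_node = read_iops / nodes   # ~104,688 random read IOPS per node
write_iops_per_node = write_iops / nodes # ~37,500 random write IOPS per node
print(xeons_per_node, round(read_iops_per_node), round(write_iops_per_node))
```

So each node is pushing roughly a hundred thousand random read IOPS on its own.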
You get the picture: NVMe-connected systems will deliver performance that you can only dream of today if you spend millions of bucks on an in-memory system. Think of the next-generation VMAX-equivalent being a DSSD all-flash, NVMe Fabric-connected array.
This kind of system, if it is affordable – and the existence of 3D TLC NAND suggests it could be – would alter Big Data analytics. You would no longer need dozens of Hadoop nodes, each analysing its own local data. Instead, you would have a single big badass flash resource enabling servers to answer queries in real time or near-real time.
Let's extend this thinking a little. If we think of NVMeF as a (very fast) network interface then existing networked arrays could, in theory, be upgraded to use it. This only makes sense for all-flash arrays, as there's little point in getting rid of network latency for a disk drive array only to carry on waiting for the disk's own latency.
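The arithmetic behind that claim is simple enough to sketch. The latency figures below are assumed round numbers for illustration, not measurements: total access time is roughly fabric latency plus media latency, so shrinking the fabric term only matters when the media term is small too.

```python
# Assumed, round-number latencies (microseconds) to show why a faster
# fabric only pays off when the media behind it is also fast.
fabric_us = {"legacy SAN stack": 100, "NVMe over Fabrics": 10}
media_us = {"disk": 5000, "flash": 100}

for medium, m in media_us.items():
    for fabric, f in fabric_us.items():
        print(f"{medium:5s} over {fabric:18s}: {m + f:>5} µs total")
```

For disk, cutting the fabric from 100µs to 10µs moves the total from 5,100µs to 5,010µs, a rounding error; for flash it nearly halves it, from 200µs to 110µs.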
El Reg would hazard a guess that, come 2017, all all-flash arrays will be using NVMeF technology to serve up data to servers at near in-memory speed.
Get a copy of Mike Shapiro’s and Amber Huffman’s 47-slide presentation here. ®