EMC's DSSD rack flashers snub Fibre Channel for ... PCIe
Well, OK. That's one way of hooking up servers to data
Blocks and Files The rack-scale flash array technology EMC gained in its DSSD gobble connects to a server using PCIe. That's according to a Barron's interview with EMC’s product ops head Jeremy Burton.
A DSSD flash vault can fill a rack and hooks into a server using a PCIe connection – in fact it needs many, many lanes for a rack o’ flash to talk to a single server.
Burton said raw flash latency can be 60 microseconds, and claimed a networked all-flash array, using Fibre Channel, will have a read latency of about a millisecond, roughly 17 times longer. Your mileage may vary, of course.
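The arithmetic behind that "17 times" is easy to check (these are the article's round numbers, not measured figures):

```python
# Sanity-check the latency comparison quoted in the article.
flash_latency_us = 60       # raw flash read latency, per Burton
fc_array_latency_us = 1000  # networked Fibre Channel all-flash array, ~1 ms

ratio = fc_array_latency_us / flash_latency_us
print(f"The FC array read is ~{ratio:.1f}x the raw flash latency")
# 1000 / 60 ≈ 16.7, which rounds to the "roughly 17 times" above
```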
The DSSD technology uses a PCIe connection instead of Fibre Channel and, it's claimed, parallelises flash module reads – and writes, we suppose – to shed that extra latency.
We understand that each DSSD flash module has an embedded controller running Linux, and that the systems store objects, not files. The modules and flash chips can be written to in parallel, and you add modules to scale out the system. Protection comes from a 3D RAID scheme rather than erasure coding.
Logically, the DSSD flash is seen as storage memory, an adjunct to DRAM, and it does not need applications to access it via proprietary APIs, as is the case with Fusion-io's ioMemory PCIe flash inside a server. A Hadoop application can use HDFS to get at data inside a DSSD array, for instance. Other example DSSD workloads are:
- Compute and IO-intensive in-memory software (SAP HANA, GemFire, etc.)
- Big data
- Real-time analytics
- High-performance apps such as:
  - Real-time historical financial analysis
  - Entity tracking and querying
  - Latent Semantic Indexing (LSI)
The Fusion-io tech can provide, Burton says, up to 10TB of capacity (although Fusion-io insists that figure should be 32TB). That's not enough for the workloads EMC has in mind. A rack of flash could provide a petabyte or more of capacity.
Could a PCIe-connected DSSD flash rack link to several servers and be a server fabric-attached flash SAN? The Reg's storage desk thinks so. DSSD’s premise is about cutting latency in three ways:
- Getting rid of storage network protocols and connects like Fibre Channel or iSCSI, and replacing them with PCIe.
- Getting rid of the traditional app-to-storage-medium stack, and replacing it with simpler and more direct access.
- Doing parallel IO operations inside the flash array.
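That third point can be sketched with a toy model – hypothetical module names and timings, nothing to do with DSSD's actual internals. Reading a stripe spread across N modules concurrently costs roughly one module's latency rather than N of them:

```python
import time
from concurrent.futures import ThreadPoolExecutor

MODULE_READ_S = 0.01  # pretend each flash module read takes 10 ms

def read_module(module_id: int) -> bytes:
    """Simulate fetching one chunk of a stripe from a flash module."""
    time.sleep(MODULE_READ_S)
    return f"chunk-{module_id}".encode()

def read_stripe_serial(n: int) -> list:
    return [read_module(i) for i in range(n)]

def read_stripe_parallel(n: int) -> list:
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(read_module, range(n)))

start = time.perf_counter()
serial_chunks = read_stripe_serial(8)
serial_t = time.perf_counter() - start

start = time.perf_counter()
parallel_chunks = read_stripe_parallel(8)
parallel_t = time.perf_counter() - start

# Same data either way; the parallel read finishes in a fraction of the time.
print(f"serial: {serial_t:.3f}s, parallel: {parallel_t:.3f}s")
```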
Evan Powell, CEO and co-founder of startup StackStorm, also points out that the DSSD array has a native key-value facility in the box. He added: "The point about ... having native on-the-box key value store is that this helps make sure you don't give up in protocol overhead what you gain thanks to other advances including the PCIe attach."
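Powell's protocol-overhead point can be illustrated with a toy contrast (this is not DSSD's actual API): a native key-value lookup is a single hash-and-read, while a POSIX-style path walk resolves every directory component before it can touch the data.

```python
# Native key-value: one direct lookup per object.
kv_store = {"object-42": b"payload"}

# Filesystem-style access: nested "directories" to walk first.
fs_tree = {"data": {"objects": {"object-42": b"payload"}}}

def kv_get(key: str) -> bytes:
    return kv_store[key]  # one step, regardless of namespace depth

def fs_get(path: str) -> bytes:
    node = fs_tree
    for component in path.strip("/").split("/"):  # one step per component
        node = node[component]
    return node

# Both reach the same bytes; the KV path just has less stack in the way.
assert kv_get("object-42") == fs_get("/data/objects/object-42")
```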
It’s feasible, this back-of-an-envelope-flash-hack thinks, for a DSSD flash rack to serve as a very high-speed flash vault for several servers connected to it across a PCIe fabric, and to function as a server fabric-attached flash SAN. ®