
We asked a maker of PCIe storage switches to prove the tech is more interesting than soggy cardboard

Why not just use 10GE?

PCIe switching and NVMe fabrics

El Reg How does PCIe switching relate to NVMe fabrics?

Ray Jang PCIe switching will be an important part of NVMe over Fabrics, as it will provide the high-performance connectivity on the back side of the fabric. NVMe over Fabrics will be used to add distance and scale to the NVMe standard, and as such will allow clients to connect to large pools of NVMe SSDs. Those pools of drives need to be connected together and to the fabric NIC, and this is where a PCIe switching fabric will be needed.
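To make that "pool of drives behind a switch" layout concrete, here is a minimal sketch, assuming a Linux host with sysfs mounted, that lists each NVMe controller and the chain of PCIe bridge/switch ports sitting above it. The paths are standard Linux sysfs locations; the topology it prints depends entirely on the local hardware, and nothing here refers to specific PMC products.

import glob
import os

def pci_parents(dev_path):
    """Walk up from a PCI device directory, collecting upstream bridge BDFs."""
    parents = []
    path = os.path.dirname(dev_path)
    # PCI device directories are named like 0000:3b:00.0 (domain:bus:device.function)
    while os.path.basename(path).count(":") == 2:
        parents.append(os.path.basename(path))
        path = os.path.dirname(path)
    return parents

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    dev = os.path.realpath(os.path.join(ctrl, "device"))   # the SSD's PCI function
    chain = pci_parents(dev)                                # switch ports, then the root port
    print(f"{os.path.basename(ctrl)}: {os.path.basename(dev)} "
          f"behind {' -> '.join(chain) or 'root complex'}")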

El Reg Why not use InfiniBand instead?

Ray Jang Taking InfiniBand (or iWARP or RoCE) all the way to an individual NVMe SSD is probably too extreme in terms of cost and power for most deployment scenarios. InfiniBand and the other RDMA-capable fabrics offer great scalability and manageability, but cost more in dollars and watts. In many applications it makes more sense to use RDMA to connect to a pool of NVMe SSDs, and then use PCIe switching to connect the drives within the pool together.

El Reg Why not use 10Gbit Ethernet or 40Gbit Ethernet instead? Or 100Gbit Ethernet?

Ray Jang Using basic Ethernet to connect to NVMe SSDs is not a good choice because there has to be a protocol translation between Ethernet frames and NVMe commands. If we use a basic Ethernet NIC, then this translation has to be done in a CPU. This increases CPU load, adds latency, and causes problems when we try to scale performance, because the CPU becomes the bottleneck.
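A back-of-envelope sketch of that bottleneck: the clock speed, per-I/O translation cost, and IOPS figure below are illustrative assumptions, not measurements from PMC or The Register, but they show how quickly whole cores disappear into translation work.

cpu_ghz = 2.5                  # assumed core clock
cycles_per_translation = 5000  # assumed CPU cycles to turn an Ethernet-carried request into an NVMe command and back
target_iops = 1_000_000        # roughly what a single fast NVMe SSD can approach

cycles_needed = cycles_per_translation * target_iops
cores_needed = cycles_needed / (cpu_ghz * 1e9)
print(f"~{cores_needed:.1f} cores spent purely on protocol translation at {target_iops:,} IOPS")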

In NVMe over Fabrics there is a desire to use RDMA because it allows the protocol translation to be done in hardware, which improves performance and reduces CPU load. Again, it does not make sense to bring this fabric all the way to each individual drive, and that’s where PCIe switching has a very important role to play.
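As a hedged illustration of the host side of that split, the sketch below attaches a Linux initiator to an NVMe over Fabrics target over RDMA using the nvme-cli tool. It assumes nvme-cli and an RDMA-capable NIC are installed; the address and NQN are placeholders, not real PMC endpoints.

import subprocess

TARGET_ADDR = "192.168.10.20"                     # placeholder RDMA target IP
TARGET_NQN = "nqn.2016-06.io.example:ssd-pool-0"  # placeholder subsystem NQN

# 'nvme connect' hands the transport work to the kernel RDMA driver and the NIC,
# so the per-I/O translation stays off the host CPU data path.
subprocess.run(
    ["nvme", "connect",
     "--transport=rdma",
     f"--traddr={TARGET_ADDR}",
     "--trsvcid=4420",          # conventional NVMe/RDMA service ID
     f"--nqn={TARGET_NQN}"],
    check=True,
)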

'NVMe SSDs are all about performance and latency'

El Reg Is there anything else we should understand about the PMC PCIe switching technology and products?

Ray Jang NVMe SSDs are all about performance and latency. These devices will deliver the best performance when coupled with PCIe switching for local connectivity and RDMA for longer-range connectivity. PCIe switching allows us to connect many NVMe drives together without any protocol translation, and that is key for performance optimization.

In addition, the PMC Switchtec products have some unique features that make managing such a pool of drives cheaper, lower power, and easier. The combination of PMC Switchtec storage switches and Flashtec SSD controllers is enabling the next generation of performance storage solutions, and, combined with RDMA technology, they can do so at distance and scale.

El Reg We’re convinced NVMe connectivity to SSDs and to flash modules in other form factors, such as PCIe cards, is effectively an industry standard already. The fabric extension enables flash storage to be shared between a set of servers, and from that the provision of storage memory. That will give us faster memory-level transfer between server DRAM and shared flash storage, instead of slower data transfer across a server’s disk-based IO stack.

Speed, baby, speed; it’s all about reducing data access latency. ®
