
Welcome to the fast-moving world of flash connectors

A guided tour

PCIe outside

SATA Express (SATAe) is the result of the move to PCIe. So universal was the push to cut out the middleman and connect drives directly to PCIe that the SATA standards body simply threw in the towel.

SATA revision 3.2, or SATAe, is really just a transitional standard defining a set of interconnects that can support traditional SATA drives as well as expose PCIe lanes directly to the drives themselves. The protocol used over those lanes is NVMe, and SATAe can most accurately be called "NVMe over PCIe".
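To make that distinction concrete, here is a minimal sketch, assuming a Linux box with the usual sysfs layout, that labels each block device according to whether it arrived via the kernel's NVMe driver (over PCIe) or via the SCSI/ATA stack. The `classify` helper and the exact paths are purely illustrative, not part of any standard.

```python
# Minimal sketch: label block devices as NVMe-over-PCIe or SATA/SAS.
# Assumes Linux with a standard sysfs layout; the device naming
# conventions (nvme0n1, sda, ...) come from the kernel's NVMe and
# SCSI/ATA drivers.
import os

SYS_BLOCK = "/sys/block"

def classify(dev: str) -> str:
    # NVMe namespaces appear as nvme0n1, nvme1n1, ...
    if dev.startswith("nvme"):
        return "NVMe over PCIe"
    # Anything driven through the SCSI/libata layers appears as sdX,
    # which covers both SATA and SAS drives.
    if dev.startswith("sd"):
        return "SATA/SAS via the SCSI stack"
    return "other"

for dev in sorted(os.listdir(SYS_BLOCK)):
    print(f"{dev}: {classify(dev)}")
```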

SCSI Express is the evolution of SCSI to fill the same role. The SOP/PQI extensions to the SCSI protocol will result in "SCSI over PCIe", and we are off to the races once more.

M.2, formerly known as the Next Generation Form Factor (NGFF), is the PCIe-enabled replacement for mSATA. Whereas mSATA simply borrowed the PCI Express Mini Card form factor (resulting in some confusion), M.2 is a completely new connector.

M.2 is to all intents and purposes a SATAe connector. This means it exposes both a traditional SATA interface as well as PCIe lanes. It also exposes a USB 3.0 port, though given that M.2 is pretty much always an internal connector, I still haven't figured out why.

It's complicated

The dual nature of these new connectors makes understanding backwards compatibility a little difficult. Simple questions like "how do you plug a SATA drive into your new motherboard?" don't have simple answers.

In the new SATAe world, the host plug is the bit on the motherboard (or RAID card) that you plug into. You can plug two traditional SATA cables into this plug, and it has an extended section where the PCIe lanes are exposed. Using the SATAe plug for traditional SATA drives disables the PCIe portion.

A full proper SATAe cable has a host cable receptacle on the end, which is the bit that plugs into the motherboard, and it covers the whole host plug. It eats up the PCIe connector and both the traditional SATA connectors. This would typically be connected up to one drive, usually an SSD that wants the PCIe goodness so as to go fast.

In the world of server-style hot-swap trays things get a little stickier. The SAS connector we have used for the past decade or more is known as the SFF-8482 connector. SATA devices can plug into it. SAS devices can plug into it.

There was a SATA-only connector as well, and SAS devices couldn't plug into it. So long as you knew "SATA can plug into SAS but SAS doesn't plug into SATA" you were fine.

Today we have the 12Gbps Dual Port SAS (SFF-8680) connector as well as the 12Gbps MultiLink (Quad Port) SAS (SFF-8630) connector. As you might expect, traditional SATA as well as SAS drives plug into these ports just fine.

In theory, the Quad Port SAS SFF-8630 connector could have PCIe lanes attached, and thus SATAe drives could plug in as well, though it is rare to actually see support for this: every link in the chain, from drive to backplane to controller, has to support it.

In addition to the above, the SATAe standard defines the SATAe host receptacle. This is a separate plug that is backwards compatible with a traditional SATA device, but not a SAS device.

Only very horrible people filled with rage at humanity will ever employ this in their designs, so expect it to be sprinkled everywhere.

The wonderfully named SFF-8639 can theoretically solve all ills. It will support SATA, SAS, SATAe and most likely SAS Express. This is the interface that will be deployed by champions and people who have love in their hearts for their fellow man.

Of course, to be difficult, SFF-8639 can be deployed without the PCIe lanes, though I don't expect this to occur in the real world. Almost everyone is simply calling SFF-8639 a "hybrid" port, "hybrid SATA/NVMe" or simply "NVMe". As you can see from the above, that is inaccurate but it won't stop all of us from using that terminology anyway.
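If that is getting hard to track, the compatibility rules above boil down to a small lookup table. The sketch below is purely illustrative: the connector names and which drives they accept are summarised from this article, while the `ACCEPTS` table and `fits` helper are hypothetical names rather than anything from the standards.

```python
# Rough cheat-sheet of the connector compatibility rules described above.
# Edge cases (PCIe lanes wired to SFF-8630, an SFF-8639 deployed without
# PCIe lanes) depend on the platform, so treat this as a summary only.
ACCEPTS = {
    "SFF-8482 (SAS)": {"SATA", "SAS"},
    "legacy SATA-only": {"SATA"},
    "SFF-8680 (12Gbps dual-port SAS)": {"SATA", "SAS"},
    "SFF-8630 (12Gbps MultiLink SAS)": {"SATA", "SAS"},  # SATAe only if PCIe lanes run end to end
    "SATAe host receptacle": {"SATA", "SATAe"},           # SATA yes, SAS no
    "SFF-8639": {"SATA", "SAS", "SATAe"},                 # plus, most likely, SAS Express
}

def fits(drive: str, connector: str) -> bool:
    """Return True if a drive of the given type plugs into the connector."""
    return drive in ACCEPTS.get(connector, set())

print(fits("SAS", "legacy SATA-only"))  # False: SAS doesn't plug into SATA
print(fits("SATA", "SFF-8639"))         # True
```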

March of progress

The need to use PCIe to keep up with SSDs moves beyond just the connectors in the box. PLX has ExpressFabric and A3Cube has RONNIEE. Both seek to extend PCIe outside the server to bring its advantages to high-performance computing and data centre storage in a way that older technologies such as Fibre Channel or iSCSI simply cannot.

As PCIe becomes the connector of choice for the average SSD, memory channel storage (MCS) – SSDs in the RAM slots – is taking up the role once served by PCIe SSDs.

MCS is faster than PCIe with far lower and more consistent latency when under load. It is even closer to the CPU than PCIe storage, and it suffers from the same drawbacks as PCIe storage did back in the day.

MCS requires a server with a compatible BIOS. You need appropriate drivers, though MCS modules that speak NVMe are emerging. To swap a bad module you need to power down the server.

Still, MCS looks to be the next evolution of storage, already explored for high-end server usage just as we are dipping our toes into the commoditisation of PCIe-based standards.

The wheel never stops turning, but one thing's for sure: SSDs are no longer in the future. They are a fundamental design element of the tablets, notebooks and servers of today. ®
