
Need speed? Then PCIe it is – server power without the politics

No longer for nerds and HPC geeks

Today, all over again

Today, technologies have evolved somewhat to fill the gap. For some time storage devices have been able to do Direct Memory Access (DMA): they dump data directly into RAM without bothering the CPU. Network cards now offer the same capability, allowing a remote computer to write directly into a server's RAM without waiting on the slow network stack.

This matters because, without DMA, network cards can only move data at the speed of the operating system. A request from a remote computer to add information to RAM arrives at the network card, goes to the driver layer and is processed by the OS – which may or may not have to consult storage to know what to do with that request – before the data can finally be written.
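As a rough illustration of that conventional path, here is a hypothetical request handler in C (function and buffer names are mine, not any vendor's API). By the time recv() returns, the payload has already crossed the NIC, the driver and the kernel socket layer, with the CPU doing work at every hop:

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Sketch of the OS-mediated path: recv() is a syscall, and the
       kernel copies the payload out of its own buffers into ours. */
    ssize_t handle_write_request(int sock, char *ram, size_t cap)
    {
        char req[4096];
        ssize_t n = recv(sock, req, sizeof req, 0); /* syscall + kernel copy */
        if (n <= 0)
            return n;

        /* Only now does the data actually land in the server's RAM. */
        size_t len = (size_t)n < cap ? (size_t)n : cap;
        memcpy(ram, req, len);
        return (ssize_t)len;
    }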

Taking the OS (and its driver) out of the equation simplifies things. With Remote DMA (RDMA), the network card dumps the data into RAM and informs the OS where the data is. It requires a level of trust between the server and the remote device, but within the confines of a data centre the risk can be made acceptable.
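To make that trust relationship concrete, here is a minimal sketch using the Linux libibverbs API (queue pair setup, error handling and the transfer itself omitted). The server pins a buffer and produces a key; handing that key to a remote peer is precisely the trust the article describes, because whoever holds it can write straight into the buffer:

    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Grab the first RDMA-capable NIC the verbs library can see. */
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        size_t len = 4096;
        void *buf = malloc(len);

        /* Registration pins the buffer and yields an rkey: any remote
           peer holding the key may write here directly, bypassing the
           OS and its driver entirely. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) { fprintf(stderr, "registration failed\n"); return 1; }

        printf("share with the remote peer: addr=%p rkey=0x%x\n", buf, mr->rkey);

        /* ... queue pair setup and the actual RDMA write omitted ... */

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }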

Of course, it's never fast enough. We still want that Hypertransport computer where everything talks directly to the CPU. Some consider PCIe SSDs not quite fast enough, so Memory Channel Storage has evolved to bring storage even closer to the CPU and push latency lower still.

Similarly, RDMA networks are fast, but there is still a translation happening: PCIe is converted into Ethernet or InfiniBand and then back again. A new wave of startups, such as A3Cube, is emerging to cut out that translation. A3Cube was started by Emilio Billi, one of the folks behind Hypertransport.

He was (and remains) a strong proponent of extending Hypertransport outside the server. Unfortunately, Hypertransport outside the server hasn't taken the world by storm, and without a lot of politics and much broader support it probably never will.

So Billi is now extending PCIe outside the server and using it to lash nodes together. And A3Cube isn't the only company trying this, though it claims to be "the only company that is extending the PIO mode and the DMA out of the box without using RDMA but the direct memory mapping provided by PCIe".
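To give a flavour of what that direct mapping looks like from software: with a PCIe fabric adapter or non-transparent bridge, a window of another node's memory can appear as an ordinary PCIe BAR, and a plain CPU store lands in it. A minimal sketch, assuming Linux's sysfs resource files and a hypothetical device address:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical device: a fabric adapter exposing a window of a
       remote node's memory as BAR0 on the local PCIe bus. */
    #define BAR_PATH "/sys/bus/pci/devices/0000:03:00.0/resource0"
    #define WINDOW   4096

    int main(void)
    {
        int fd = open(BAR_PATH, O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *remote = mmap(NULL, WINDOW,
                                         PROT_READ | PROT_WRITE,
                                         MAP_SHARED, fd, 0);
        if (remote == MAP_FAILED) { perror("mmap"); return 1; }

        /* A plain store: the write is carried over the PCIe fabric
           into the mapped window -- no sockets, no RDMA verbs. */
        remote[0] = 0xdeadbeef;

        munmap((void *)remote, WINDOW);
        close(fd);
        return 0;
    }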

I am aware of other stealth-mode startups eyeing the PCIe-as-an-intersystem-interconnect space. Unlike Hypertransport, PCIe may well have a good head of steam behind it for this purpose, and it could move beyond "just a niche".

A3Cube is thus headed firmly into the territory of "multiple nodes behaving as one computer" rather than "PCIe solely as a replacement for (or carrier of) Ethernet" – though A3Cube can do that too. Some of the other startups are more interested in simply making today's clusters of computers go faster.
