PCI recast for supercomputing future
Double speed or more by 2015
The next generation of the PCI interconnect standard will be aimed squarely at high-performance computing, and it will be developed under a different scheme than previous generations were.
"The solution space that we're targeting for 'gen-four', if you will, is going to be directly focused to service the needs of HPC applications," the PCI-SIG's Serial Communications Workgroup chair Ramin Neshati told The Reg during this week's Intel Developer Forum.
"Gen-three" – aka PCIe 3.0, which was released last November after years of work – runs at a healthy 8GT/s (gigatransfers per second). The target for gen-four is 16GT/s over copper, a transfer rate snappy enough to have few applications outside of HPC.
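For context, a raw line rate in GT/s only becomes usable bandwidth after encoding overhead is subtracted. A quick sketch of the arithmetic, assuming the 128b/130b encoding that PCIe 3.0 is documented to use would also apply to a 16GT/s gen-four (an assumption on our part; the gen-four encoding had not been finalized):

```python
def lane_bandwidth_gbps(gt_per_s, payload_bits, total_bits):
    """Usable per-lane bandwidth in Gb/s after encoding overhead."""
    return gt_per_s * payload_bits / total_bits

# PCIe 1.x/2.0 use 8b/10b encoding; PCIe 3.0 moved to the leaner 128b/130b.
gen2 = lane_bandwidth_gbps(5.0, 8, 10)      # ~4.0 Gb/s per lane
gen3 = lane_bandwidth_gbps(8.0, 128, 130)   # ~7.88 Gb/s per lane

# Hypothetical gen-four at 16GT/s, assuming 128b/130b carries over:
gen4 = lane_bandwidth_gbps(16.0, 128, 130)  # ~15.75 Gb/s per lane

# Under that assumption, a x16 slot would move roughly 31.5 GB/s each way.
x16_gen4_GBps = gen4 * 16 / 8
```

The doubling from gen-three to gen-four falls out directly: with the same encoding, twice the line rate means exactly twice the usable bandwidth.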
"By and large, we believe that gen-one, gen-two, and even gen-three will be good enough for the broad spectrum of applications for a long, long time," Neshati said. When asked what "a long, long time" means, his answer was simple and straightforward. "Forever."
"Gen-four will be more of a boutique-type application for very few topologies," he said. "Gen-three will be good enough for the world."
Initial studies for gen-four, aka PCIe 4.0, have begun, and the same low-cost, high-volume, and compatibility goals underpin those studies and the discussions they involve. Although the goal is 16GT/s over copper, Neshati says that higher transfer rates might be possible.
The development process for PCIe 4.0 will be different from previous generations. "In gen-one, gen-two, gen-three," he said, "we identified a worst-case scenario – say, for example, a server channel of 20-inch [with] two connectors. A very tough topology to solve."
The advantage of building a standard around a worst-case scenario is easy to understand: if it can handle a worst case, less-demanding cases should be a walk in the park.
For PCIe 4.0, however, the PCI-SIG is taking a different tack – what Neshati described as "a more optimistic channel" – using as its design base a short channel of eight to 10 inches with one connector.
"If you solve it for that topology, then any worse topology – longer channel – will have to pay to get there," Neshati said. This "pay as you go" scheme, as he called it, would for example require the extra expense of a repeater if an implementation required a longer channel.
"So there's a mental shift here," he said, "from a 'solve it for the worst case' to 'solve it for the best case', and then add costs to solve it for the worst cases."
The reasoning behind the shift is simple: at these performance levels, solving for the worst case would introduce costs that would burden implementers of less-demanding applications.
So, how will 16GT/s over copper be accomplished? "We're looking at connector improvements, keeping it mechanically the same but electrically improving the connector," Neshati said. Other improvements to be investigated might include changes in silicon design, channel improvements to mitigate crosstalk and discontinuity, and using different materials in the channel.
"With these knobs," Neshati said, "we think we have line of sight to get to 16 gig on copper – maybe even higher."
But the HPC world will need to wait a bit before incorporating PCIe 4.0 into its installations. Neshati thinks that the bit rate will be set late this year or early next, which will then become the basis of further studies leading to a specification, and then to silicon to test the spec.
"We're targeting – based on member feedback – a 2015, 2016 adoption cycle for gen-four," he said. To get products into the field by then, he believes that the spec will need to be finalized by 2013 – an aggressive timeline, to say the least.
The question, of course, arises: if PCIe 3.0 will last "forever" and PCIe 4.0 will start in HPC and only gradually trickle down into servers, how will the PCI-SIG itself remain relevant?
"As long as there are two pieces of silicon that need to talk to each other, and they need to talk to each other through a standard interface," Neshati told us, "then PCI-SIG will be relevant." ®