Original URL: https://www.theregister.com/2007/08/28/intel_csi_kanter/

Pesky hack divulges Intel's 'Project Copy HyperTransport'

CSI Mountain View

By Ashlee Vance

Posted in Channel, 28th August 2007 22:07 GMT

Intel's plan to unveil its copy of AMD's HyperTransport technology at next month's Intel Developer Forum (IDF) has been spoiled by a rather thorough analyst.

Back in June, Intel confirmed some dates around its CSI interconnect for the first time, saying the technology would appear with products shipping in 2008. CSI, which will apparently be called QuickPath, is a big deal since it moves Intel in the direction of its rivals by ditching the front-side bus. Intel looked set to use IDF as its early brainwashing platform for CSI.

David Kanter at Real World Technologies, however, had his own ideas. The young fella today dished out a whopping 13 pages on CSI. Kanter, we're told, spent months poring over Intel's patents and talking to engineers to craft what is a remarkably detailed report on the technology.

You'll find the paper here. Semiconductor amateurs need not apply.

Kanter's coverage beats out anything we're going to regurgitate, so we'll keep this short, highlighting just a couple of choice tidbits from the report.

First we have the basics.

Unlike the front-side bus, CSI is a cleanly defined, layered network fabric used to communicate between various agents. These ‘agents’ may be microprocessors, coprocessors, FPGAs, chipsets, or generally any device with a CSI port. There are five distinct layers in the CSI stack, from lowest to highest: Physical, Link, Routing, Transport and Protocol.
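
If it helps to see that layering written down as something other than prose, here's a quick toy sketch in Python (ours, not Kanter's, and certainly not Intel's actual interfaces). Only the layer names and the list of example agents are lifted from the quote above; every class and method is made up purely for illustration.

    # Illustrative sketch only: layer names come from Kanter's report;
    # the classes and methods are our own invention, not Intel's spec.
    from enum import Enum


    class CsiLayer(Enum):
        """The five layers of the CSI stack, lowest to highest."""
        PHYSICAL = 1   # electrical signalling and clocking
        LINK = 2       # flow control and delivery between two agents
        ROUTING = 3    # picks the path a packet takes through the fabric
        TRANSPORT = 4  # end-to-end delivery across multiple hops
        PROTOCOL = 5   # cache coherence and other high-level transactions


    class CsiAgent:
        """Any device with a CSI port: CPU, coprocessor, FPGA or chipset."""

        def __init__(self, name: str):
            self.name = name

        def send(self, payload: str) -> None:
            # A request starts at the Protocol layer and is handed down the
            # stack until the Physical layer puts bits on the wire.
            for layer in sorted(CsiLayer, key=lambda l: l.value, reverse=True):
                print(f"{self.name}: {layer.name} layer handles {payload!r}")


    if __name__ == "__main__":
        CsiAgent("cpu0").send("read cache line")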

And then some performance notes.

Initial CSI implementations in Intel’s 65nm and 45nm high performance CMOS processes target 4.8-6.4GT/s operation, thus providing 12-16GB/s of bandwidth in each direction and 24-32GB/s for each link. Compared to the parallel P4 bus, CSI uses vastly fewer pins running at much higher data rates, which not only simplifies board routing, but also makes more CPU pins available for power and ground.
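
For the curious, those numbers hang together if you assume each direction of a link moves roughly 20 bits per transfer. That width is our guess to make the arithmetic work, not something the excerpt states, but the sums do come out, as a quick Python check shows:

    # Back-of-the-envelope check of the bandwidth figures above. The 20-bit
    # link width is our assumption; it is what makes the quoted numbers line
    # up, not something spelled out in the excerpt.
    LINK_WIDTH_BITS = 20  # assumed bits transferred per cycle, each direction

    for transfer_rate_gt_s in (4.8, 6.4):
        per_direction_gb_s = transfer_rate_gt_s * LINK_WIDTH_BITS / 8
        per_link_gb_s = 2 * per_direction_gb_s  # links run in both directions
        print(f"{transfer_rate_gt_s} GT/s -> {per_direction_gb_s:.0f} GB/s "
              f"per direction, {per_link_gb_s:.0f} GB/s per link")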

Kanter then closes with CSI's place in the chip game.

CSI will be a turning point for the industry. In the server world, CSI, paired with an integrated memory controller, will erase or reverse Intel’s system architecture deficit to AMD. Intel’s microprocessors will need less cache because of the lower memory and remote access latency; the specs for Tukwila call for 6MB/core rather than the 12MB/core in Montecito. This in turn will free up more die area for additional cores, or more economical die sizes. These changes will put Intel on a more equal footing with AMD, which has had a leg up in system architecture with its integrated memory controller and HyperTransport. As a result, Intel will be in a good position to retake lost market share in the server world in 2008/9 when CSI-based systems debut.

In some ways, CSI and integrated memory controllers are the last piece of the puzzle to get Intel’s servers back on track. The new Core microarchitecture has certainly proven to be a capable design, even when paired with the front side bus and a discrete memory controller. The multithreaded microarchitecture for Nehalem, coupled with an integrated memory controller and the CSI system fabric, should be an even more impressive product. For Intel, 2008 will be a year to look forward to, thanks in no small part to the engineers who worked on CSI.

Enjoy. ®