Battle of the Buses? Not really

InfiniBand, HyperTransport, RapidIO, PCI-Express

Server Briefing The bottom line: today's servers need more system bandwidth - a lot more bandwidth. Faster processors need to be pumped with instructions and data if they're to process information efficiently. Increasingly, that information comes from multiple sources within the box and outside of it. And the processed data has to be passed on to ever more users through permanent 24x7 connections.

In short, the old PCI bus has been pushed to its limit. Some applications have gone beyond it, necessitating alternative technologies to supply the bandwidth that PCI can't provide. Almost all PC graphics cards now require a dedicated bus, AGP. And in almost all high-end systems, chip-to-chip technologies like Intel's HubLink have ousted PCI from the role.

In the server world, the strategy of utilising multiple PCI buses has emerged, but at the cost of adding extra host-to-PCI and PCI-to-PCI bridge chips, which increases power consumption, board size and price.

Clearly, then, PCI needs a successor, and there are already four main challengers for the role: InfiniBand, HyperTransport, RapidIO and PCI-Express. It's easy to view them as competitors, as fervent AMD and Intel supporters are keen to do, given each company's support for certain technologies. But while there's some cross-over, there's also plenty of opportunity for co-existence - and perhaps even interoperability, as witnessed by moves to align RapidIO and PCI-Express, for instance.

All the players certainly improve on PCI, offering many times greater bandwidth and far lower latencies. All seek to minimise power consumption and system real estate through lower pin counts. All transfer network-style packet-based data structures over point-to-point links. InfiniBand and PCI-Express are serial architectures; HyperTransport and RapidIO use parallel links (though a serialised version of RapidIO is in the works).
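As a rough illustration of the serial-versus-parallel trade-off described above (the figures are assumptions, not from the article): a single PCI-Express 1.0 lane signals at 2.5Gbps with 8b/10b encoding, so roughly four signal pins per direction deliver more usable bandwidth than classic 32-bit/33MHz PCI manages with around 50 signal pins.

```python
# Back-of-the-envelope sketch (assumed figures): bandwidth per pin for
# a narrow serial lane versus a wide shared parallel bus.

def serial_lane_mbps(signal_gbps: float = 2.5, coding_eff: float = 0.8) -> float:
    """Usable data rate of one serial lane in MB/s, one direction.

    Defaults assume PCI-Express 1.0: 2.5Gbps signalling, 8b/10b
    encoding (80% efficiency).
    """
    return signal_gbps * coding_eff * 1000 / 8  # Gbps -> MB/s

def parallel_bus_mbps(clock_mhz: float = 33.33, width_bits: int = 32) -> float:
    """Peak rate of a classic shared parallel bus; defaults are plain PCI."""
    return clock_mhz * width_bits / 8  # MHz x bytes -> MB/s

print(serial_lane_mbps())   # 250 MB/s from one lane (~4 signal pins)
print(parallel_bus_mbps())  # ~133 MB/s from a bus dozens of pins wide
```

The pin-count saving, not just the raw speed, is what lets the serial designs scale by simply ganging up lanes.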

Not that PCI is entirely past it, having evolved into PCI-X with its system bus-level 133MHz clock and 1.066GB/s of bandwidth. It's already being used in servers to hook up high-bandwidth Fibre Channel, SCSI and Gigabit Ethernet cards. The recently released preliminary 2.0 spec takes PCI-X to 266MHz and 533MHz, doubling and quadrupling, respectively, the available bandwidth. That paves the way for 20Gbps InfiniBand 4x mode support, which is rapidly becoming the InfiniBand entry level - hence Intel's decision to get out of InfiniBand silicon development, having devoted its resources to 1x.
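Those PCI-X figures are simple width-times-clock products and are easy to sanity-check (a sketch; the 64-bit bus width is assumed from the PCI-X spec):

```python
# Sanity-check of the PCI-X bandwidth figures quoted above.
# PCI-X is a 64-bit parallel bus, so peak bandwidth is just
# bus width (in bytes) multiplied by clock frequency.

def pci_x_bandwidth_gbps(clock_mhz: float, width_bits: int = 64) -> float:
    """Peak PCI-X bandwidth in GB/s (decimal gigabytes)."""
    return (width_bits / 8) * clock_mhz * 1e6 / 1e9

for mhz in (133.33, 266.67, 533.33):
    print(f"PCI-X @ {mhz:g} MHz: {pci_x_bandwidth_gbps(mhz):.2f} GB/s")
```

At 133MHz the formula gives the 1.066GB/s quoted above; the 266MHz and 533MHz speeds of the 2.0 spec double and quadruple it, as claimed.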

PCI-Express will offer sufficient bandwidth for a direct connection to 4x InfiniBand, but with hardware unlikely to appear before late 2003 or early 2004, PCI-X 2.0 should continue as an interim technology for now.

Unless, of course, HyperTransport proves a success. Like PCI-Express, it addresses the limitations of PCI and PCI-X, and with AMD committed to building it into Opteron server chips scheduled to ship early next year, there's a real opportunity to get in ahead of PCI-Express and provide high-bandwidth connections between InfiniBand fabrics and the CPU.

InfiniBand support is essential, since it's clearly the basis for data centres of the future - 3-4 million of them by 2005, according to InfiniBand chip maker Mellanox. InfiniBand is really about box-to-box communications, allowing servers and storage units to talk to each other directly across a mesh of switched peer-to-peer links.

That leaves HyperTransport and PCI-Express battling it out for the local bus. But the real choice is arguably not between the bus technologies themselves but between the CPUs they connect: AMD customers will get HyperTransport; Intel customers, PCI-Express. Both will hook up to InfiniBand and to legacy PCI hardware.

HyperTransport, with its very low latencies, also targets the embedded and networking markets, as does the Motorola and IBM-backed RapidIO. Again, choice of processor should govern choice of system logic - certainly both offer comparable performance and functionality. PowerPCs with RapidIO built in are expected to start shipping later this year, some of which are likely to make it into low-power blade servers. ®
