Intel pits QDR-80 InfiniBand against Mellanox FDR

And hints at integrated networking on future CPUs

Intel doubles up QDR for two-socket boxes

So, with Mellanox pushing 56Gb/sec InfiniBand on adapters that are well suited to the on-chip PCI-Express 3.0 ports on the Intel Xeon E5-2600 processors (you need the 2X increase in bandwidth over PCI-Express 2.0 to push the FDR InfiniBand card hard), what is Intel countering with? Something it calls QDR-80.
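
For a sense of why those Gen3 ports matter, here is a rough back-of-envelope comparison – our numbers and assumptions, not Intel's – of what the slots and the links can actually carry once line encoding is taken out:

```python
# Back-of-envelope bandwidth check (assumed figures, not from the article):
# QDR InfiniBand runs 4 lanes x 10Gb/s with 8b/10b encoding, FDR runs
# 4 lanes x 14.0625Gb/s with 64b/66b encoding. PCI-Express 2.0 uses 5GT/s
# lanes with 8b/10b, PCI-Express 3.0 uses 8GT/s lanes with 128b/130b.

def payload_gbps(lanes, rate_gtps, encoding_efficiency):
    """Raw lane rate times lane count, scaled by line-code efficiency."""
    return lanes * rate_gtps * encoding_efficiency

pcie2_x8 = payload_gbps(8, 5.0, 8 / 10)       # ~32.0 Gb/s
pcie3_x8 = payload_gbps(8, 8.0, 128 / 130)    # ~63.0 Gb/s
qdr_4x   = payload_gbps(4, 10.0, 8 / 10)      # ~32.0 Gb/s
fdr_4x   = payload_gbps(4, 14.0625, 64 / 66)  # ~54.5 Gb/s

print(f"PCIe 2.0 x8: {pcie2_x8:.1f} Gb/s vs QDR 4x: {qdr_4x:.1f} Gb/s")
print(f"PCIe 3.0 x8: {pcie3_x8:.1f} Gb/s vs FDR 4x: {fdr_4x:.1f} Gb/s")
```

The upshot: a Gen2 x8 slot tops out at roughly the QDR data rate, so an FDR card only stretches its legs in a Gen3 slot.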

By the way, Yaworski tells El Reg that 75 per cent of the HPC market can be addressed by QDR InfiniBand. And to be honest, many clusters where latency or cost matters more than bandwidth are still being built with Gigabit Ethernet switches, although 10GE switches are catching on as they come down in price and offer very low latency.

Intel's QDR-80 gives each socket its own QDR InfiniBand adapter

When Intel was doing the QLogic deal, Yaworski was perfectly upfront with El Reg, saying that QLogic was not sure before the acquisition if it would do FDR InfiniBand – and in the wake of it, Intel remained unsure. The QDR-80 approach splits the difference by giving each socket in a two-socket server node its own QDR InfiniBand card rather than trying to push them both to talk over a single FDR InfiniBand card.

The important thing about QDR-80, says Yaworski, is that one socket does not have to send its traffic over the pair of QuickPath Interconnect (QPI) links that glue the two processors to each other to make a single system image. The Intel compilers know about QDR-80 and how to arrange code so it doesn't try to go over the QPI link.
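
To make the idea concrete, here is a hypothetical sketch – not Intel's code, and assuming a Linux node that exposes adapter locality through sysfs – of how a NUMA-aware job launcher could hand each rank the InfiniBand adapter hanging off its own socket rather than one across the QPI links:

```python
# Hypothetical sketch of the QDR-80 idea: pick the InfiniBand adapter attached
# to the local socket so traffic never has to cross the QPI links. Assumes a
# Linux box where /sys/class/infiniband/<hca>/device/numa_node reports which
# socket each adapter hangs off; device names and paths will vary.
import glob
import os

def local_hca(numa_node):
    """Return the name of an HCA attached to the given NUMA node, if any."""
    for dev_path in glob.glob("/sys/class/infiniband/*"):
        node_file = os.path.join(dev_path, "device", "numa_node")
        try:
            with open(node_file) as f:
                if int(f.read().strip()) == numa_node:
                    return os.path.basename(dev_path)
        except (OSError, ValueError):
            continue
    return None

# Example: a rank pinned to socket 1 asks for the adapter wired to socket 1.
print(local_hca(1))  # e.g. "qib1" on a two-adapter node, or None if absent
```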

It is hard to argue with this logic, particularly when Yaworski says that the regular QLogic QDR adapters are cheap enough that you can have two of them for the price of a single FDR InfiniBand adapter.

That is, it may be hard to argue, but not impossible.

For one thing, twice as many adapters take up twice as many slots and twice as many cables. That extra cabling is a real issue, although doubling up components does buy you a certain amount of redundancy.

To El Reg's thinking, this QDR-80 approach might argue for a single-socket Xeon E3-1200 v2 server node with a QDR or FDR InfiniBand adapter welded right onto the motherboard. It is not like HPC customers put a lot of memory on their nodes, anyway – look at the data above. QDR-80 can be thought of as a kind of networking SMP – or as a way of turning a two-socket box into a pair of microservers sharing a chassis.

So with InfiniBand and Ethernet both pushing up to 100Gb/sec speeds soon and more beyond that, what is Intel's plan for the future of InfiniBand?

Yaworski is not making any commitments. "Obviously, going above 80Gb/sec is a goal," says Yaworski, hedging with a laugh. "It will definitely be greater than 80Gb/sec and less than 200Gb/sec."

The real issue, of course, is not bandwidth. It's latency. And that is the stickler that is going to make exascale systems a challenge.

"We think we can drive it down lower," says Yaworski, and part of that is done by eliminating hops across the QPI links and the PCI-Express bus. "Every foot of cabling adds latency," so getting server nodes closer to each other is as important as getting network interfaces down onto the processors. But because of legacy peripheral needs, Intel does not expect to ditch the PCI bus any time soon, so don't get excited and have flashbacks to the original InfiniBand plan.

"We know that fabrics are the next bottleneck," says Yaworski. "We know we need a new initiative to address these needs. We know we need to drive the fabric down closer and closer to the processor, which drives up bandwidth and drives up scalability by reducing latency."

Intel is not making any promises about how and when it will add networking to Xeon processors, although we surmise that "Haswell" Xeons could have Ethernet ports on the die, and maybe, just maybe, "Ivy Bridge" Xeons not in the E3 family will, too.

Diane Bryant, general manager of Intel's Data Center and Connected Systems Group, admitted to El Reg last fall that the future "Avoton" 22-nanometer Atom S-series chip, due later this year, would have an Ethernet controller on the die.

This is in contrast to the distributed Layer 2 switch and virtual Ethernet ports that the multicore ECX-1000 ARM server chips from Calxeda have on the die. That on-chip fabric can lash up to 4,096 nodes into a network (with various topology options), and in future years it will grow to handle 100,000 nodes and more, if all the plans work out. All this without a single top-of-rack switch.

Think about that for a minute and see how this may upset Intel's switching-market dominance plans. Think about the lock-in that gives a vendor like Calxeda. Ask yourself why Calxeda is not just saying to hell with Dell and HP and building its own servers on Open Compute enclosures.

Intel, being professionally paranoid, is keenly aware of this and has spent what is probably on the order of $425m to buy those three networking companies and get their engineers focused on this and other problems in the hyperscale data centers of the future. The way Yaworski talks about it, InfiniBand is being positioned for the volume HPC space, and the Cray Aries interconnect (and, more importantly, its follow-ons) is being positioned for the real exascale systems.

What Intel almost certainly does not want to do is make custom versions of Xeon processors with hooks into the Cray interconnect that are distinct from Xeon processors that have controllers that speak InfiniBand or Ethernet (as the ConnectX adapters and SwitchX ASICs from Mellanox do).

In this thought experiment, what you want is for a Xeon chip to have network controllers that can speak Ethernet, InfiniBand, or Aries, and that link through optical fiber ports out to an InfiniBand or Ethernet switch or an Aries router sitting off the motherboard. Now you only have one Intel chip, but you can deploy it in many ways and not be bottlenecked by the PCI bus on the server. Yaworski would not confirm or deny that this is the plan.

"All I can say is that we will drive the fabric closer and closer to the CPU," he said, "and eventually achieve integration into the CPU that allows us to cover the key market segments that we are targeting, which is HPC, data center, and cloud." ®
