Mellanox forges switch-hitting ConnectX-3 adapters

The server companion to SwitchX switch chips

Networking chip, adapter card and switch maker Mellanox is rounding out its converged InfiniBand-Ethernet product line with the debut of the ConnectX-3 integrated circuits and network adapter cards built using the chips.

Mellanox has been selling multi-protocol chips and adapter cards for servers for a number of years, and back in April the company announced its first switch-hitting chips, called SwitchX, which implement both 40Gb/sec Ethernet and 56Gb/sec InfiniBand on the same piece of silicon. Those SwitchX chips came to market in May at the heart of the SX1000 line of 40GE Ethernet switches. Later this year, the SwitchX silicon will be used to make a line of InfiniBand switches and eventually, when the multi-protocol software is fully cooked, it will appear in a line of switches that can dynamically switch between Ethernet and InfiniBand on a port-by-port basis.

The long-term goal at Mellanox – and one of the reasons it bought two-timing InfiniBand and Ethernet switch maker Voltaire back in November for $218m – is to allow customers to wire once and switch protocols on the server and switch as workloads require. Mellanox can presumably charge a premium for such capability, and the SwitchX and ConnectX-3 silicon also allow Mellanox to create fixed-protocol adapters and switches at specific speeds to target specific customer needs and lower price points.

The Mellanox ConnectX-3 adapter chip

The ConnectX-3 silicon announced today is the first Fourteen Data Rate (FDR, running at 56Gb/sec) InfiniBand adapter chip to come to market. When running the InfiniBand protocol, it supports Remote Direct Memory Access (RDMA); Fibre Channel over InfiniBand (FCoIB); and Ethernet over InfiniBand (EoIB). RDMA is the key feature that lowers latencies on server-to-server links because it allows a server to bypass the entire network stack and reach right into the main memory of an adjacent server over InfiniBand links and grab some data.
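To make that zero-copy path concrete, here is a minimal sketch using the generic libibverbs API (nothing ConnectX-3-specific, and it assumes a protection domain and an already-connected queue pair have been set up elsewhere, since the connection plumbing is omitted): the local host registers a buffer and posts an RDMA READ work request that pulls data straight out of the remote server's registered memory, with no remote CPU involvement and no trip through the kernel network stack on the data path.

/* Minimal RDMA READ sketch using libibverbs. Assumptions: the queue pair (qp)
 * is already connected and the peer has advertised remote_addr/remote_rkey for
 * a buffer it registered; all of that setup is omitted here. */
#include <stddef.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int post_rdma_read(struct ibv_qp *qp, struct ibv_pd *pd,
                   void *local_buf, size_t len,
                   uint64_t remote_addr, uint32_t remote_rkey)
{
    /* Register the local buffer so the adapter can DMA the fetched data into it. */
    struct ibv_mr *mr = ibv_reg_mr(pd, local_buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_READ,   /* read the peer's memory directly */
        .send_flags = IBV_SEND_SIGNALED,
        .wr         = { .rdma = { .remote_addr = remote_addr,
                                  .rkey        = remote_rkey } },
    };

    struct ibv_send_wr *bad_wr = NULL;
    /* The adapter does the transfer; the remote CPU and kernel stack are bypassed. */
    return ibv_post_send(qp, &wr, &bad_wr);
}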

The ConnectX-3 chip supports InfiniBand running at 10Gb/sec, 20Gb/sec, 40Gb/sec, and 56Gb/sec speeds. On the Ethernet side, the ConnectX-3 chip implements the 10GE or 40GE protocols and supports RDMA over Converged Ethernet (RoCE), Fibre Channel over Ethernet (FCoE), and Data Center Bridging (DCB). The new silicon also supports SR-IOV – a PCI I/O virtualization and isolation standard that allows multiple operating systems to share a single PCI device – and IEEE 1588, a standard for synchronizing host server clocks to a master data center clock.
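As a rough illustration of what the SR-IOV item on that feature list means in practice, the sketch below is an assumption-heavy example for a reasonably recent Linux host, using the kernel's generic sriov_totalvfs and sriov_numvfs sysfs files and a made-up PCI address rather than anything tied to the ConnectX-3 driver: it asks one physical adapter how many virtual functions it can expose and then carves out a handful, each of which appears as its own PCI device that a separate guest operating system can be given.

/* SR-IOV sketch: query and enable virtual functions through Linux sysfs.
 * Assumptions: a recent kernel exposing sriov_totalvfs/sriov_numvfs, root
 * privileges, and a hypothetical adapter at PCI address 0000:03:00.0. */
#include <stdio.h>

int main(void)
{
    const char *dev = "/sys/bus/pci/devices/0000:03:00.0";   /* hypothetical device */
    char path[256];
    int total = 0;

    /* How many virtual functions does the physical function advertise? */
    snprintf(path, sizeof(path), "%s/sriov_totalvfs", dev);
    FILE *f = fopen(path, "r");
    if (!f) {
        perror("sriov_totalvfs");
        return 1;
    }
    if (fscanf(f, "%d", &total) != 1)
        total = 0;
    fclose(f);
    printf("device supports up to %d virtual functions\n", total);

    /* Carve out four VFs; each can be passed through to a different guest OS. */
    snprintf(path, sizeof(path), "%s/sriov_numvfs", dev);
    f = fopen(path, "w");
    if (!f) {
        perror("sriov_numvfs");
        return 1;
    }
    fprintf(f, "4\n");
    fclose(f);
    return 0;
}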

John Monson, vice president of marketing at Mellanox, tells El Reg that the important thing about the ConnectX-3 adapter card chip is that it is tuned to match the bandwidth of the forthcoming PCI-Express 3.0 bus. PCI-Express 3.0 slots are expected to arrive with the next generation of servers later this year, and Ethernet and InfiniBand adapter cards are usually designed for x8 slots. The ConnectX-3 chip can also be implemented on PCI-Express 1.1 or 2.0 peripherals if companies want to make cards that run at lower speeds on slower buses.

The ConnectX-3 chip is small enough to be implemented as a single-chip LAN-on-motherboard (LOM) module, which is perhaps the most important factor in allowing widespread adoption of 10GE and, later, 40GE networking in data centers. The ConnectX-3 chip includes the PHY networking features, so you don't have to add these to the LOM; all you need are some capacitors and resistors and you are good to go, says Monson. The ConnectX-3 chip will also be used in PCI adapter cards and in mezzanine cards that slide into special slots on blade servers. Hewlett-Packard, IBM, Dell, Fujitsu, Oracle, and Bull all OEM Mellanox silicon, adapters, or mezz cards for their respective server lines to support InfiniBand, Ethernet, or converged protocols. It is not entirely clear if blade server makers will stick with their current mezz card designs or implement LOM for 10GE networking. "It will be interesting to see how this will play out," Monson says.

The ConnectX-3 chip has enough oomph to implement two 56Gb/sec InfiniBand ports, two 40Gb/sec Ethernet ports, or one of each. Obviously, with an x8 PCI-Express 3.0 slot running at 8GT/sec, you have a peak of 64Gb/sec across the eight lanes on the bus, and with encoding and protocol overhead you end up somewhere around 56Gb/sec of usable bandwidth for a single x8 slot. So putting two FDR InfiniBand or two 40GE ports on the same bus could saturate it, depending on the workload. (It is a wonder that network cards for HPC servers are not made to plug into x16 slots, but for whatever reason, they are not.)
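The arithmetic behind that caveat is easy to sanity-check. The snippet below is a back-of-the-envelope sketch that uses the rough overhead figure quoted above rather than a formal PCI-Express 3.0 efficiency model, and it shows why either pair of full-rate ports can outrun a single x8 slot.

/* Back-of-the-envelope bus math: an x8 PCI-Express 3.0 slot versus two FDR
 * InfiniBand or two 40GE ports. The ~12.5 per cent overhead is the article's
 * rough figure, not a formal PCIe efficiency calculation. */
#include <stdio.h>

int main(void)
{
    double per_lane_gt = 8.0;              /* PCI-Express 3.0 signalling, GT/s per lane */
    int lanes = 8;                         /* typical x8 adapter slot */
    double raw = per_lane_gt * lanes;      /* 64 Gb/s raw across the slot */
    double usable = raw * (1.0 - 0.125);   /* roughly 56 Gb/s after overhead */

    double two_fdr  = 2 * 56.0;            /* two FDR InfiniBand ports */
    double two_40ge = 2 * 40.0;            /* two 40GE ports */

    printf("x8 slot: %.0f Gb/s raw, ~%.0f Gb/s usable\n", raw, usable);
    printf("two FDR ports want %.0f Gb/s, two 40GE ports want %.0f Gb/s\n",
           two_fdr, two_40ge);
    return 0;
}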

Mellanox is happy to sell its ConnectX-3 silicon to anyone who wants to make network adapters, but is keen on selling its own adapters, of course. The ConnectX-3 chip is sampling now and will be generally available in a few months.
