Virtualized PCIe switch
NextIO revs products
Virtualized PCIe technology start-up NextIO has revved and re-marketed products which were first announced in May.
It announced its ioGateway family of ExpressConnect products - the N1400-PCM IBM blade server module and the N2800-ICA - back in May, and has now announced its Adaptive Connect products, comprising the, yes, you got it, N1400-PCM and N2800-ICA once again.
The technology virtualizes PCI Express slots on servers by directing them to a PCIe switch and thence, via various I/O adapters, out to Fibre Channel, Ethernet, InfiniBand, SAS storage, video, T1 lines, etc. Up to fourteen physical servers can connect their own PCIe buses to the N2800 (rear pictured with four 1U servers connected), which provides 14 outbound PCIe slots into which can be plugged 1 or 10Gig Ethernet NICs, 4 or 8Gbit/s Fibre Channel HBAs, InfiniBand or other adapters.

Each server, or virtual server inside it come to that, then has potential access to any of these outbound I/O channels, which it can share with the other 13 servers connected to the 3U rack-mounted N2800. The outbound I/O channels can be switched from server to server and shared between them via the nControl management software.
With it, virtual machines can be switched from server to server without worrying about whether the destination server has the same physical I/O cards as the source server. NextIO supports any blade, rack or other server, and any O/S or hypervisor; it is all PCIe grist to its mill.
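The sharing-and-switching model described above can be sketched in a few lines of Python. This is a minimal illustration of the concept only; the class and method names are our own assumptions, not NextIO's actual nControl API.

```python
# Illustrative sketch of a virtualized PCIe switch: a fixed pool of
# outbound adapter slots that can be assigned to, and moved between,
# connected servers in software. Names are hypothetical, not NextIO's.
class IOVirtualizationSwitch:
    """Models a switch with outbound PCIe slots shared among servers."""

    def __init__(self, num_slots=14):
        self.slots = {n: None for n in range(num_slots)}  # slot -> adapter type
        self.assignments = {}                             # slot -> server name

    def install_adapter(self, slot, adapter):
        # e.g. "10GbE NIC", "8Gbit/s FC HBA", "InfiniBand HCA"
        self.slots[slot] = adapter

    def assign(self, slot, server):
        """Point a slot's adapter at a server - switched, not recabled."""
        if self.slots[slot] is None:
            raise ValueError(f"no adapter installed in slot {slot}")
        self.assignments[slot] = server

    def adapters_for(self, server):
        return [self.slots[s] for s, srv in self.assignments.items()
                if srv == server]


switch = IOVirtualizationSwitch()
switch.install_adapter(0, "8Gbit/s FC HBA")
switch.install_adapter(1, "10GbE NIC")
switch.assign(0, "server-3")
switch.assign(1, "server-3")
switch.assign(0, "server-7")  # HBA follows a migrated workload in software
print(switch.adapters_for("server-7"))  # ['8Gbit/s FC HBA']
```

The point of the last line is the migration story: when a workload moves from server-3 to server-7, the management layer re-points the adapter rather than requiring matching cards in both boxes.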
The N1400 is for IBM BladeCenter servers and hooks up 14 IBM blade servers to the N2800. It runs the same ASIC as the N2800. This ASIC now uses second generation silicon and throughput has improved from 35Gbit/s to 5Tbit/s.
The ioGateway and ExpressConnect terms have vanished. Oddly, the N2800 can have an internal RAID array of up to 16 SAS or SATA disk drives, although NextIO leaves its use up to our imagination - DAS for blade servers? Perhaps the N2800's 3U is too large for the ASIC board alone and this is just a useful way to fill up the cavity.
NextIO technology effectively virtualizes all I/O that a server's PCIe slots can use. In this it competes with 'point I/O virtualization products' from, for example, Emulex, for Fibre Channel HBAs.
NextIO claims that its PCIe switch enables 'over 40 per cent lower power (draw), 35 per cent rack density (space) increase, and is more than 50 per cent cheaper than' competing products from 3Leaf, Cassatt, HP Virtual Connect, XSigo, Egenera, Scalent and others.
No pricing information is available for the two NextIO products, which are meant to be sold through OEM partners. NextIO is working with IBM, Broadcast Intl, and Neterion on integrated products, and is looking for resellers and system integrators to implement its hopefully OEM'd kit. The company was founded in 2003 by Kenton Murphy, CEO and chairman, and Chris Pettey, CTO, and is funded by Dell and some venture capitalists. Rick Marz, an LSI Logic man, sits on its board.
NextIO pulled in $10 million from investors in a Series B funding round in June 2005, and $18.8 million in February this year from the same investors. Total funding is thought to be around $40 million. The new money is intended to build a sales and marketing channel. If that doesn't deliver the revenue goods then additional funding may be sought in 2009 or 2010.
An HP executive, Shane Robison, was on NextIO's board in 2004 but now HP, with its Virtual Connect technology, is seen as a competitor.
Cisco and Brocade are pushing 10Gig Ethernet as the data centre converged network fabric, on which would be carried other protocols (except InfiniBand, obviously). NextIO's technology answers a need to share network protocol access through a common channel and virtualizes at the PCIe level. The Cisco/Brocade data centre Ethernet idea virtualizes at the Ethernet level and would take value away from NextIO's technology.