NextIO grabs $19m for virtualized PCI Express extravaganza
Dell licks lips
NextIO has been stealthing away down in Austin, Texas, flying under the radar of most server customers. You can, however, be sure that the major server vendors know about this small shop.
The start-up last week revealed a $19m funding round led by Adams Capital Management and Crescendo Ventures. That cash infusion brings NextIO's total funding to $40m and should help the firm grow from about 35 workers to 50 workers, along with aiding continued product development.
NextIO's play revolves around virtualizing networking connections, particularly on blade servers but also on rack-mounted servers. You slot a NextIO module into an existing server chassis or rack and can then share the PCI Express I/O flow between physical servers. In addition, NextIO lets customers connect just about any type of switch - Fibre Channel, Fibre Channel over Ethernet or iSCSI - to the chassis and then to the corresponding back-end storage.
Looking just at the virtualized I/O, NextIO claims a cost advantage over current set-ups. Rather than purchasing networking gear for each blade or server, a customer can buy the NextIO module and then trick the servers into thinking they have their own switch. According to the company's figures, this approach can reduce I/O hardware costs by "at least" 50 per cent.
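As a rough illustration of that economics claim, here is a toy cost comparison. All prices and the per-server share figure are hypothetical placeholders, not NextIO's numbers; the point is only that amortizing one module across a chassis can beat buying an adapter and switch port per server.

```python
# Illustrative cost comparison: per-server I/O gear vs. a shared I/O module.
# All dollar figures below are made-up placeholders, not NextIO's pricing.

def io_cost_per_server(servers, adapter_price, switch_port_price):
    """Traditional setup: every server gets its own adapter and switch port."""
    return servers * (adapter_price + switch_port_price)

def io_cost_shared(servers, module_price, per_server_share):
    """Shared setup: one module amortized across the chassis."""
    return module_price + servers * per_server_share

if __name__ == "__main__":
    servers = 16
    traditional = io_cost_per_server(servers, adapter_price=800, switch_port_price=400)
    shared = io_cost_shared(servers, module_price=6000, per_server_share=150)
    saving = 1 - shared / traditional
    print(f"traditional: ${traditional}, shared: ${shared}, saving: {saving:.0%}")
```

With these invented figures the shared module comes out around 56 per cent cheaper, in the same ballpark as the "at least 50 per cent" the company claims.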
The story grows more intriguing as you look out to the storage connections. NextIO's hardware fits into existing switch bays on server chassis and lets customers use standard switches rather than often customized, pricey gear for blade boxes.
Traditional blade server designs limit users to only one additional I/O technology beyond the ones integrated on the motherboard. This legacy design forces users to remove power from their blades and physically replace the I/O daughter cards to change or update their I/O technology. Additionally, if a new I/O technology is chosen for the blade, a fabric switch for the chassis I/O is typically purchased to extend the technology within the datacenter.
With NextIO's PCI Express shared switch module, blades only need to extend PCIe from their chipset through the chassis midplane to a module bay. In that bay, a NextIO shared I/O module presents multiple I/O technologies to one or more blade servers and lets users reassign or change I/O technologies without powering down blades. All shared I/O devices are compatible with existing operating systems and device drivers without modification.
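The reassignment idea can be sketched as a toy model: a shared module owns a pool of I/O devices and remaps them to blades at runtime, while each blade keeps seeing an ordinary PCIe function. The class, method and device names here are mine for illustration, not NextIO's API.

```python
# Toy model of a shared I/O module reassigning devices to running blades.
# Names and device types are illustrative only, not NextIO's actual interface.

class SharedIOModule:
    def __init__(self, devices):
        # devices: pool of I/O technologies the module can offer,
        # e.g. ["10GbE", "FibreChannel"]
        self.devices = set(devices)
        self.assignments = {}  # blade id -> currently assigned device type

    def assign(self, blade, device):
        """Map a device type to a blade; the blade stays powered on."""
        if device not in self.devices:
            raise ValueError(f"module has no {device} device")
        self.assignments[blade] = device

    def device_for(self, blade):
        return self.assignments.get(blade)

module = SharedIOModule(["10GbE", "FibreChannel"])
module.assign("blade-1", "10GbE")
module.assign("blade-1", "FibreChannel")  # switch fabrics, no power-down
print(module.device_for("blade-1"))       # FibreChannel
```

The contrast with the daughter-card design above is that the remapping happens in the module, not on the blade, which is why no power cycle is needed.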
Dell has invested in NextIO, and the vendor is tracking the technology closely, as is Fujitsu, which has demonstrated the NextIO technology. In addition, NextIO once counted HP's Shane Robison as a board member.
We should expect to see OEMs picking up the NextIO systems this year.
You can check out the NextIO gear here. ®
No free lunch
This is the usual story about I/O virtualization: one wire to rule them all. NextIO's claims are just like many others'. Which unified wire do they use? Looking at the management's background, I would say InfiniBand, since most of them come from the defunct Banderacom. However, doing the virtualization at the PCI Express level is a bad move, as its flow control is too tight to allow for contention. If they stretch that part of the spec, who knows what else they relax.
It's a crowded space, but I just don't get the PCI-E angle.
Seems like a natural fit for Sun blade chassis
Strikingly similar to what is already there on the Sun blade chassis. This should plug right into the back of the Sun blade chassis. Virtualized PCI Express I/O for the blades would be something Sun needed yesterday.