The dark horse in data centre I/O simplification
Three candidates, not two
Comment Data centre I/O needs simplifying, everyone is agreed.
There are reckoned to be two candidates: data centre Ethernet (DCE), which Cisco and Brocade support, and InfiniBand, pushed by Voltaire and Mellanox. Both DCE and InfiniBand virtualize I/O by having other network protocols run inside their fat pipes. But there is a third fat pipe that could be used, and it does not require every server to have an expensive fat pipe adapter. Think PCIe. Hold that thought and read on.
Data centres have servers, storage boxes and networking resources. Servers are getting virtualized, racked and bladed. The net effect is that each physical server in a virtualized rack can end up with two I/O adapters: an Ethernet NIC and a Fibre Channel HBA. The storage area network (SAN) fabric is expensive and complex, with tiers of switches and directors linking storage arrays to the servers that access them. By taking the Fibre Channel protocol and layering it on Ethernet (FCoE), the separate Fibre Channel fabric can be ditched and Ethernet used as a common transport.
But existing Ethernet drops packets and has indeterminate latencies, two things that will break Fibre Channel links. The data centre world is waiting for DCE, which will solve both problems. It will require servers to have FCoE-supporting NICs, known as converged network adapters (CNAs), to link to the DCE infrastructure.
The view from planet InfiniBand (IB) is that high performance computing already has a unified fabric in... InfiniBand. Use IB as the unified data centre fabric. IB is transitioning to 40Gbit/s, with Mellanox and Voltaire both launching switches running at that rate. Run Ethernet and Fibre Channel on top of an InfiniBand backbone.
Instead of looking at the data centre backbone network to simplify things, we could look at the data centre network edge. VirtenSys, a start-up, intends to offer virtualised I/O for servers based on PCIe, which could also radically simplify data centre networking by substantially reducing the number of network adapters involved.
VirtenSys says servers reach networks such as Ethernet, Fibre Channel and InfiniBand via adapters, and reach local storage using SAS or SATA protocols. In an x86 server, everything ultimately hangs off a PCIe bus and goes to the server's memory and CPU for processing. This bus can be extended outside the server. Why not extend each server's PCIe bus to a switch that has the adapters plugged into it instead of into each server? The switch would virtualize those adapters so that each server thinks it still has its own local adapter.
So, say, ten servers could share one NIC and one HBA, meaning nine NICs and nine HBAs could be thrown away, saving power and cost. Put disk drives into the switch and the servers could be diskless as well, with the switch providing shared storage to each server. Such I/O virtualization reduces the number of devices - adapters, HBAs, disks - that are needed and uses the remaining ones more efficiently.
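The sharing idea can be sketched in a few lines of Python. This is purely illustrative - VirtenSys has published no API, so the class and method names here are hypothetical - but it shows the shape of the trick: many virtual adapters, each looking local to its server, all backed by one physical device in the switch.

```python
# Hypothetical sketch of PCIe I/O virtualization: ten servers each see
# what appears to be a local NIC, but every virtual NIC maps onto one
# shared physical adapter plugged into the switch. Illustrative names only.

class PhysicalAdapter:
    def __init__(self, kind):
        self.kind = kind  # e.g. "NIC" or "HBA"

class VirtualAdapter:
    """What each server sees: apparently its own local adapter."""
    def __init__(self, backing):
        self.backing = backing  # the shared physical device

    @property
    def kind(self):
        return self.backing.kind

class IOVirtualizingSwitch:
    def __init__(self):
        self.physical = []  # adapters plugged into the switch

    def plug_in(self, adapter):
        self.physical.append(adapter)
        return adapter

    def attach_server(self, adapter):
        # Hand the server a virtual view of the shared adapter.
        return VirtualAdapter(adapter)

switch = IOVirtualizingSwitch()
shared_nic = switch.plug_in(PhysicalAdapter("NIC"))
servers = [switch.attach_server(shared_nic) for _ in range(10)]

# Ten servers, one physical NIC: nine adapters saved.
print(len(servers), len(switch.physical))  # 10 1
```

Each server still "sees" a NIC of its own, which is the point: the sharing is invisible above the PCIe layer.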
In effect there would be a PCIe cloud linking the servers and the virtualized I/O switch, with Ethernet, Fibre Channel and maybe InfiniBand links off it to other resources in the data centre, such as storage and networking. The PCIe cloud is not a data centre fabric; it is not setting out to become a backbone that replaces everything else. But it still radically simplifies data centre I/O at the server end. A rack of 20 or more servers would no longer need 20 or more separate NICs, HBAs and what have you. There would be a rack-level PCIe cloud, with every server in the rack sharing the NICs, HBAs and InfiniBand adapters plugged into it.
Ditto a rack of blades, which would have its own PCIe cloud. The data centre backbone could still be data centre class Ethernet or InfiniBand, but it would need far fewer adapters: one set per server rack or blade rack PCIe cloud.
The cost implications look pretty good. Say a data centre has five server racks with 20 NICs and 20 HBAs per rack. It could move to, let's be a bit cautious, two shared NICs and two shared HBAs per rack, and so throw away 90 NICs and 90 HBAs. Each server rack or blade rack could also share a pool of directly-attached storage (DAS), so the number of disk drives could be reduced as well.
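The back-of-envelope sum above works out as follows (the rack counts are the article's hypothetical figures, not vendor data):

```python
# Savings from shared, PCIe-virtualized adapters, per the worked example:
# five racks, 20 NICs and 20 HBAs each, cut to two shared of each per rack.
RACKS = 5
NICS_PER_RACK = 20      # one NIC per server today
HBAS_PER_RACK = 20      # one HBA per server today
SHARED_NICS = 2         # cautious: two shared NICs per rack
SHARED_HBAS = 2         # cautious: two shared HBAs per rack

nics_saved = RACKS * (NICS_PER_RACK - SHARED_NICS)  # 5 * 18
hbas_saved = RACKS * (HBAS_PER_RACK - SHARED_HBAS)  # 5 * 18

print(nics_saved, hbas_saved)  # 90 90
```

That is 180 adapters removed from a 100-server estate, before counting any disk drives consolidated into shared DAS pools.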
What that says to me is that a rack-full of server resource would cost much less and need less power too.
The suppliers that would not be keen on this are the network adapter vendors, who rely on NIC and HBA volumes to drive their revenues.
The VirtenSys website carries a couple of interesting white papers and a set of press releases detailing its progress since it started up in December 2005. But it has no track record, no product and no customers; a dark horse indeed.
Remind yourself of the big picture again: servers are getting virtualized to be more efficient and save cost. Networking is getting virtualized to become more efficient and save cost. Storage is becoming virtualized to... you can fill in the blanks yourself.
So the concept of extending the PCIe bus outside servers into a kind of cloud, so that server I/O can be virtualized to become more efficient and save cost, looks like an idea whose time might be about to come.®