FCoE: Divergence vs convergence
Comment FCoE seems to be a harbinger of network divergence rather than convergence. After discussion with QLogic and hearing about 16Gbit/s Fibre Channel and InfiniBand as well as FCoE, ideas about an all-Ethernet world seem as unreal as the concept of a flat earth.
This train of thought started when talking with Scott Genereux, QLogic's SVP for worldwide sales and marketing. It's not what he said but my take on it, and it began when Genereux's EMEA market director sidekick Henrik Hansen said QLogic was looking at developing 16Gbit/s Fibre Channel products. What? Doesn't sending Fibre Channel over Ethernet (FCoE) and 10Gbit/s, 40Gbit/s and 100Gbit/s Ethernet negate that? Isn't Fibre Channel (FC) development stymied because all FC traffic will transition to Ethernet?
Well, no, not as it happens, because all FC traffic and FC boxes won't transition to Ethernet. We should be thinking FCaE - Fibre Channel and Ethernet, and not FCoE.
FC SAN fabric users have no exit route into Ethernet for their FC fabric switches and directors and in-fabric SAN management functions. The Ethernet switch vendors, like Blade Network Technologies, aren't going to take on SAN storage management functions. Charles Ferland, BNT's EMEA VP, said that BNT did not need an FC stack for its switches. All it needs to do with FCoE frames coming from server or storage FCoE endpoints is route the frames correctly, meaning a look at the addressing information but no more.
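Ferland's point can be illustrated with a minimal sketch. All names and values here are illustrative, not anything BNT has published: an Ethernet switch can recognise and forward FCoE traffic from the Ethernet header alone, using the FCoE EtherType (0x8906, per the T11 FC-BB-5 encapsulation) and the destination MAC, with no FC protocol stack involved.

```python
import struct

ETHERTYPE_FCOE = 0x8906  # FCoE EtherType assigned by T11 FC-BB-5

def classify_frame(frame: bytes) -> dict:
    """Read just the Ethernet header: destination MAC, source MAC,
    EtherType. Forwarding needs nothing deeper than this."""
    if len(frame) < 14:
        raise ValueError("runt frame")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {
        "dst_mac": dst.hex(":"),
        "src_mac": src.hex(":"),
        "is_fcoe": ethertype == ETHERTYPE_FCOE,
    }

def forward(frame: bytes, mac_table: dict) -> str:
    """Pick an egress port from the destination MAC; FCoE or not,
    the lookup is identical. (Hypothetical switch logic.)"""
    hdr = classify_frame(frame)
    return mac_table.get(hdr["dst_mac"], "flood")
```

The point of the sketch is that the FC header inside the frame is never inspected; that is why a pure Ethernet switch vendor can carry FCoE without taking on SAN management functions.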
Genereux said QLogic wasn't going to put an FC stack in its Ethernet switches. There is no need to put an FC stack in Ethernet switches unless they are going to be an FCoE endpoint and carry out some kind of storage processing. Neither BNT nor QLogic sees its switches doing that. Cisco's Nexus routes FCoE traffic over FC cables to an MDS 9000 FC box. Brocade and Cisco have the FC switch and director market more or less sewn up and they aren't announcing a migration of their SAN storage management functionality to Ethernet equivalents of their FC boxes, although, longer term, it has to be on Brocade's roadmap with the DCX.
Genereux and Hansen said that server adapters would be where Ethernet convergence would happen. The FCoE market is developing much faster than iSCSI and all the major server and storage vendors will have FCoE interfaces announced by the end of the year. OK, so server Ethernet NICs and FC host bus adapters (HBAs) could turn into a single CNA (Converged Network Adapter) and send out FC messages on Ethernet. Where to?
They go to an FC-capable device, either a storage product with a native FC interface or an FCoE switch, like QLogic's product or Brocade's 8000, a top-of-rack switch which receives general Ethernet traffic from servers and splits off the FCoE frames to send them out through FC ports.
There's no end-to-end convergence here, merely a convergence onto Ethernet at the server edge of the network. And even that won't be universal. Hansen said: "There is a market for converged networks and it will be a big one. (But) converged networking is not an answer to all... Our InfiniBand switch is one of our fastest-growing businesses.... Fibre Channel is not going away; there is so much legacy. We're continuing to develop Fibre Channel. There's lots of discussion around 16Gbit/s Fibre Channel. We think the OEMs are asking for it... Will Ethernet replace InfiniBand? People using InfiniBand believe in it. Converged networking is not an answer to everyone."
You get the picture. These guys are looking at the continuation of networking zones with, so far, minor consolidation of some FC storage networking at the server edge onto Ethernet. Is QLogic positioning FCoE as an FC SAN extension technology? It seems that way.
If it ain't broke...
Other people suggest that customer organisational boundaries will also inhibit any end-to-end convergence onto Ethernet. Will the FC storage networking guys smoothly move over to lossless and low-latency Ethernet even if end-to-end FCoE products are there? Why should they? Ethernet, particularly the coming lossless and low-latency version, is new and untried. Why fix something that's not broken? What is going to make separate networking and storage organisational units work together?
Another question concerns native FCoE interfaces on storage arrays. If FC SAN storage management functions are not migrating to Ethernet platforms then they stay on FC platforms, which do I/O over FC cables to and from storage arrays with FC ports. So what is the point of array vendors adding FCoE ports? Are we looking at the possibility of direct FCoE communication between CNA-equipped servers and FCoE-equipped storage arrays, simple FCoE SANs, conceptually similar to iSCSI SANs? Do we really need another block storage access method?
Where's the convergence here, with block storage access protocols splintering into iSCSI, FCoE and FC, as well as InfiniBand storage access in supercomputing and high-performance computing (HPC) applications?
Effectively, FCoE convergence means just two things. First, and realistically, server-edge convergence, with the cost advantage limited to that area: a total cost of ownership comparison between NICs plus HBAs on the one hand and CNAs on the other, with no other diminution in your FC fabric estate. The second, and merely possible, thing is a direct FCoE link between servers and FCoE storage arrays, with no equivalent of FC fabric SAN management functionality.
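That server-edge TCO comparison is back-of-envelope arithmetic. A rough sketch of the sum follows; every price, port count and cable figure below is an invented placeholder, not a vendor number.

```python
def edge_tco(servers: int, nic_cost: float, hba_cost: float,
             cna_cost: float, cable_cost: float = 50.0) -> dict:
    """Compare dual NICs plus dual HBAs per server (four adapters,
    four cables) against dual CNAs (two adapters, two cables).
    All input figures are hypothetical placeholders."""
    # Traditional server edge: separate LAN and FC SAN connections.
    legacy = servers * (2 * nic_cost + 2 * hba_cost + 4 * cable_cost)
    # Converged server edge: CNAs carry both LAN and FCoE traffic.
    converged = servers * (2 * cna_cost + 2 * cable_cost)
    return {"legacy": legacy, "converged": converged,
            "saving": legacy - converged}
```

Whether the saving is positive at all depends entirely on where CNA prices land relative to the NIC-plus-HBA pair; the sketch only shows that the comparison stops at the adapters and cabling, since the FC fabric behind them is unchanged.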
This could come if IBM adds FCoE ports to its SVC (SAN Volume Controller) so that it can talk FCoE to accessing servers and to the storage arrays it manages. Another possible alternative would be for HDS to add FCoE interfaces to its USP-V and USP-VM controllers, which virtualise both HDS and other vendors' storage arrays.
If customers have to maintain a more complex Ethernet, one doing general LAN access, WAN access, iSCSI storage and FCoE storage, possibly server clustering, as well as their existing FC infrastructure then where is the simplicity that some FCoE adherents say is coming? FCoE means, for the next few years, minimal convergence (and that limited to the server edge) and increased complexity. Is that a good deal? You tell me. ®