FCoE: Divergence vs convergence
Comment FCoE seems to be a harbinger of network divergence rather than convergence. After discussion with QLogic and hearing about 16Gbit/s Fibre Channel and InfiniBand as well as FCoE, ideas about an all-Ethernet world seem as unreal as the concept of a flat earth.
This train of thought started when talking with Scott Genereux, QLogic's senior VP for worldwide sales and marketing. It's not what he said but my take on it, and it began when Genereux's EMEA marketing director sidekick Henrik Hansen said QLogic was looking at developing 16Gbit/s Fibre Channel products. What? Doesn't sending Fibre Channel over Ethernet (FCoE) across 10Gbit/s, 40Gbit/s and 100Gbit/s Ethernet negate that? Isn't Fibre Channel (FC) development stymied because all FC traffic will transition to Ethernet?
Well, no, not as it happens, because all FC traffic and FC boxes won't transition to Ethernet. We should be thinking FCaE - Fibre Channel and Ethernet, and not FCoE.
FC SAN fabric users have no exit route into Ethernet for their FC fabric switches and directors and in-fabric SAN management functions. The Ethernet switch vendors, like Blade Network Technologies, aren't going to take on SAN storage management functions. Charles Ferland, BNT's EMEA VP, said that BNT did not need an FC stack for its switches. All it needs to do with FCoE frames coming from server or storage FCoE endpoints is route the frames correctly, meaning a look at the addressing information but no more.
Genereux said QLogic wasn't going to put an FC stack in its Ethernet switches. There is no need for an FC stack in an Ethernet switch unless it is going to be an FCoE endpoint and carry out some kind of storage processing. Neither BNT nor QLogic sees its switches doing that. Cisco's Nexus routes FCoE traffic over FC cables to an MDS 9000 FC box. Brocade and Cisco have the FC switch and director market more or less sewn up, and they aren't announcing a migration of their SAN storage management functionality to Ethernet equivalents of their FC boxes, although, longer term, it has to be on Brocade's roadmap with the DCX.
Genereux and Hansen said that server adapters would be where Ethernet convergence would happen. The FCoE market is developing much faster than the iSCSI market did, and all the major server and storage vendors will have FCoE interfaces announced by the end of the year. OK, so server Ethernet NICs and FC host bus adapters (HBAs) could turn into a single CNA (Converged Network Adapter) and send out FC messages on Ethernet. Where to?
They go to an FC-capable device: either a storage product with a native FC interface or an FCoE switch, like QLogic's product or Brocade's 8000, a top-of-rack switch which receives general Ethernet traffic from servers and splits off the FCoE frames to send them out through FC ports.
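That top-of-rack split needs nothing more than a look at the Ethernet header. A minimal sketch of the idea, assuming a simplified frame layout (the 0x8906 FCoE and 0x8914 FIP Ethertypes are the real IEEE assignments; everything else here is illustrative, and a real switch of course does this in silicon, not Python):

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned Ethertype for FCoE frames
FIP_ETHERTYPE = 0x8914   # FCoE Initialization Protocol, also FC-side traffic

def split_traffic(frames):
    """Sort a mixed server-facing stream into FC-side and LAN-side
    queues by reading only the Ethernet header: no FC stack needed."""
    fc_side, lan_side = [], []
    for frame in frames:
        # Bytes 0-5: destination MAC, 6-11: source MAC, 12-13: Ethertype
        (ethertype,) = struct.unpack("!H", frame[12:14])
        if ethertype in (FCOE_ETHERTYPE, FIP_ETHERTYPE):
            fc_side.append(frame)
        else:
            lan_side.append(frame)
    return fc_side, lan_side
```

This is the sense in which BNT and QLogic say they don't need an FC stack: the decision is made on addressing information alone, and the encapsulated FC frame is never interpreted.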
There's no end-to-end convergence here, merely a convergence onto Ethernet at the server edge of the network. And even that won't be universal. Hansen said: "There is a market for converged networks and it will be a big one. (But) converged networking is not an answer to all... Our InfiniBand switch is one of our fastest-growing businesses.... Fibre Channel is not going away; there is so much legacy. We're continuing to develop Fibre Channel. There's lots of discussion around 16Gbit/s Fibre Channel. We think the OEMs are asking for it... Will Ethernet replace InfiniBand? People using InfiniBand believe in it. Converged networking is not an answer to everyone."
You get the picture. These guys are looking at the continuation of networking zones with, so far, minor consolidation of some FC storage networking at the server edge onto Ethernet. Is QLogic positioning FCoE as an FC SAN extension technology? It seems that way.
Brocade's Take -- Hear Here! (or something like that)
Completely agreed on more than a couple of points. We have a couple of pieces here illustrating the same. One is a video of CTO Dave Stevens discussing FCoE reality from the (financial) analyst day. It's the featured video on our YouTube channel here: http://www.youtube.com/brocadevideo.
The other is a blog post from SVP of Products and Offerings Marc Randall, posted just today and addressing the organizational issues brought up on page two here. It's available on our Wingspan blog here: http://community.brocade.com/home/community/brocadeblogs/wingspan/blog/2009/06/25/what-you-do-speaks-so-loudly . You can tell from the title, "What you do speaks so loudly…", where this is going.
Ultimately, we are supportive (we have launched FCoE products and will continue to do so) but pragmatic about "convergence", looking to and developing for future needs while supporting the current and near-term.
Thanks for helping your readers keep their feet on the ground and their productivity in focus with this piece.
Brocade Public Relations/Social Media
Server Ports = Most Ports on Any Network
Whether SAN or LAN, most of the ports in any network are access ports. In a SAN, hundreds or thousands of server ports are multiplexed down to a handful of storage array ports; in between might be some number of inter-switch link ports.
So if you want to consolidate a network with the biggest ROI, consolidate at the edge first. This is why Ethernet has been so successful. Four 100BT Fast Ethernet ports were consolidated to one GigE port. Now servers with four GigE ports and two 4Gb FC ports can consolidate down to two 10GigE FCoE ports. That is real savings, not in the CNAs, but in the access layer switch ports.
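The arithmetic behind that claim can be sketched roughly as follows, using the per-server figures above (nominal line rates, ignoring encoding overhead; the server count is a hypothetical for illustration):

```python
# Rough port-consolidation arithmetic for one server, using the
# figures above (nominal line rates; encoding overhead ignored).
legacy_ports = 4 + 2                  # 4x GigE NICs + 2x 4Gb FC HBAs
legacy_bw = 4 * 1 + 2 * 4             # 12 Gbit/s nominal

converged_ports = 2                   # 2x 10GigE FCoE (CNA) ports
converged_bw = 2 * 10                 # 20 Gbit/s nominal

# The saving compounds at the access layer: every server port removed
# is also an access-switch port (and cable) removed.
servers = 100                         # hypothetical rack-row of servers
saved_switch_ports = servers * (legacy_ports - converged_ports)
print(saved_switch_ports)             # 400 fewer access-layer ports
```

Which is the point being made: the payoff shows up not in the adapters but in the access-layer switch ports and cabling they no longer consume.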
As for CNAs, all of the major vendors who make LAN on Motherboard silicon are looking at 10GigE LOMs which support FCoE. FCoE will become a feature of the NIC, much like other hardware-based protocol offload is today. Then customers will be able to make a simple choice based on protocol: to leverage existing FC storage, along with existing FC storage drivers and multipathing software, use FCoE; to access file storage, use NFS; to buy new iSCSI-based storage (or leverage iSCSI ports on existing storage), choose iSCSI.
Too many people think FCoE is a new form of iSCSI which will require end-to-end iSCSI-style storage arrays, dedicated networks and server adapters. FCoE is not about that. It is not about introducing new protocols, but about leveraging a single wire type. The very fact that it does not replace everything (or force replacement of everything) is what makes it a good solution.
Users see through the hype
You make some great points. I think the confusion stems from the notion that FCoE is a one-size-fits-all technology. It is not. There are plenty of users that like Fibre Channel and have no need or desire to move storage to Ethernet. These users will continue to use Fibre Channel for a long time. There are also plenty of users that do not use Fibre Channel (only 20% of servers are SAN-attached), but instead use NAS or iSCSI. So why would they switch to FCoE if they have no storage management schemes to leverage and are already “converged” onto Ethernet? They probably won't. Also, simply upgrading Ethernet to 10G by itself removes much of the cabling complexity and operating cost that FCoE claims to address (provided it is power-efficient and low-cost).

That said, many users will adopt FCoE, but they will be careful to make sure all the new pieces (lossless Ethernet, driver stacks ported to new controller architectures, switch management ported to Ethernet switches) work before they deploy it for storage, much less converge all their server traffic on it. I believe the likely scenario is that users who deploy FCoE will do so because they are already using Fibre Channel and want to migrate their storage networks to an all-Ethernet scheme. Convergence at the server is something to consider down the road, after the kinks have been worked out.