Limited end-to-end FCoE
Hors d'oeuvre only
VMware has certified a Cisco and NetApp end-to-end FCoE connection scheme, but this doesn't mean a new dawn: core switches still don't support passing on FCoE messages, only top-of-rack switches (TORS) do.
FCoE means Fibre Channel over Ethernet and involves sticking Fibre Channel messages inside Ethernet packets so that server-SAN storage Fibre Channel requests and data can be carried over Ethernet links instead of specialised Fibre Channel fabrics.
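That layering can be sketched in a few lines of Python. This is a rough illustration of the framing only, not a byte-accurate FC-BB-5 encoder: the EtherType (0x8906) is the real IEEE-assigned value, but the SOF/EOF byte values and the reserved-field layout here are placeholders.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def encapsulate(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a Fibre Channel frame in an Ethernet frame (simplified sketch).

    Real FCoE (FC-BB-5) framing carries version/reserved bits and specific
    SOF/EOF code points; this only shows the layering idea.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + b"\x36"   # reserved/version bytes + SOF (illustrative value)
    trailer = b"\x41" + bytes(3)        # EOF (illustrative value) + reserved padding
    return eth_header + fcoe_header + fc_frame + trailer

frame = encapsulate(b"FC-frame-payload", b"\x00" * 6, b"\xff" * 6)
assert frame[12:14] == b"\x89\x06"  # the EtherType marks the frame as FCoE
```

The point is that the FC frame rides inside an ordinary Ethernet frame, so any lossless Ethernet link can carry it — which is why switch support, not cabling, is the sticking point.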
As soon as the FCoE packets have to be separated from the mass of other Ethernet packets by a switch and sent on to storage arrays across more than one hop, the end-to-end FCoE connection breaks down. How have Cisco and NetApp got around this showstopper?
The link components are a server fitted with a CNA (Converged Network Adapter) which offers both standard Ethernet network interface card (NIC) functions and the FCoE functions previously provided by a host bus adapter connecting the server to a Fibre Channel cable.
The CNA is connected with an Ethernet cable to a Cisco Nexus 5000 TORS and then, still by Ethernet, to a NetApp FAS storage array with an FCoE target function, using QLogic silicon as its front-end. NetApp has previously announced a simplified end-to-end FCoE scheme by connecting Brocade and QLogic CNAs directly to its FAS arrays.
At the time it said it would OEM Brocade's 8000 TORS and a 10-24 FCoE blade that fits into the Brocade DCX backbone switch or DCX-4S switch. However, onward Fibre Channel array connectivity then uses physical Fibre Channel and not Ethernet. Multi-hop FCoE functionality depends, according to Brocade, on TRILL (Transparent Interconnection of Lots of Links), a developing IETF standard for layer 2 multi-pathing. My understanding is that TRILL adds routability to Ethernet.
Cisco announced FabricPath in June as a feature that adds TRILL to its data centre operating system, NX-OS, bringing layer 3 routing benefits to layer 2 switched networks. It also announced a Nexus 7000 F-series I/O module supporting 1GigE and 10GigE links. Cisco said the module supports the Data Centre Bridging (DCB) and TRILL standards, "with Fibre Channel over Ethernet (FCoE) to be enabled in the near future through a software upgrade". So the Nexus 7000 will be able to support multi-hop FCoE when it gets the software upgrade, but can't do so yet.
A third part of this Cisco FCoE announcement was the FabricPath Switching System (FSS), an integrated hardware and software offering for building very large, scalable domains using FabricPath, based on the NX-OS FabricPath feature and FabricPath hardware such as the Nexus 7000.
However, it is the Nexus 5000 TORS and not the Nexus 7000 core switch which is included in the VMware certified end-to-end FCoE set up. The Nexus 5000 supports DCB but not specifically TRILL.
DCB involves priority flow control, enhanced transmission selection and the DCB exchange (DCBX) protocol. Because of its DCB support the Nexus 5000 can connect via FCoE to storage targets. But there is no mention of FabricPath in the Cisco description of the Nexus 5000, so this limited end-to-end FCoE certification by VMware is just an hors d'oeuvre; we're still waiting for the main event: fabric switch support for multi-hop FCoE. ®
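Priority flow control is the DCB piece that makes FCoE viable: it pauses only the lossless priority class carrying storage traffic while ordinary Ethernet keeps flowing on the same wire. A sketch of an 802.1Qbb PFC pause frame shows the idea — the EtherType (0x8808), opcode (0x0101) and destination MAC are the real values, but the source MAC and the choice of priority 3 for FCoE are assumptions for illustration.

```python
import struct

def pfc_frame(pause_priorities: set, quanta: int = 0xFFFF) -> bytes:
    """Build a Priority Flow Control (802.1Qbb) pause frame — a sketch.

    PFC pauses only the listed priority classes (e.g. the lossless FCoE
    class) while the other seven priorities keep flowing.
    """
    dst = bytes.fromhex("0180c2000001")             # MAC control multicast address
    src = bytes(6)                                  # placeholder source MAC
    header = dst + src + struct.pack("!H", 0x8808)  # MAC Control EtherType
    opcode = struct.pack("!H", 0x0101)              # PFC opcode
    enable = struct.pack("!H", sum(1 << p for p in pause_priorities))
    times = b"".join(
        struct.pack("!H", quanta if p in pause_priorities else 0)
        for p in range(8)
    )
    return header + opcode + enable + times

# Pause only priority 3 (commonly assigned to FCoE traffic):
f = pfc_frame({3})
```

The per-priority pause is what turns plain Ethernet into the "lossless" transport Fibre Channel traffic needs.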
You don't need TRILL for multiple links running in parallel
I mailed the Cisco person and said: "People are telling me spanning tree is not enough. You need multiple links for example." Here is his reply:
"Whoever those "people" are, they are missing a fundamental aspect of standards-based FCoE as defined in FC-BB-5 today, namely:
A VLAN carrying FCoE traffic does NOT run Spanning Tree (STP). It runs FSPF. It can handle multiple links in parallel today, just the same way that FC does.
What those people you're talking to may be referring to is the way that NPV works, where a 'link' is chosen (mostly in a 'static' manner). But that does not mean all vendors operate in that way. It also does not mean that you cannot have an FCoE VLAN with NPV bundled on the same logical PortChannel bundle (N physical links in a single logical bundle), effectively using N links all active.
What it likely shows is that there simply aren't that many "switch vendors" that have a FC stack and can operate as an E_Port in the same way that you have E_Ports and ISLs in FC.
Brocade and Cisco, effectively being the last 2 FC switch vendors left standing have the luxury there (I guess QLogic does to some extent too). But it makes life hard for every other switch vendor with a vision of "Unified I/O" as having a fully-functional and field-hardened/proven/qualified/certified stack is by no means an easy feat.
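The PortChannel bundling the reply describes — N physical links in one logical bundle, all active — boils down to hashing each flow onto one member link so that frames of a given Fibre Channel exchange stay in order. A toy sketch, in which the hashed fields (S_ID, D_ID, OX_ID) are plausible but the hash function and link count are assumptions:

```python
import zlib

def pick_link(src_id: int, dst_id: int, ox_id: int, n_links: int) -> int:
    """Hash flow-identifying fields onto one member link of the bundle.

    Keeping all frames of one FC exchange (same S_ID/D_ID/OX_ID) on the
    same physical link preserves in-order delivery while every link in
    the PortChannel stays active.
    """
    key = f"{src_id:06x}{dst_id:06x}{ox_id:04x}".encode()
    return zlib.crc32(key) % n_links

# Frames of the same exchange always land on the same link:
link = pick_link(0x010203, 0x040506, 0x1234, 4)
assert link == pick_link(0x010203, 0x040506, 0x1234, 4)
```

Different exchanges hash to different links, which is how all N links carry traffic at once without needing TRILL for this particular case.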