Directing Fibre Channel storage traffic over Ethernet
The benefits of convergence
Fibre Channel over Ethernet (FCoE) has a hard act to follow. Fibre Channel storage fabrics accomplish an amazing feat, making shared storage arrays equivalent to direct-attached disk drives to the many servers accessing them.
When servers access data from directly attached disks, the data arrives fast and none is lost en route. The Fibre Channel fabric replicates these characteristics, with packets of data arriving within dependable time periods and no loss of packets.
The result is storage area networks that enable a shared pool of thousands of disk drives in many arrays to provide storage for hundreds of mission-critical servers.
By contrast, Ethernet LAN and IP WAN networking are less reliable, with variable, or "indeterminate", delivery times and packets that can get lost and must be retransmitted. Neither is an acceptable substitute for physical Fibre Channel.
It is relatively easy to encode Fibre Channel frames within Ethernet packets; the hard part is making an indeterminate Ethernet behave in the same determinate and loss-free way as Fibre Channel.
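As a rough illustration of the easy part, the sketch below wraps a Fibre Channel frame in an Ethernet frame tagged with the IEEE-assigned FCoE EtherType, 0x8906. It is deliberately simplified: real FCoE framing, defined in the T11 FC-BB-5 standard, also carries a version field, reserved bytes and start-of-frame/end-of-frame delimiters, and the MAC addresses here are placeholders.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE traffic

def encapsulate_fc_frame(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet frame.

    Simplified sketch: real FCoE (FC-BB-5) adds a version field,
    reserved bytes and SOF/EOF delimiters around the FC frame.
    """
    # Standard 14-byte Ethernet header: destination MAC, source MAC, EtherType
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    frame = eth_header + fc_frame
    # Pad to the 60-byte minimum Ethernet frame size (before the FCS)
    if len(frame) < 60:
        frame += b"\x00" * (60 - len(frame))
    return frame

# Placeholder addresses and a dummy 36-byte FC frame, for illustration only;
# fabric-assigned FCoE MAC addresses commonly use the 0E-FC-00 prefix
dst = bytes.fromhex("0efc00000001")
src = bytes.fromhex("020000000002")
frame = encapsulate_fc_frame(b"\x22" * 36, src, dst)
```

The point of the sketch is how little the encapsulation itself involves; everything difficult about FCoE lives in the behaviour of the network underneath.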
Data Centre Ethernet
That is being accomplished by devising a new form of Ethernet, Data Centre Ethernet (DCE), sometimes also called Converged Enhanced Ethernet, to carry the FCoE traffic.
There has been a concerted effort by Ethernet and Fibre Channel vendors such as Brocade, Cisco, Juniper and others, as well as the Ethernet standards bodies, to develop DCE that can be used for FCoE traffic.
Vendors have also been busy engineering FCoE functionality into their Ethernet switching, routing and edge access devices, so that FCoE traffic can traverse a multi-hop DCE network to storage arrays capable of receiving and transmitting FCoE commands and data traffic.
Data Centre Bridging is the main Ethernet capability being standardised for FCoE. This transforms Ethernet into the loss-free and time-keeping paragon needed by Fibre Channel.
An IEEE 802.1 work group is busy working up this standard. An Incits (International Committee for Information Technology Standards) T11 committee is developing the FCoE protocol and the FCoE Initialisation Protocol.
The Internet Engineering Task Force is working on standards that flatten Ethernet to enable lower-level protocols to do more and thus simplify the network infrastructure.
For example, it is working on a protocol called Trill (transparent interconnection of lots of links) that can enable multi-pathing and multi-hop routing at layer 2 of Ethernet.
With these standards Ethernet will be able to carry both LAN traffic and high-end storage networking traffic at the same time, providing convergence benefits such as lower costs and simplified management.
From a server point of view the first thing needed is an FCoE stack. This is a piece of code that receives data or a request to be sent to an FCoE target device, such as a storage array, and wraps that up inside an Ethernet packet.
This "initiator" can be software running on the server's own processor, or it can be the responsibility of a converged network adapter, which adds FCoE functionality to a standard Ethernet network interface card.
Intel has released an openly available software FCoE initiator, which means that servers with spare processing power can send and receive FCoE traffic using standard Ethernet interface cards.
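Intel's contribution became the Open-FCoE project, packaged on Linux as fcoe-utils. A minimal bring-up might look like the following sketch, where eth2 is a hypothetical interface name and exact commands, paths and service names vary by distribution; a DCB-capable NIC and switch are assumed.

```shell
# Load the software FCoE kernel module that ships with the Open-FCoE initiator
modprobe fcoe

# fcoe-utils keeps one configuration file per NIC; eth2 is an example name
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2

# Start the FCoE service and create an FCoE instance on the interface
service fcoe start
fcoeadm --create eth2

# List local FCoE interfaces and any discovered storage targets
fcoeadm --interface
fcoeadm --target
```

After discovery, the remote array's LUNs appear to the server as ordinary SCSI block devices, just as they would over a native Fibre Channel host bus adapter.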
Where the maximum server processing resource is needed for applications or virtual machines, then a converged network adapter ensures FCoE processing is offloaded from the server's CPUs.
Initially, server host bus adapter suppliers such as Emulex, QLogic and Brocade designed converged network adapters to graft FCoE functionality on top of basic Ethernet interface cards. Storage arrays began getting FCoE interfaces, with NetApp being one of the pioneers.
At this point servers could send and receive FCoE messages and FCoE-capable storage arrays could receive and respond to FCoE requests, but the intervening Ethernet cables and switches could not transport and handle Ethernet packets containing FCoE frames.
The next step was to provide FCoE functionality in Ethernet edge switches, and these started appearing in April 2009.
For example, Brocade announced a top-of-rack Ethernet switch that could receive FCoE traffic from connected servers.
However, this was single-hop FCoE – the switches could not forward FCoE traffic onwards. The traffic had to progress across the rest of the network to the storage arrays over Fibre Channel cables.
Multi-hop, end-to-end FCoE became possible once Brocade and Cisco introduced core Ethernet switches that could receive and send FCoE messages in both directions: to and from servers on the one hand, via edge switches, and to and from storage arrays.
We are now in a position where customers with networked storage arrays can carry both iSCSI and Fibre Channel traffic, as well as LAN traffic, over Ethernet.
But although such convergence is a real possibility, there is no rush to implement it.
Data centre admin staff may be divided between LAN and Fibre Channel administrators, and it will take time for each to become adept in the other's skills.
More efficient networking management will be a benefit but it could be a while before it trickles down to staff.
But the main reason why there is little rush to FCoE is that Fibre Channel networks just work.
Nor is speed pushing anyone towards Ethernet. 8Gbit/s Fibre Channel switches and host bus adapters are here and usable. They can replace maxed-out 4Gbit/s Fibre Channel host bus adapters and switches and make a move to 10GbitE unnecessary on speed grounds.
Also, 16Gbit/s Fibre Channel product is under development, giving Fibre Channel users plenty of network speed headroom.
In the longer term, 40GbitE and then 100GbitE will provide speed benefits and may encourage FCoE adoption, particularly if there is a need for Fibre Channel networking faster than 16Gbit/s and 32Gbit/s is not forthcoming.
It is unlikely that iSCSI users will migrate to FCoE, as they can obtain the advantages of a loss-free and deterministic Ethernet by using the same enhanced Ethernet equipment, without having to replace iSCSI adapters and their familiar iSCSI protocol skill set with the FCoE equivalents.
It is possible that Fibre Channel users will implement an FCoE pilot project when there is a need to bring new Fibre Channel-using servers online.
Then they can compare and contrast real-world implementations of FCoE and Fibre Channel in their own data centres. They can expand FCoE use at their own pace if they see the benefit – and unless Ethernet convergence benefits have been oversold, they will. ®