The Ethernet traffic mix-up
Ethernet communications convergence conundrum
Expert Clinic: Converging storage traffic, with its restrictive profile, onto general, messy Ethernet LAN traffic is now a distinct possibility. What are the underlying problems, and how are they being countered?
Assuming they have been dealt with, how should you think about convergence, and what general principles should you keep in mind as you start a convergence project?
Three experts tell you how they think it is. First up is network architect Greg Ferro.
Greg Ferro - Network Architect and Senior Engineer/Designer.
When mixing storage and data traffic in a common network fabric, it's important to understand that storage networking was designed to create a dedicated channel from server to array, whereas data networking is dynamic.
A SCSI connection expects dedicated bandwidth and zero contention, like a cable between the motherboard and the hard drive. When SCSI was moved into a network protocol, the standards didn't change this expectation. The standard used the concept of a "channel" and specified fibre-optic cable for reliability, so the protocol was called Fibre Channel. Fibre Channel builds a network "fabric" in which each switch holds state about every device connected to the network, and each host "logs in" and notifies the fabric of its configuration and status. Fibre Channel uses credit-based flow control: a sender transmits a frame only when the receiver has signalled that it has buffer space to accept it, so frames are never dropped for lack of buffers.
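The credit mechanism can be illustrated with a toy model (class and method names are hypothetical; real Fibre Channel negotiates buffer-to-buffer credits at fabric login, which this sketch does not attempt to reproduce):

```python
class CreditLink:
    """Toy model of credit-based, lossless flow control.

    The sender starts with the credits the receiver advertised and
    spends one per frame; the receiver returns a credit each time it
    drains a buffer. The sender can therefore never overrun the
    receiver, which is why no frame is ever dropped - it is only
    delayed until a credit comes back.
    """

    def __init__(self, advertised_credits):
        self.credits = advertised_credits
        self.in_flight = []

    def send(self, frame):
        if self.credits == 0:
            return False              # must wait: no buffer at the far end
        self.credits -= 1
        self.in_flight.append(frame)
        return True

    def receiver_drains_one(self):
        self.in_flight.pop(0)
        self.credits += 1             # credit returned to the sender


link = CreditLink(advertised_credits=2)
assert link.send("f1") and link.send("f2")
assert not link.send("f3")            # blocked, not dropped
link.receiver_drains_one()
assert link.send("f3")                # proceeds once a credit returns
```

Contrast this with plain Ethernet, where a congested switch simply discards frames and leaves recovery to higher layers.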
A data network has no knowledge of the hosts or of the path that the data will take; paths through the system are dynamically determined, and the protocols that control forwarding are loosely coupled and can change without the knowledge of the host. The first Ethernet frame is received by a switch and then flooded to all ports in an attempt to discover where the end host is located.
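That flood-then-learn behaviour can be sketched with a minimal learning-switch model (all names are hypothetical, and real switches age out table entries, which is omitted here):

```python
class LearningSwitch:
    """Minimal model of an Ethernet switch's flood-and-learn forwarding."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # source MAC -> port it was last seen on

    def forward(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port            # learn the sender's location
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]         # known host: one port
        return [p for p in self.ports if p != in_port]  # unknown: flood


sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.forward("aa", "bb", in_port=1))  # "bb" unknown -> floods to [2, 3, 4]
sw.forward("bb", "aa", in_port=2)         # switch learns "bb" is on port 2
print(sw.forward("aa", "bb", in_port=1))  # now forwards to [2] only
```

The contrast with the Fibre Channel fabric, where every switch already knows every logged-in device, is the heart of the convergence problem.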
How, then, does a data network handle storage traffic and data at the same time? By creating conditions in which storage traffic is prioritised across the Ethernet fabric so that it emulates a channel. At every point in the data network, the designer configures specific traffic flows to be prioritised through the fabric. This isn't new to data networks: Voice over IP/IP telephony has already driven much of the underlying Quality of Service technology.
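The prioritisation idea reduces to a scheduling decision at each egress port. A strict-priority sketch (a simplification; real switches combine several queueing disciplines) might look like this:

```python
import heapq


class PriorityScheduler:
    """Toy strict-priority egress queue: storage frames always dequeue
    before best-effort data frames, which is how a shared port can
    emulate a dedicated channel for storage."""

    STORAGE, DATA = 0, 1  # lower number = higher priority

    def __init__(self):
        self._q = []
        self._seq = 0     # preserves FIFO order within a priority class

    def enqueue(self, priority, frame):
        heapq.heappush(self._q, (priority, self._seq, frame))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._q)[2]


s = PriorityScheduler()
s.enqueue(s.DATA, "web1")
s.enqueue(s.STORAGE, "scsi1")
s.enqueue(s.DATA, "web2")
# The storage frame jumps the queue even though it arrived second:
assert [s.dequeue() for _ in range(3)] == ["scsi1", "web1", "web2"]
```

Strict priority alone can starve data traffic, which is why the DCB standards discussed below add per-class bandwidth guarantees rather than relying on priority alone.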
The art of converging storage and data onto a single network therefore relies on detecting different traffic types and forwarding them differently, according to their needs. Storage traffic must be lossless, reliable and low latency; data traffic needs bandwidth and flexibility. These are not incompatible. The need to share the wire means that Ethernet switches have adopted many of the features of storage fabrics, so much so that "Ethernet fabric" is now standard terminology for a data centre network.
Greg Ferro describes himself as Human Infrastructure for Cisco and Data Networking. He works freelance as a network architect and senior engineer/designer, mostly in the United Kingdom and previously in the Asia-Pacific region. He is currently focusing on data centre, security and application networking technologies, and spending a lot of time pondering design models, building operational excellence and creating business outcomes.
Next up is Duncan Hughes from Brocade, who reckons FCoE is usable and great for virtualised server environments.
Duncan Hughes - Pre-Sales Engineering Manager at Brocade
When and how far to converge Ethernet (IP) and Fibre Channel (FC) traffic, or whether to converge at all, is a decision that should be made in the context of the unique requirements of each organisation. In the short term, organisations can phase in network convergence and reduce complexity while still supporting virtualisation and cloud computing services. But how?
For data centres that can take advantage of a converged LAN/SAN environment, there are solutions available in the market that provide end-to-end FCoE (Fibre Channel over Ethernet) capabilities using the IEEE Data Center Bridging (DCB) protocols, enabling traditional IP and storage traffic to coexist on the same network. This converged design allows truly lossless communication within the Ethernet or storage fabric, taking advantage of the Priority-based Flow Control (PFC) and Enhanced Transmission Selection (ETS) capabilities defined within DCB to sustain critical network traffic, including FCoE. The architecture provides shared storage access and connectivity for servers over a high-performance, multi-pathing, reliable, resilient and lossless converged fabric. As a result, storage traffic is protected whilst 10Gbps connections are fully utilised.
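The ETS part of DCB is essentially weighted bandwidth allocation per traffic class. A minimal sketch of the arithmetic (the function name and the 60/40 split are illustrative assumptions, not values from any standard or product):

```python
def ets_share(link_gbps, class_weights):
    """Toy Enhanced Transmission Selection: divide link bandwidth among
    traffic classes in proportion to their configured weights. In real
    ETS a class may also borrow bandwidth that others leave idle, which
    this sketch does not model."""
    total = sum(class_weights.values())
    return {name: link_gbps * w / total for name, w in class_weights.items()}


# Hypothetical split on a 10Gbps converged port: 60% FCoE, 40% LAN
print(ets_share(10, {"fcoe": 60, "lan": 40}))  # {'fcoe': 6.0, 'lan': 4.0}
```

PFC then makes the FCoE class lossless by pausing just that class when its buffers fill, while the LAN class keeps flowing (and dropping) as ordinary Ethernet.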
Virtual machine environments rely heavily on shared storage platforms. The FC storage fabric (SAN) has been an industry standard for more than a decade, and its tried and trusted capabilities will continue to be the solution of choice for a large number of organisations for the foreseeable future. However, Ethernet fabrics present an alternative. With the inherent lossless nature of an Ethernet fabric, FCoE becomes a reality, whilst the benefits of a fabric solution can also bring enhancements to iSCSI and NFS storage alternatives, with significant cost benefits.
Ethernet fabrics are implemented at Layer 2, flattening the network, and allowing it to scale beyond the boundaries associated with traditional Ethernet, and, at the same time, reducing capital and operating costs. Like storage fabrics before them, Ethernet fabrics are self-aggregating, scale efficiently, and are lossless and deterministic.
In Ethernet fabrics, all switches are aware of all end-devices so virtual machine mobility does not require manual reconfiguration of the network. Finally, the fabric is extensible between data centres via core routers and Ethernet tunnels in the IP network. Virtual machines with their applications can now move across a server cluster “stretched” between private and public cloud data centres.
Cluster traffic runs through the Ethernet tunnel whilst storage traffic can also be tunnelled over IP using the industry-standard Fibre Channel over IP (FCIP) protocol so that application data can be quickly replicated between public and private cloud data centres.
Duncan Hughes is a pre-sales engineering manager at Brocade, joining when Brocade acquired Foundry Networks, where he was also a systems engineering manager, having previously been at Anite Networks.
Our third expert is analyst Tony Lock who believes that new processes and procedures could be a wise investment when converging Fibre Channel onto Ethernet.
Tony Lock - Programme Director, Freeform Dynamics
For much of the past decade, the "networking" that connects users to their applications and services has often been taken for granted. But if general networking has been less than widely appreciated, the networks that tie servers to storage have been almost invisible to everyone bar the all-too-few skilled storage administrators who delve into the art. Many organisations are contemplating significant changes to their IT architectures, and IT vendors are promoting a raft of new technologies in the storage arena. This raises the question: is it feasible to bring specialised storage networks and general Ethernet data networks together?
General networking is now very firmly based around the TCP/IP protocol suite and Ethernet, whilst storage networking is still grappling with many protocols and network technologies. Amongst these, perhaps the most firmly entrenched is Fibre Channel, a lossless, deterministic protocol designed to ensure that any data sent to the storage disks gets there with minimum latency and very little chance of data corruption. These characteristics were not originally enshrined in the standard Ethernet protocols employed for general networking. While work to incorporate them has continued apace, support is still far from universal, and the need for new approaches, equipment and tools has pushed up equipment costs.
So is it possible to get storage traffic and general network traffic to share a common cabling system, namely Ethernet, cost-effectively? Technically the answer is yes, as protocols such as FCoE (Fibre Channel over Ethernet) and iSCSI have now matured sufficiently for mainstream adoption. Organisations are evidently beginning to contemplate converging their networking stacks and management, and with them the cabling infrastructures they employ.
The cost saving of cabling a single network for both storage and data is attractive, as is the flexibility potentially available for dynamic reconfiguration. But there are significant challenges between contemplating such a change and making it happen on the ground. For one, network cable infrastructures have very long lifetimes, and replacing them is by no means simple.
[Chart: Freeform Dynamics research finding]
Despite this, as the chart indicates, organisations are becoming aware of the impact that converged networks could have, even if implementations have yet to ramp up. That said, can storage and application traffic run on the same physical network without users or applications being affected by service degradation?
The answer is 'yes', but expecting it to just work out of the box would be asking a bit much. Managing the complexity of convergence will require sophisticated traffic monitoring and management tools to ensure that service quality is maintained at the desired levels. An understanding of the baseline of current usage and service quality, combined with projections of the future growth requirements of each service to be delivered over the network, is essential. Yet few organisations undertake such management processes routinely today.
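What such a baseline might look like in its simplest form: a percentile over polled utilisation samples. This is a generic illustration (the function name, percentile choice and sample figures are all assumptions, not drawn from any particular monitoring product):

```python
def utilisation_baseline(samples_mbps, percentile=95):
    """Toy baseline: the given percentile of polled link utilisation,
    a common starting point for judging how much headroom a link has
    before storage traffic is added to it."""
    ordered = sorted(samples_mbps)
    k = min(len(ordered) - 1, int(len(ordered) * percentile / 100))
    return ordered[k]


# Hypothetical five-minute polling samples from one uplink, in Mbps
samples = [120, 340, 210, 980, 450, 300, 500, 610, 150, 720]
print(utilisation_baseline(samples))  # -> 980
```

A percentile is preferred over a simple average because storage traffic must fit within the peaks, not the mean.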
Using one network to support all forms of traffic will require new processes to be put in place, as well as new tools. The lack of established best practice to date, coupled with significant costs and operational challenges, makes it highly likely that the adoption of combined networks will take place over many years, not months.
Tony Lock is a Programme Director at Freeform Dynamics, responsible for driving coverage in the areas of Systems Infrastructure and Management, IT Service Management, Outsourcing, and emerging hosting models such as Software as a Service and Cloud Computing. He also considers the role of financial models in relation to IT investment.
Still a conundrum or all clear?
What can we draw from this? Technically, running Fibre Channel traffic over Ethernet via FCoE is sound, as Greg Ferro explains, and it presents interesting possibilities for virtual machine environments (Duncan Hughes), but don't expect it to work fresh out of the box. It has to be planned, managed and monitored carefully, because this is going to be a "version 1" initiative. Converged Ethernet networking is not a silver bullet, and it needs careful and thorough preparation and implementation so you don't shoot yourself in the foot.
Go forward, but step carefully. ®