QLogic: Software FCoE can't cut it
Interview What does a HW Fibre Channel over Ethernet (FCoE) vendor think of Intel's software FCoE stack? We asked QLogic's Host Solutions Group through Henrik Hansen, its European marketing director.
Intel has released an openly available Fibre Channel over Ethernet (FCoE) software stack so users can send and receive traffic to Fibre Channel-attached storage systems without having to use FCoE converged network adapters (CNAs): specialised Ethernet interface cards with FCoE functions added in hardware.
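On Linux, the software stack in question is the Open-FCoE project's initiator, driven from userspace with the fcoe-utils tools. A minimal sketch of bringing one up, assuming a DCB-capable switch port and a lossless-Ethernet configuration (the interface name eth2 is illustrative, and package and service names vary by distribution):

```shell
# Sketch only: bringing up a software FCoE initiator with fcoe-utils
# (Open-FCoE project). eth2 is a placeholder interface name.

# Enable DCB and priority flow control on the NIC (dcbtool ships with lldpad)
dcbtool sc eth2 dcb on
dcbtool sc eth2 pfc e:1
dcbtool sc eth2 app:fcoe e:1

# Create a per-interface FCoE config from the shipped template
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2

# Start the FCoE service and instantiate the initiator on eth2
service fcoe start
fcoeadm -c eth2

# Verify: list FCoE interfaces and discovered targets
fcoeadm -i
fcoeadm -t
```

All of the protocol processing behind these commands runs on the host CPU, which is precisely the point of contention in the interview below.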
The CNA supplier thinking is, generally, that servers should run applications and offload storage I/O processing to peripheral cards.
So El Reg asked Hansen about QLogic's position on Intel's SW FCoE. We've edited the replies to bring out what we think are the essential points. The resulting Q&A session has some QLogic marketing puffery in it but the general message is clear enough: software FCoE can’t cut it in the data centre.
El Reg: What are the main differences between Intel's open source FCoE software used on a standard Ethernet NIC and QLogic's FCoE-capable CNA?
Henrik Hansen: Host systems using Intel or other standard NICs running FCoE software initiators will be limited in scalability, as storage workload processing can command up to 70 per cent of the CPU's resources, leaving little for application processing or for virtual machines.
A CNA, on the other hand, offloads FCoE to the adapter and requires between 4 and 10 per cent of the CPU to process the same storage workloads. FCoE software initiators have limited OS support (currently Linux only), with Microsoft not yet commenting on a support time frame, and there are issues in the way hypervisors and the OS exchange protocol processing. The current implementation of SW initiators will not work; changes must first be made in the SW architecture for hypervisor support.
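Taking the quoted figures at face value, the headroom difference is simple arithmetic; a back-of-envelope sketch (the percentages are QLogic's claims, not independent measurements):

```python
# Back-of-envelope CPU headroom using the overhead figures quoted above:
# 70% for a software initiator, 4-10% for a CNA (QLogic's claims).

def vm_headroom(storage_overhead_pct: float) -> float:
    """CPU percentage left for VMs and applications after storage I/O."""
    return 100.0 - storage_overhead_pct

sw_initiator = vm_headroom(70)   # 30% left for workloads
cna_worst = vm_headroom(10)      # 90%
cna_best = vm_headroom(4)        # 96%

print(f"software initiator: {sw_initiator:.0f}% headroom")
print(f"CNA:                {cna_worst:.0f}-{cna_best:.0f}% headroom")
```

On these numbers a CNA-equipped host would have roughly three times the CPU headroom of one running a software initiator, which is the scalability argument Hansen makes throughout.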
SW initiators will require years of development and real-world deployment before they are ready for enterprise-wide storage deployments: Intel currently has only five vendors qualified, while QLogic has over 80 qualified vendors with thousands of products.
We have support for OEM partitioning (HP's Flex-10, IBM's Virtual Fabric Adapter and QLogic's own switch-agnostic NPAR, with embedded layer 2 switching), including QoS to segment networks and improve application performance, compared to Intel's limited VLAN partitioning.
El Reg: The HW CNA vendors say they offer much better management facilities as well. Moving on, what effect do you think Intel's FCoE software will have on the storage networking market?
Henrik Hansen: QLogic does not believe data centre administrators will trust their enterprise storage applications to an unproven technology that is receiving limited endorsement from storage manufacturers. It will take many years for [Intel's SW FCoE to] receive pervasive storage qualification, OS and hypervisor support. With most of the OEMs supporting proven storage driver stacks, it will in fact stall acceptance of SW initiators.
El Reg: With multi-core X86 processors do you really need a dedicated FCoE engine on a CNA instead of using a standard Ethernet NIC?
Henrik Hansen: Virtual Machine (VM) density continues to grow, and this will have a material effect on servers attempting to run software initiators and VM’s simultaneously. In order to efficiently scale these servers, CPU [cycles] must be conserved for the hypervisors, and not used for processing storage I/O.
El Reg: As there are now more software iSCSI initiators than iSCSI-capable Ethernet interface card initiators, won't the FCoE market follow iSCSI, with the majority of FC ports on servers being driven by software FCoE code stacks and not CNAs?
Henrik Hansen: iSCSI initiators have mainly been adopted by small to medium businesses (SMBs), where enterprise reliability is not as much of a concern as it is within the enterprise storage market.
A new dichotomy is occurring within the data centre; this is being driven by virtualisation and convergence onto one wire. Implementing both with an FCoE software initiator will starve the CPU and limit the scalability of virtual machines as well as throughput requirements for storage-demanding applications.
In order to scale efficiently within this new dichotomy, CPU resources are better spent serving VMs and applications than processing storage I/O.