Emulex pins adapter future to Ethernet and encryption
Adapter maker goes for FCoE
Emulex has announced a second-generation Fibre Channel over Ethernet adapter, and has added encryption to its Fibre Channel adapter range using RSA technology.
Emulex supplies Fibre Channel Host Bus Adapters (HBAs) with virtualisation capabilities, so that physical servers can access Fibre Channel SANs and virtual machines can share their physical host's HBA. It supports the layering of the Fibre Channel protocol over Ethernet (FCoE) and has already produced a converged network adapter (CNA) that combines Ethernet NIC and HBA functionality.
It describes the OneConneX, its second-generation CNA, as a universal CNA and positions it for a data centre world currently using up to three networks: Ethernet, InfiniBand and Fibre Channel. Emulex believes that Ethernet, in its 10GbE form, has the installed base and market momentum to become the dominant data centre networking fabric, and that OneConneX is 'universal' in that it converges Fibre Channel onto Ethernet. There is no support for InfiniBand Host Channel Adapter (HCA) functionality and no intention of providing it.
FCoE take-up depends upon there being a lossless and reliably low-latency Ethernet connection to a SAN, which in turn depends upon the formal standardisation of Data Centre Ethernet, expected in early 2010. That means OneConneX will mostly be used in trial FCoE installations for now.
The product offloads work from the host server's CPU: iSCSI TCP/IP and FCoE stack processing are carried out on the card, which also performs RDMA (Remote Direct Memory Access) processing for linking servers in an Ethernet cluster. RDMA, says Sean Walsh, Emulex's corporate marketing VP, gives Ethernet InfiniBand's low latency.
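To see why that offload matters, consider a back-of-envelope sum using the old rule of thumb of one hertz of CPU per bit per second of software TCP/IP throughput; the rule and the 3GHz core below are generic industry assumptions, not Emulex's figures.

    # Rough estimate of host CPU cost of software TCP/IP at 10GbE line rate,
    # using the classic "1Hz of CPU per 1bit/s of TCP throughput" heuristic.
    LINK_RATE_BPS = 10e9     # 10GbE
    CPU_HZ_PER_BPS = 1.0     # rule-of-thumb cost of software TCP/IP processing
    CORE_CLOCK_HZ = 3e9      # assumed 3GHz core

    cores_consumed = (LINK_RATE_BPS * CPU_HZ_PER_BPS) / CORE_CLOCK_HZ
    print(f"~{cores_consumed:.1f} x 3GHz cores to saturate 10GbE in software")
    # => ~3.3 cores; hardware offload hands those cycles back to applications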
SecureConneX is an 8Gbit/s HBA with added RSA encryption and key management. All traffic passing through the HBA can be encrypted, meaning sysadmins can manage encryption at the server edge of the network, rather than at the SAN fabric director or in individual encrypting storage products or components, and can do so with a key per virtual machine, with support for VMotion. Walsh said that going it alone rather than using RSA "would be an absolute nightmare."
He also said: "I would expect to see convergence of HBA encryption and the CNA technology over time."
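As a rough illustration of the key-per-virtual-machine idea, and no more than that (this is not Emulex's or RSA's implementation; the VM names, key store and Python 'cryptography' package are all stand-ins), a sketch might look like this:

    from cryptography.fernet import Fernet

    # Hypothetical per-VM key store; in SecureConneX the keys would live in
    # RSA's key management system, not in an in-process dict.
    vm_keys = {vm: Fernet.generate_key() for vm in ("vm-finance", "vm-web")}

    def encrypt_block(vm_id: str, block: bytes) -> bytes:
        # Each VM's storage traffic is encrypted under that VM's own key,
        # so the key can follow the VM when it migrates (e.g. via VMotion).
        return Fernet(vm_keys[vm_id]).encrypt(block)

    def decrypt_block(vm_id: str, token: bytes) -> bytes:
        return Fernet(vm_keys[vm_id]).decrypt(token)

    ciphertext = encrypt_block("vm-finance", b"sensitive LUN data")
    assert decrypt_block("vm-finance", ciphertext) == b"sensitive LUN data"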
There is a new OneCommand management package, a development of the existing HBAnyware software. Emulex has also revamped its partner program, calling it EmulexConneX. Lastly, there is a calculator which, the company says, can show how much you would save by moving from separate adapters to the new CNA.
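Absent the actual tool, the shape of that consolidation sum is easy to sketch; every price and adapter count below is a made-up placeholder:

    def adapter_savings(servers, nics_per_server, hbas_per_server,
                        nic_cost, hba_cost, cna_cost, cnas_per_server=2):
        # Cost of today's separate NICs and HBAs vs a pair of CNAs per server.
        before = servers * (nics_per_server * nic_cost + hbas_per_server * hba_cost)
        after = servers * cnas_per_server * cna_cost
        return before - after

    # 100 servers, each with 4 NICs and 2 HBAs, consolidated onto 2 CNAs each
    print(adapter_savings(servers=100, nics_per_server=4, hbas_per_server=2,
                          nic_cost=300, hba_cost=900, cna_cost=1200))  # => 60000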
No product pricing and availability information was supplied. ®
From the Horse’s Mouth: What OneConnect Supports
I’m a director of product marketing at Emulex, and as Magellen posted on 2/18, the Emulex OneConnect referenced in the article is a new platform, not to be confused with the currently shipping Emulex LP21000 FCoE CNA. The new Emulex OneConnect platform will enable a new class of products called universal CNAs, which provide universal connectivity for the data center and replace a number of point products, such as the NICs, iSCSI HBAs and FCoE CNAs used today. With full offload and hardware acceleration for a number of protocols, including FCoE and iSCSI, the technology delivers the efficiency necessary to extract maximum value from deployed servers, irrespective of the customer’s choice of protocols. Emulex has a reputation for bringing storage products with enterprise-class scalability and reliability to market, and will deliver that value again as we introduce products based on the OneConnect platform later this year. Stay tuned for more details on the OneConnect family.
This is a new card from Emulex, not the old one.
This is a new card, and apparently uses new Ethernet silicon (not the Intel Oplin Ethernet ASIC on the current CNA). The new Emulex NIC is not yet shipping, so it is quite possible that it does support full offload (TOE), with the associated benefits for the iSCSI and iWARP/RDMA protocols.
That said, partial offload may be adequate for iSCSI, depending on the workload.
The benefit of FCoE is that it works with existing FC storage, without the need for special gateways. While it requires enhanced Ethernet (CEE/DCE), it only requires that enhanced link up to the point where the storage traffic is split off onto a standard FC link, which will typically be just the hop from the server NIC to the access-layer DCE switch.
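That gateway-free behaviour falls out of the encapsulation itself: an FCoE frame is just a native FC frame carried inside an Ethernet frame with EtherType 0x8906, so the switch can unwrap the FC frame unchanged. A simplified Python sketch (the real FC-BB-5 header also carries version bits and SOF/EOF delimiters, which I omit, and the MACs and payload are placeholders):

    import struct

    FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

    def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
        # Simplified: prepend an Ethernet header to a raw FC frame. The
        # access-layer switch strips this header and forwards the FC frame
        # onto a standard Fibre Channel link, untouched.
        eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
        return eth_header + fc_frame

    frame = fcoe_encapsulate(b"\x0e\xfc\x00\x00\x00\x01",   # placeholder MACs
                             b"\x02\x00\x00\x00\x00\x01",
                             b"\x00" * 28)                  # placeholder FC frame
    print(len(frame), "bytes on the wire (before padding and FCS)")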
iSCSI is not supported by this card
The article says that the iSCSI stack is offloaded, but that seems incorrect: Emulex cards cannot offload iSCSI, and they cannot even offload TCP in stateful mode (full offload). Only the ordinary partial IP optimizations, such as large send offload, are supported, and even cheap network adapters have had those for many years. Without TCP offload I found very little difference between this and an ordinary Fibre Channel card.
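For what it's worth, you can check which of those partial offloads a Linux NIC actually has enabled by shelling out to ethtool ('eth0' below is a placeholder interface name, and ethtool must be installed):

    import subprocess

    # 'ethtool -k <iface>' lists offload features and whether each is on/off.
    out = subprocess.run(["ethtool", "-k", "eth0"],
                         capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        # Pick out the segmentation/checksum offloads discussed above.
        if any(k in line for k in ("segmentation", "large-receive", "checksum")):
            print(line.strip())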
There are two approaches to connecting storage:
1. FCoE (over modified Ethernet, special switches required)
2. iSCSI (over an ordinary IP network)
1. FCoE is not yet standardized, and the biggest problem with it is that it tries to add Fibre Channel features to Ethernet, which means merging two completely different worlds. This will make switches more complex and might make them more expensive.
2. iSCSI is a much simpler storage protocol, running over the very ordinary TCP/IP transport. Connections are made as plain IP connections, and it requires neither special Ethernet switches nor any Ethernet modifications. The standards are in place, but there are problems here too.
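The "plain IP connections" point is literal: an iSCSI session starts as an ordinary TCP connection to the IANA-registered target port 3260, as this minimal Python sketch shows (the address is a placeholder, and a real initiator would then run the iSCSI login and PDU exchange over the socket):

    import socket

    TARGET_IP = "192.0.2.10"   # placeholder address; substitute a real target
    ISCSI_PORT = 3260          # IANA-registered iSCSI target port

    # No special switches or Ethernet extensions needed: any routed IP path works.
    with socket.create_connection((TARGET_IP, ISCSI_PORT), timeout=5) as sock:
        print("TCP connection to iSCSI target established:", sock.getpeername())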
The first problem with iSCSI is Broadcom's practices (long product support delays). The majority of servers (Hewlett-Packard ProLiant, Dell and others) have Broadcom chips with TCP offload, and there have been unacceptable delays in releasing TCP offload drivers for these chips. Only a week ago did drivers appear for the 1Gbps Broadcom chips that fully support all the offload features (currently IPv4 only), and for Broadcom's flagship 10Gbps iSCSI chip, the 57711 (currently part of HP BL495c servers), there is still no driver available that supports iSCSI offload. The 57711 10Gbps universal network chip (IP, RDMA, iSCSI) looks like an ideal solution on the datasheet, but there are still no drivers that completely support its features.
Come on, Broadcom, get the drivers done; you are killing the iSCSI market and your own market share in IP storage networks!
Another problem is the lack of disk arrays with 10Gbps iSCSI interfaces; the majority of arrays have 1Gbps connections. The good part is that many have several 1Gbps interfaces, which can be combined into one faster connection.
The third problem is iSCSI stability under heavy load. For example, iSCSI is not stable under heavy load in the default Windows 2008 configuration. I was able to achieve acceptable stability only after two non-public Windows 2008 patches were applied and the disk firmware was upgraded to the latest version.
For iSCSI to become the technology of choice, the following must happen:
1. Broadcom must release working iSCSI offload drivers for their 10G iSCSI chip.
2. Windows 2008 SP2 must contain all the necessary patches for iSCSI stability, and all timeout parameters must have sensible defaults. iSCSI must work out of the box.
3. Disk arrays with native 10Gbps iSCSI interfaces must be released, and tested to run stably under the heaviest possible loads.
If these few remaining things are fixed soon (and I hope they will be, if the right Broadcom and HP executives are reading this), I see no need for FCoE.
Personally, I prefer iSCSI for its simplicity: it does not require modifying Ethernet, and because it is IP-based it can run over very long distances. Any server with a network adapter can access and mount an iSCSI disk. Performance will vary with IP network speed and latency, but iSCSI (TCP) connectivity is universal.