FCoE: Divergence vs convergence

If it ain't broke...

Others suggest that customers' organisational boundaries will also inhibit any end-to-end convergence onto Ethernet. Will the FC storage networking people smoothly move over to lossless, low-latency Ethernet even when end-to-end FCoE products arrive? Why should they? Ethernet, particularly the coming lossless and low-latency version, is new and untried in this role. Why fix something that isn't broken? What is going to make separate networking and storage organisational units work together?

Another question concerns native FCoE interfaces on storage arrays. If FC SAN storage management functions are not migrating to Ethernet platforms, then they stay on FC platforms which do I/O over FC cables to and from storage arrays with FC ports. So what is the point of array vendors adding FCoE ports? Are we looking at the possibility of direct FCoE communication between CNA-equipped servers and FCoE-equipped storage arrays: simple FCoE SANs, conceptually similar to iSCSI SANs? Do we really need another block storage access method?

Splitter!

Where's the convergence here, with block storage access protocols splintering into iSCSI, FCoE and FC, as well as InfiniBand storage access in supercomputing and high-performance computing (HPC) applications?

Effectively, FCoE convergence means just two things. The first, and realistic, one is server edge convergence, with the cost advantages limited to that area: a total cost of ownership comparison between NICs plus HBAs on the one hand and CNAs on the other, with no other diminution of your FC fabric estate. The second, and merely possible, one is a direct FCoE link between servers and FCoE storage arrays, with no equivalent of FC fabric SAN management functionality.
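The server-edge TCO comparison above boils down to simple arithmetic: separate NICs and HBAs each need their own adapter and switch port, while CNAs consolidate both onto converged ports. The sketch below illustrates the shape of that sum; every unit price in it is a hypothetical placeholder for illustration, not a vendor figure.

```python
# Back-of-envelope sketch of the server-edge TCO comparison:
# separate NICs + HBAs versus converged network adapters (CNAs).
# All prices are hypothetical placeholders, not real vendor figures.

def edge_cost(servers, adapters_per_server, adapter_price, port_price):
    """Cost of the adapters plus the switch ports they plug into."""
    ports = servers * adapters_per_server
    return ports * (adapter_price + port_price)

SERVERS = 100

# Traditional edge: two Ethernet NICs and two FC HBAs per server,
# each adapter needing its own switch port (placeholder prices).
traditional = (edge_cost(SERVERS, 2, adapter_price=300, port_price=200)    # NICs + LAN ports
               + edge_cost(SERVERS, 2, adapter_price=900, port_price=600)) # HBAs + FC ports

# Converged edge: two CNAs per server into lossless-Ethernet ports.
converged = edge_cost(SERVERS, 2, adapter_price=800, port_price=500)

print(f"traditional edge: ${traditional:,}")
print(f"converged edge:   ${converged:,}")
print(f"edge-only saving: ${traditional - converged:,}")
```

Note that only the edge terms appear in the sum: with these placeholder numbers the CNA side wins, but the rest of the FC fabric estate is unchanged on both sides, which is exactly the limited scope of the convergence being described.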

This could come if IBM adds FCoE ports to its SVC (SAN Volume Controller) so that it can talk FCoE to accessing servers and to the storage arrays it manages. Another possibility would be for HDS to add FCoE interfaces to its USP-V and USP-VM controllers, which virtualise both HDS and other vendors' storage arrays.

If customers have to maintain a more complex Ethernet, one carrying general LAN access, WAN access, iSCSI storage, FCoE storage and possibly server clustering, as well as their existing FC infrastructure, then where is the simplicity that some FCoE adherents say is coming? FCoE means, for the next few years, minimal convergence (limited to the server edge) and increased complexity. Is that a good deal? You tell me. ®
