Fast SANs seek speedy networks
The race is on
The fastest storage area network (SAN) on the planet needs the fastest server-storage network links available. So what are they?
There are three candidates: Ethernet, Fibre Channel and InfiniBand.
Ethernet SANs use the iSCSI storage protocol and link servers and storage across Ethernet. Examples are HP’s P4000, Dell’s EqualLogic arrays and unified storage FAS arrays from NetApp.
You can get Ethernet links running at 1Gbit/s or 10Gbit/s. A 10Gbit/s link is the fastest practicable Ethernet today, and that gives us our baseline iSCSI SAN speed: 1Gbit/s at worst and 10Gbit/s at best.
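As a rough sketch, converting those link rates from bits to bytes shows what an iSCSI SAN can move in the best case. Note this is raw line rate only; real iSCSI throughput is lower once Ethernet, TCP/IP and iSCSI protocol overheads are subtracted.

```python
# Back-of-envelope conversion of the Ethernet link speeds quoted above.
# Raw line rate only; real-world iSCSI throughput will be lower.

def gbit_to_gbyte(gbit_per_s: float) -> float:
    """Convert a link rate in Gbit/s to GB/s (8 bits per byte)."""
    return gbit_per_s / 8

for rate in (1, 10, 40):  # 1GbE, 10GbE, and the coming 40GbE
    print(f"{rate:>2} Gbit/s Ethernet = {gbit_to_gbyte(rate):.3f} GB/s raw")
```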
We know 40Gbit/s Ethernet is coming, but it isn’t here yet.
Fibre Channel fastness
The fastest Fibre Channel runs at 16Gbit/s. Brocade has Fibre Channel switches, such as the 6510, which run at that speed, pushing 2GB/sec through each of up to 48 ports.
Arrays, such as HP's new P10000, support 16Gbit/s SAN connectivity.
The 6510 delivers 768 Gbit/s (96GB/sec) aggregate full-duplex throughput if all 48 ports are used. That is a heck of a lot of data.
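The 6510's aggregate figure follows directly from the port count and per-port rate quoted above; a quick sketch of the arithmetic:

```python
# How the Brocade 6510's quoted aggregate throughput is derived
# from the figures in the text: 48 ports at 16Gbit/s each.
ports = 48
port_rate_gbit = 16

aggregate_gbit = ports * port_rate_gbit   # 768 Gbit/s full duplex
aggregate_gbyte = aggregate_gbit / 8      # 96 GB/s

print(f"{aggregate_gbit} Gbit/s aggregate = {aggregate_gbyte:.0f} GB/s")
```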
Cisco, the other main Fibre Channel switch and director supplier, does not have a 16Gbit/s product. Its strategy is to push users towards using Fibre Channel over Ethernet (FCoE) instead of physical Fibre Channel. This limits individual switch, server and array ports to 10Gbit/s Ethernet, significantly slower than 16Gbit/s Fibre Channel.
Brocade is also working on doubling Fibre Channel speed to 32Gbit/s, a huge jump.
Today you can buy a 16Gbit/s-based SAN that gives you 60 per cent more server storage network speed than Ethernet. Can any SAN go faster still?
Rorke Data had InfiniBand-based SANs for the broadcast and media market a few years ago. Its Galaxy Aurora IB was an InfiniBand-based SAN, using Mellanox InfiniBand gear, that shipped data out at 1.7GB/sec back in 2008. That was way faster than the 2Gbit/s and 4Gbit/s Fibre Channel of the time.
Rorke today supplies Fibre Channel and Ethernet-based SAN storage. It has felt the pull of standards in storage networking – InfiniBand SANs just didn't sell in enough quantities.
Purely from a speed point of view that is a pity, as quad data rate InfiniBand runs at 8Gbit/s on one lane and lanes can be combined to produce higher speeds. A 12X aggregation carries 96Gbit/s, which would make a very fast SAN indeed.
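The lane-aggregation arithmetic above can be sketched as follows, using the standard 1X, 4X and 12X InfiniBand lane bundles and the 8Gbit/s-per-lane QDR data rate from the text:

```python
# QDR InfiniBand lane aggregation as described above: 8Gbit/s of
# usable data per lane, bundled into standard 1X, 4X and 12X links.
qdr_lane_gbit = 8
bundle_gbit = {lanes: lanes * qdr_lane_gbit for lanes in (1, 4, 12)}

for lanes, rate in bundle_gbit.items():
    print(f"{lanes:>2}X QDR InfiniBand = {rate} Gbit/s")
```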
In fact, you can still buy an InfiniBand SAN, but only from Oracle and only for running Oracle software.
The company’s Exadata Database Machine and Storage Server data warehousing products are direct descendants of Sun's Honeycomb and Thumper storage-rich server systems.
The X2-8 Database Machine has two 8-socket database servers and 14 Storage Servers linked through InfiniBand switches. Although it is, conceptually, a single system, inside we have 16 database server CPUs linked across InfiniBand to 14 Storage Servers.
The Storage Servers each have two 6-core Xeon L5640 CPUs and 12 disk drives: one core per disk drive and two InfiniBand links.
These servers can do SQL processing and so offload the main database server CPUs. Put that aside. We have 14 individual storage arrays hooked up to 16 database server CPUs across InfiniBand: a SAN in a rack.
With its 40Gbit/s link capacity it is a faster SAN than a 16Gbit/s Fibre Channel implementation, and would even be faster than any coming 32Gbit/s Fibre Channel implementation.
Breaking the barrier
Can any SAN implementation beat the speed of the Exadata SAN?
It would have to use an even faster link than InfiniBand. EMC is floating the idea of spare engines – Intel servers in essence – in its VMAX arrays running applications inside virtual machines.
VMAX, running under overall ESXi control, would have two kinds of controllers or engines: storage engines running Enginuity, the VMAX operating system, and application (app) engines running application software.
These app engines would use data accessed through the storage engines, and the engines connect to each other using a Rapid IO interconnect operating at 2.5GB/sec: that's 20Gbit/s notionally. Each VMAX engine has four Rapid IO connections, giving it a notional 10GB/sec bandwidth.
That would equate to 80Gbit/s. If the app engines have the same four Rapid IO interconnects, then they would have the same 80Gbit/s link to the storage engines, meaning that we would have a shared storage resource connected by an 80Gbit/s Rapid IO network to application servers. That's a SAN.
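The Rapid IO arithmetic in the two paragraphs above can be checked in a few lines; the figures are the notional per-link rate and per-engine link count from the text:

```python
# Rapid IO bandwidth sketch from the text: each interconnect runs
# at a notional 2.5GB/s, and each VMAX engine has four of them.
per_link_gbyte = 2.5
links_per_engine = 4

per_link_gbit = per_link_gbyte * 8                 # 20 Gbit/s per link
engine_gbyte = per_link_gbyte * links_per_engine   # 10 GB/s per engine
engine_gbit = engine_gbyte * 8                     # 80 Gbit/s per engine

print(f"{per_link_gbit:.0f} Gbit/s per link, "
      f"{engine_gbit:.0f} Gbit/s per engine")
```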
Like the Exadata Database Machine, it would be a "SAN in a can". It would have links twice as fast as the Oracle Exadata SAN and so become the fastest SAN available.
Meanwhile, Oracle is the SAN speed king with its hot-rod Exadata product, and that's great, so long as you only want to use the Oracle database.
The fastest open SAN is a 16Gbit/s Fibre Channel one. As ever, speed brings limits. ®
and who pays your wages I wonder??
"Do I see end devices (storage arrays and servers) congesting SAN links? At 2G - yes frequently, at 4G - rarely, at 8G not yet seen one."
Since the vast majority of all SAN ports are 8G now due to Brocade's market leadership, you must be looking in the wrong places. And the next paragraph about ISLs is typical Cisco BS and simply leads to far greater management and operational overhead and a pretty clunky delivery of what is streamlined from the market leaders.
Looks like a response from someone who has Cisco coloured dollars in the back pocket. Just because Cisco are unable to produce innovative or market leading products without acquisition, that is no good reason to limit progress and development elsewhere. As with Japanese car manufacturers who gave you features you did not know you needed until you got them, looks like 16G is here to stay and no amount of naysaying from luddites like yourself will change that.
I am sure Cisco have never reached ahead for the sake of a few dollars. Nor used marketing to hide the plain deficiencies in so much of their product sets... ask RIM.
Reality Check - Who is using this bandwidth ???
Having seen statistics from enterprise datacentres across the world, do I see end devices (storage arrays and servers) congesting SAN links? At 2G - yes, frequently; at 4G - rarely; at 8G I have not yet seen one. That's not to say they aren't out there. There are corner cases for specialist applications.
Where bigger and faster pipes can be of benefit is between switches (ISLs), as they can simplify inter-switch and inter-data-centre link design and configuration.
Virtualised servers were supposed to be pushing up the demand for host-side traffic. I'm seeing lots of virtualised servers connecting to SANs, but are they driving storage bandwidth significantly higher? Not yet. Most likely because customers haven't yet virtualised their I/O-intensive apps.
8G FC is more than enough for the vast majority of datacentres today (and most likely for the next year or so). 16G FC appears to be the industry reaching ahead of market demand to turn over product.
Speed is a hard thing to grasp. Often the "filling of a pipe" is done by the aggregate users of the SAN. So perhaps a better way of looking at things is whether a SAN, using a single client, can saturate the line. It should be fairly easy to saturate a line when considering requests from a multitude of clients.
It's possible that a SAN CAN saturate 16Gbit using just ONE client, for example... but how many drives and what config were needed to pull that off? Again, it matters when considering how a particular SAN storage unit scales.
So things to consider:
1. Number of clients
2. Number of pathways
3. Number of drives
4. RAID level
And probably a lot more....
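To put a rough number on point 3 in the list above: a back-of-envelope estimate of how many drives it takes to fill one 16Gbit/s FC link. The per-drive sequential rate below is an illustrative assumption, not a measured figure, and ignores RAID overhead and random-I/O workloads.

```python
# Hypothetical sketch: drives needed to saturate a 16Gbit/s FC link.
# The 150MB/s per-drive sequential rate is an assumed figure for
# illustration only; RAID overhead and random I/O are ignored.
import math

link_gbyte = 16 / 8      # 16Gbit/s FC link = 2 GB/s raw
drive_mbyte = 150        # assumed sequential MB/s per disk drive

drives_needed = math.ceil(link_gbyte * 1000 / drive_mbyte)
print(f"~{drives_needed} drives to fill one 16Gbit/s link (sequential)")
```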