Fast SANs seek speedy networks

The race is on

The fastest storage area network (SAN) on the planet needs the fastest server-storage network links available. So what are they?

There are three candidates: Ethernet, Fibre Channel and InfiniBand.

Ethernet SANs

Ethernet SANs use the iSCSI storage protocol and link servers and storage across Ethernet. Examples are HP’s P4000, Dell’s EqualLogic arrays and unified storage FAS arrays from NetApp.

You can get Ethernet links running at 1Gbit/s or 10Gbit/s. A 10Gbit/s link is the fastest Ethernet practicable, and that gives us our baseline iSCSI SAN speed: 1Gbit/s at worst and 10Gbit/s at best.

We know 40Gbit/s Ethernet is coming, but it isn’t here yet.

Fibre Channel fastness

The fastest Fibre Channel runs at 16Gbit/s. Brocade has Fibre Channel switches, such as the 6510, which run at that speed, pushing 2GB/sec through each of up to 48 ports.

Arrays, such as HP's new P10000, support 16Gbit/s SAN connectivity.

The 6510 delivers 768 Gbit/s (96GB/sec) aggregate full-duplex throughput if all 48 ports are used. That is a heck of a lot of data.
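Back-of-the-envelope, those figures hang together like this – a rough sketch using the marketing line rates, with encoding overhead ignored:

```python
# Rough arithmetic behind the Brocade 6510 figures quoted above
# (marketing line rates; encoding overhead ignored).
PORT_SPEED_GBIT = 16      # 16Gbit/s Fibre Channel per port
PORTS = 48                # a fully populated 6510

per_port_gbytes = PORT_SPEED_GBIT / 8        # ~2 GB/sec per port
aggregate_gbits = PORT_SPEED_GBIT * PORTS    # 768 Gbit/s across all ports
aggregate_gbytes = aggregate_gbits / 8       # 96 GB/sec

print(per_port_gbytes, aggregate_gbits, aggregate_gbytes)   # 2.0 768 96.0
```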

Cisco Fibre Channel

Cisco, the other main Fibre Channel switch and director supplier, does not have a 16Gbit/s product. Its strategy is to push users towards using Fibre Channel over Ethernet (FCoE) instead of physical Fibre Channel. This limits individual switch, server and array ports to 10Gbit/s Ethernet, significantly slower than 16Gbit/s Fibre Channel.

Brocade is also working on doubling Fibre Channel speed to 32Gbit/s, a huge jump.

Today you can buy a 16Gbit/s Fibre Channel SAN that gives you 60 per cent more server-storage network speed than the fastest Ethernet. Can any SAN go faster still?
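That 60 per cent figure is simply the ratio of the two headline line rates:

```python
# 16Gbit/s Fibre Channel versus 10Gbit/s Ethernet, as a percentage gain
fc, ethernet = 16, 10
print((fc - ethernet) / ethernet * 100)   # 60.0 per cent more link speed
```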

InfiniBand SANs

Rorke Data had InfiniBand-based SANs for the broadcast and media market a few years ago. Its Galaxy Aurora IB was an InfiniBand-based SAN, using Mellanox InfiniBand gear, that shipped data out at 1.7GB/sec back in 2008. That was way faster than the 2Gbit/s and 4Gbit/s Fibre Channel of the time.

Rorke today supplies Fibre Channel and Ethernet-based SAN storage. It has felt the pull of standards in storage networking – InfiniBand SANs just didn't sell in enough quantities.

Purely from a speed point of view that is a pity, as quad data rate InfiniBand runs at 8Gbit/s on one lane and lanes can be combined to produce higher speeds. A 12X aggregation carries 96Gbit/s, which would make a very fast SAN indeed.
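For reference, here is a rough sketch of how QDR InfiniBand lanes aggregate. The 8Gbit/s per lane is the usable data rate after 8b/10b encoding; the 40Gbit/s often quoted for a 4X link is the raw signalling rate.

```python
# QDR InfiniBand: 8Gbit/s of data per lane after 8b/10b encoding.
# Lanes aggregate into 1X, 4X and 12X links.
QDR_LANE_DATA_GBIT = 8
QDR_LANE_SIGNAL_GBIT = 10    # raw signalling rate per lane

for lanes in (1, 4, 12):
    data = lanes * QDR_LANE_DATA_GBIT
    signal = lanes * QDR_LANE_SIGNAL_GBIT
    print(f"{lanes}X link: {data} Gbit/s of data ({signal} Gbit/s signalling)")
# 1X link: 8 Gbit/s of data (10 Gbit/s signalling)
# 4X link: 32 Gbit/s of data (40 Gbit/s signalling)
# 12X link: 96 Gbit/s of data (120 Gbit/s signalling)
```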

Exadata Database Machine

In fact, you can still buy an InfiniBand SAN, but only from Oracle and only for running Oracle software.

The company’s Exadata Database Machine and Storage Server data warehousing products are direct descendants of Sun's Honeycomb and Thumper storage-rich server systems.

The X2-8 Database Machine has two 8-socket database servers and 14 Storage Servers linked through InfiniBand switches. Although it is, conceptually, a single system, inside we have 16 database server CPUs linked across InfiniBand to 14 Storage Servers.

The Storage Servers each have two 6-core Xeon L5640 CPUs and 12 disk drives – one core per disk drive – plus two InfiniBand links.

These servers can do SQL processing and so offload the main database server CPUs. Put that aside. We have 14 individual storage arrays hooked up to 16 database server CPUs across InfiniBand: a SAN in a rack.

With its 40Gbit/s link capacity it is a faster SAN than a 16Gbit/s Fibre Channel implementation, and will even be faster than any coming 32Gbit/s Fibre Channel implementation.
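As a crude comparison of the headline per-link rates mentioned in this article (signalling and marketing figures rather than usable data rates):

```python
# Headline per-link rates quoted in the article - a rough comparison only.
links_gbit = {
    "QDR InfiniBand (Exadata)": 40,   # 4X link, signalling rate
    "16Gbit/s Fibre Channel": 16,     # fastest shipping Fibre Channel
    "32Gbit/s Fibre Channel": 32,     # still on the roadmap
}

baseline = links_gbit["16Gbit/s Fibre Channel"]
for name, speed in links_gbit.items():
    print(f"{name}: {speed} Gbit/s ({speed / baseline:.1f}x today's fastest FC)")
```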

Breaking the barrier

Can any SAN implementation beat the speed of the Exadata SAN?

It would have to use an even faster link than InfiniBand. EMC is floating the idea of spare engines – Intel servers in essence – in its VMAX arrays running applications inside virtual machines.

VMAX, running under overall ESXi control, would have two kinds of controllers or engines: storage engines running Enginuity, the VMAX operating system, and application (app) engines running application software.

These app engines would use data accessed through the storage engines, and the engines connect to each other using a RapidIO interconnect operating at 2.5GB/sec: that's 20Gbit/s notionally. Each VMAX engine has four RapidIO connections, giving it a notional 10GB/sec bandwidth.

That would equate to 80Gbit/s. If the app engines have the same four RapidIO interconnects, then they would have the same 80Gbit/s link to the storage engines, meaning we would have a shared storage resource connected to application servers by an 80Gbit/s RapidIO network. That's a SAN.
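The arithmetic behind those figures, as a sketch – EMC has only floated the idea, so the link counts here simply follow the description above:

```python
# The notional RapidIO maths for a hypothetical VMAX 'SAN in a can'.
RAPIDIO_GBYTES_PER_LINK = 2.5     # 2.5GB/sec per RapidIO connection
LINKS_PER_ENGINE = 4              # four RapidIO connections per engine

per_link_gbits = RAPIDIO_GBYTES_PER_LINK * 8                  # ~20 Gbit/s
engine_gbytes = RAPIDIO_GBYTES_PER_LINK * LINKS_PER_ENGINE    # 10 GB/sec
engine_gbits = engine_gbytes * 8                              # 80 Gbit/s

print(per_link_gbits, engine_gbytes, engine_gbits)            # 20.0 10.0 80.0
```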

Like the Exadata Database Machine, it would be a "SAN in a can". It would have links twice as fast as the Oracle Exadata SAN and so become the fastest SAN available.

Meanwhile, Oracle is the SAN speed king with its hot-rod Exadata product, and that's great, so long as you only want to use the Oracle database.

The fastest open SAN is a 16Gbit/s Fibre Channel one. As ever, speed brings limits. ®
