Ease data traffic jams with some network improvements

Keeping up with bigger drives

Autoroutes, inter-states, autostrada, motorways and autobahns: they all arose out of the same realisation. Roads had become bottlenecks and traffic was coming to a standstill. The fix was to divide roads into two, limit access and add more lanes.

Trucks, vans, cars and motorcycles had more road space and could get there faster and more reliably. High-capacity, multi-lane, dual-direction highways revolutionised road transport.

They were needed because traffic was increasing inexorably. The same is true of IT. Our local area networks (LANs) are overloaded and 1GbitE is no longer enough.

Our LAN admin staff are holding out their network begging bowls like Oliver Twist and saying: "Please, Sir, can I have some more?"

Multiplication tables

Why is this happening? The fundamental reason is that there is more data – much, much more data – and more processing power.

Our laptops, PCs and servers have processors with multiple cores, each of which can execute several threads. Servers often have multiple processor sockets, and it is not uncommon to see a dual-socket server with six-core Xeon processors: that makes 12 cores in all, each roughly as powerful as an old Pentium processor.

Such a server can process 10 to 12 times more data in any given period than an old Pentium server.

It gets worse. Our servers are being virtualised with VMware, Hyper-V and Xen, so a single processing core can be running two or more virtual machines.

So now our 12-core server is running 20 to 30 or so virtual machines and looking like that number of virtual servers rather than our former single Pentium server.
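
To put rough numbers on that multiplication, here is a minimal Python sketch. The consolidation ratio and per-VM traffic figure are illustrative assumptions, not measurements:

    # Back-of-envelope sketch: how cores and virtualisation multiply the
    # traffic one physical server can push at the LAN. All figures are
    # illustrative assumptions, not measurements.
    sockets = 2            # dual-socket server
    cores_per_socket = 6   # six-core Xeon in each socket
    vms_per_core = 2       # assumed consolidation ratio
    mbit_per_vm = 100      # assumed average traffic per VM, in Mbit/s

    cores = sockets * cores_per_socket      # 12 cores
    vms = cores * vms_per_core              # roughly 24 virtual machines
    demand_gbit = vms * mbit_per_vm / 1000  # aggregate demand in Gbit/s

    print(f"{cores} cores hosting ~{vms} VMs")
    print(f"Aggregate LAN demand: ~{demand_gbit:.1f}Gbit/s against a 1GbitE link")

Even with those modest assumptions, a single consolidated server offers the LAN more than twice what a 1GbitE link can carry.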

Server memory has increased in size and the server PCIe bus in bandwidth as CPU core counts and processing power have grown. At the other end of the LAN link, storage arrays have also bulked up their capacity and processing power.

Disk drives have increased their capacity from a few hundred gigabytes to two and three terabytes. We can say there has been a roughly tenfold rise in disk capacity over the past few years.

Choking hazard

There has also been an increase in the number of drives an array can support, meaning that the storage arrays have pushed up their capacity into the petabyte area.
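
A quick sum shows how bigger drives and bigger drive counts compound. The figures below are illustrative assumptions rather than any particular vendor's specification:

    # Quick sum: bigger drives times more drive slots pushes an array's
    # raw capacity into petabyte territory. Figures are illustrative.
    tb_per_drive = 2         # a modern 2TB disk drive
    drives_per_array = 500   # assumed drive count for a large array

    total_pb = tb_per_drive * drives_per_array / 1000
    print(f"{drives_per_array} x {tb_per_drive}TB drives = {total_pb:.1f}PB raw capacity")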

The introduction of solid state drives, which respond much faster to storage I/O requests than disk drives – a hundred times faster or more – is another aspect of storage array development.

Several of these drives feeding data out onto the wire can choke a network link if it does not have enough bandwidth.
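
As a rough illustration of that choke point, consider the sketch below. The per-drive throughput is an assumed figure for the sake of the arithmetic, not a measured one:

    # Rough sketch: a handful of fast drives easily saturates a 1GbitE link.
    # The per-drive throughput is an assumption for illustration only.
    link_mb_per_s = 1 * 1000 / 8   # 1GbitE is roughly 125MB/s of capacity
    ssd_mb_per_s = 400             # assumed sustained throughput per SSD
    drives = 4                     # a small handful of drives

    offered = drives * ssd_mb_per_s
    print(f"Link capacity: ~{link_mb_per_s:.0f}MB/s")
    print(f"Offered load : ~{offered}MB/s ({offered / link_mb_per_s:.0f}x oversubscribed)")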

In the Fibre Channel storage area network (SAN) world, this has meant network speeds rising from 4Gbit/s to 8Gbit/s. Now 16Gbit/s product is being introduced, a fourfold increase on that 4Gbit/s baseline.

As the effective size and speed of the storage facility across the LAN from servers has increased, the arrays want to shift vastly more data across network links to servers.

The two main destinations either side of the LAN, server and storage, have vastly increased their data I/O needs, yet the network road between them has not been upgraded.

Sticking with 1GbitE would be like using cart tracks to link two cities. It won't do any more. We need to upgrade to 10GbitE.

There is an alternative. We could look at InfiniBand, currently running at quad data rate (QDR): an effective 8Gbit/s across a single link. A 14Gbit/s FDR (fourteen data rate) link is coming and a 25Gbit/s EDR (enhanced data rate) scheme will arrive after that.

But it seems most unlikely that customers will take on the expense and upset of tearing up Ethernet LAN links between servers and storage and replacing them with InfiniBand.

Tried and tested

And setting aside the expense, there is also the need to retrain storage and network admin staff and to ensure that all the software layered between the wire and the applications using the network functions correctly.

This simply won't happen. The strength, pervasiveness and economics of Ethernet technology have been proven time and again.

There's more. Ten gigabit Ethernet is not the destination; it is just a point along the way. As Fibre Channel is augmented and perhaps eventually replaced by Fibre Channel over Ethernet, substituting a 10GbitE wire for a 16Gbit/s Fibre Channel one does not make sense – especially now that 32Gbit/s Fibre Channel is being discussed by development engineers.

Fortunately, we have 40Gbit/s Ethernet waiting in the wings, with 100Gbit/s Ethernet planned after that. Ethernet promises to be the kind of graduated, steadily scaling network pipe technology we need.

And unlike motorways, it can be upgraded relatively easily. You won't see traffic cones, roadworks and lane closures on the data centre's Ethernet highways. ®
