The dark horse in data centre I/O simplification

Three candidates, not two

Comment Data centre I/O needs simplifying, everyone is agreed.

There are reckoned to be two candidates: data centre Ethernet (DCE), which Cisco and Brocade support, and InfiniBand, pushed by Voltaire and Mellanox. Both DCE and InfiniBand virtualize I/O by having other network protocols run inside their fat pipes. But there is a third fat pipe that can be used, and it does not require every server to have an expensive fat pipe adapter. Think PCIe. Hold that thought and read on.

Data centres have servers, storage boxes and networking resources. Servers are getting virtualized, racked and bladed. The net effect is that a rack of virtualized physical servers can still end up with two I/O adapters per server: an Ethernet NIC and a Fibre Channel HBA. The storage area network (SAN) fabric is expensive and complex, with tiers of switches and directors linking storage arrays to the servers that access them. By taking the Fibre Channel protocol and layering it on Ethernet (FCoE), the separate Fibre Channel fabric can be ditched and Ethernet used as a common transport.
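
To make the layering concrete, here is a minimal, purely illustrative Python sketch of the FCoE idea: a Fibre Channel frame simply rides as the payload of an Ethernet frame (FCoE uses Ethertype 0x8906), so the same Ethernet wire can carry storage and LAN traffic. The class names and fields are invented for illustration and are not taken from any real FCoE stack.

```python
from dataclasses import dataclass

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE traffic

@dataclass
class FibreChannelFrame:
    source_id: str   # FC source port ID (simplified)
    dest_id: str     # FC destination port ID (simplified)
    payload: bytes   # SCSI command or data

@dataclass
class EthernetFrame:
    src_mac: str
    dst_mac: str
    ethertype: int
    payload: bytes

def encapsulate_fcoe(fc_frame: FibreChannelFrame, src_mac: str, dst_mac: str) -> EthernetFrame:
    """Wrap a Fibre Channel frame inside an Ethernet frame - the essence of FCoE."""
    fc_bytes = f"{fc_frame.source_id}->{fc_frame.dest_id}".encode() + fc_frame.payload
    return EthernetFrame(src_mac=src_mac, dst_mac=dst_mac,
                         ethertype=FCOE_ETHERTYPE, payload=fc_bytes)

# Hypothetical usage: storage traffic travelling over the common Ethernet transport.
frame = encapsulate_fcoe(FibreChannelFrame("0x010100", "0x020200", b"SCSI READ"),
                         src_mac="aa:bb:cc:00:00:01", dst_mac="aa:bb:cc:00:00:02")
print(hex(frame.ethertype))  # 0x8906
```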

But existing Ethernet drops packets and has indeterminate latencies, two things that will break Fibre Channel links. The data centre world is waiting for DCE, which will solve both problems. It will require servers to have FCoE-supporting NICs, known as converged network adapters (CNAs), to link to the DCE infrastructure.

The view from planet InfiniBand (IB) is that high performance computing already has a unified fabric in... InfiniBand. Use IB as the unified data centre fabric. IB is transitioning to 40Gbit/s, with Mellanox and Voltaire both launching switches running at that rate. Run Ethernet and Fibre Channel on top of an InfiniBand backbone.

Instead of looking to the data centre backbone network to simplify things, we could look at the data centre network edge. VirtenSys, a start-up, intends to offer virtualised I/O for servers based on PCIe, which could also radically simplify data centre networking by substantially reducing the number of network adapters involved.

VirtenSys says servers do their I/O across networks such as Ethernet, Fibre Channel and InfiniBand via adapters, and to local storage using SAS or SATA protocols. In an x86 server, everything ultimately gets hooked up to a PCIe bus and goes to the server's memory and CPU for processing. This bus can be extended outside the server. Why not extend each server's PCIe bus to a switch which has the adapters plugged into it, instead of into each server? This switch would virtualize those adapters such that each server thinks it still has its own local adapter.
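
As a thought experiment, the sharing model can be sketched in a few lines of Python: several servers register with a virtualizing switch, which hands each of them what looks like a dedicated adapter but is really a slice of one shared physical NIC or HBA. Everything here (class names, methods) is hypothetical; it shows the shape of the idea, not VirtenSys's implementation.

```python
# Hypothetical sketch of PCIe-based I/O virtualization: many servers share one
# physical adapter through a switch that presents each with a virtual adapter.

class PhysicalAdapter:
    def __init__(self, name: str):
        self.name = name

    def transmit(self, server_id: str, data: bytes) -> None:
        print(f"{self.name} sending {len(data)} bytes on behalf of {server_id}")

class VirtualAdapter:
    """What each server sees: apparently its own local adapter."""
    def __init__(self, server_id: str, backing: PhysicalAdapter):
        self.server_id = server_id
        self.backing = backing

    def transmit(self, data: bytes) -> None:
        # All virtual adapters funnel into the one shared physical device.
        self.backing.transmit(self.server_id, data)

class IOVirtualizingSwitch:
    """The shared switch sitting at the end of the extended PCIe bus."""
    def __init__(self, shared_nic: PhysicalAdapter):
        self.shared_nic = shared_nic

    def attach(self, server_id: str) -> VirtualAdapter:
        return VirtualAdapter(server_id, self.shared_nic)

# Ten servers, one physical NIC between them.
switch = IOVirtualizingSwitch(PhysicalAdapter("shared 10GbE NIC"))
adapters = {f"server-{n}": switch.attach(f"server-{n}") for n in range(10)}
adapters["server-3"].transmit(b"hello from a server with no NIC of its own")
```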

So, say, ten servers could share one NIC and one HBA, meaning that nine NICs and nine HBAs could be thrown away, saving power and cost. Put disk drives into this switch and the servers could be diskless as well, with the switch providing shared storage to each server. Such I/O virtualization reduces the number of devices - adapters, HBAs, disks - being virtualized and uses them more efficiently.

In effect there would be a PCIe cloud linking the servers and the virtualized I/O switch, with Ethernet, Fibre Channel and maybe InfiniBand links off it to other resources in the data centre, such as storage and networking. The PCIe cloud is not a data centre fabric. It's not setting out to become a backbone fabric that replaces everything else. But it still radically simplifies data centre I/O at the server end. A rack of 20 or more servers would no longer need 20 or more separate NICs and HBAs and what have you. There would be a rack PCIe cloud, with every server in the rack sharing the NICs, HBAs and InfiniBand adapters plugged into it.

Ditto a rack of blades, which would have its own PCIe cloud. The data centre backbone could still be data centre-class Ethernet or InfiniBand, but it would need far fewer adapters: one per server rack or blade rack PCIe cloud.

The cost implications seem pretty good. Let's say a data centre has five server racks with 20 NICs and 20 HBAs per rack. It could move to, let's be a bit cautious, two shared NICs and two shared HBAs per rack and so throw away 90 NICs and 90 HBAs. Each server rack or blade rack could also share a pool of directly-attached storage (DAS) so that the number of disk drives could be reduced as well.
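
The back-of-the-envelope sums are easy to check: five racks of 20 NICs and 20 HBAs is 100 of each; dropping to two shared NICs and two shared HBAs per rack leaves 10 of each, so 90 of each go in the bin. A throwaway Python snippet, using made-up per-adapter prices purely for illustration, shows how quickly the numbers add up.

```python
# Back-of-the-envelope adapter savings; unit prices are invented for illustration only.
racks = 5
nics_per_rack_before, hbas_per_rack_before = 20, 20
nics_per_rack_after, hbas_per_rack_after = 2, 2

nics_saved = racks * (nics_per_rack_before - nics_per_rack_after)   # 90
hbas_saved = racks * (hbas_per_rack_before - hbas_per_rack_after)   # 90

nic_price, hba_price = 300, 1000  # hypothetical cost per adapter
print(f"NICs removed: {nics_saved}, HBAs removed: {hbas_saved}")
print(f"Hardware saved: {nics_saved * nic_price + hbas_saved * hba_price:,}")
```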

What that says to me is that the cost of a rack-full of server resource would be much less and it would need less power too.

The suppliers that would not be keen on this would be the network adapter vendors who rely on NIC and HBA volumes to drive their revenues.

The VirtenSys website carries a couple of interesting white papers and a set of press releases detailing its progress since it started up in December 2005. But it has no track record, no product and no customers; a dark horse indeed.

Remind yourself of the big picture: servers are getting virtualized to be more efficient and save cost. Networking is getting virtualized to become more efficient and save cost. Storage is becoming virtualized to... you can fill in the blanks yourself.

So the concept of extending the PCIe bus outside servers and into a kind of cloud, so that server I/O can be virtualized to become more efficient and save cost, looks like an idea whose time might be about to come. ®
