Careful with 'fibre speed record' hype: which record's been broken?

91 Gbps seems fast, but isn't much of a record

Last January, the US Department of Energy's Energy Sciences Network (ESnet) ran a test that achieved 91 Gbps end-to-end data transfers.

The test made belated news this week in another outlet, which touted ESnet as a "shadow network" faster than Google Fiber. Leaving aside the inappropriateness of that hype, it got El Reg thinking about the perennial story of communications speed records.

In just about every case – right now, we'll stick to stories about fibre network links – you can't claim a speed record, and certainly shouldn't proclaim someone else's record, if you don't understand the context.

ESnet's test was hyped as "the fastest of its type ever reported". That's true. But "of its type" is the key phrase.

To understand why, let's look at other records – along with their context.

  • Fastest-ever transfer over fibre of any kind – this still seems to belong to Bell Labs, even though the result is five years old. In 2009, Bell boffins achieved a transmission described as 100 petabit per second.kilometres (Pbps.km).

    However – the company didn't actually ship 100 Pbps of anything. Bell Labs strapped together 155 lasers, each with a specified maximum transfer speed of 100 Gbps. That aggregated to a 15.5 Tbps link over 7,000 km of fibre. The "100 Pbps.km" figure is a mathematical artefact: 15.5 Tbps multiplied by 7,000 km.

    Still, at 15.5 Tbps, Bell Labs' effort puts the 100 Gbps "shadow Internet" quite in the shade.

  • In 2012, Deutsche Telekom demonstrated a transmission between Hanover and Berlin running at 512 Gbps which, once adjusted down for bit errors, delivered 400 Gbps of usable bandwidth on unmodified, existing carrier-grade fibre. The "old fibre" angle was the story.
  • Earlier this year, BT connected BT Tower to Adastral Park with a 1.4 Tbps link. Again, the fibre rather than the speed was what was different.
  • In 2011, a group from Caltech and the University of Victoria pushed 186 Gbps between the University of Victoria and the Super Computing 2011 show floor in Seattle, Washington, a distance of 217 km. They used the BCNET and CANARIE research networks.

    The necessary context here is directly relevant to ESnet's demo: the Caltech group stipulated that its bidirectional 186 Gbps was a record for memory-to-memory transfers, not an absolute fibre speed record.

After all, carriers routinely carry more than 100 Gbps between locations. Terabit networks are easy, if you lay enough fibres.
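The "mathematical artefact" behind the Bell Labs headline figure is a single multiplication, easy to check for yourself. A quick sketch using the numbers above (the rounding down to "100" is the press release's, not ours):

```python
# Back-of-the-envelope check of the Bell Labs "Pbps.km" figure,
# using the numbers quoted in the article.
lasers = 155
per_laser_bps = 100e9                      # 100 Gbps per laser
distance_km = 7000

aggregate_bps = lasers * per_laser_bps     # what the link actually carried
artefact = aggregate_bps * distance_km     # the capacity-times-distance product

print(f"{aggregate_bps / 1e12:.1f} Tbps")  # 15.5 Tbps -- the real link speed
print(f"{artefact / 1e15:.1f} Pbps.km")    # 108.5 Pbps.km -- the headline "100"
```

Capacity-times-distance is a legitimate research metric for comparing long-haul systems, but it is not a speed anyone's data ever travelled at.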

So what record has ESnet actually set?

The link itself wasn't the news (nor was it new, really: the test was announced in January). The news was what was going on at either end of the link.

As ESnet explains, the "special sauce" was that the researchers "achieved a record single host pair network data transfer rate of over 91 Gbps for a disk-to-disk file transfer".

Quoting again from ESnet:

“To achieve 91+ Gbps disk-to-disk network data transfer rate between a single pair of high performance RAID servers, this demo required a number of techniques working in concert to avoid any bottlenecks in the end-to-end transfer process. This required parallelisation using multiple CPU cores, RAID controllers, 40G NICs, and network data streams; a buffered pipelined approach to each data stream, with sufficient buffering at each point in the pipeline to prevent data stalls, including application, disk I/O, network socket, NIC, and network switch buffering; a completely clean end-to-end 100G network path (provided by ESnet and MAX) to prevent TCP retransmissions; synchronisation of CPU affinities for the application process and the disk and network NIC interrupts; and a suitable Linux kernel.”
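The "buffered pipelined approach" ESnet describes boils down to putting a bounded buffer between each stage, so no stage stalls while data is available. A toy single-machine illustration with threads and a queue – the stage names and sizes are ours, not ESnet's:

```python
import queue
import threading

BUFFER = 8                             # illustrative buffer depth
buf = queue.Queue(maxsize=BUFFER)      # bounded queue between two stages
SENTINEL = None
received = []

def disk_reader():
    """Stand-in for the disk I/O stage: produce blocks into the buffer."""
    for block in range(32):
        buf.put(block)                 # blocks only if the buffer is full
    buf.put(SENTINEL)                  # signal end of stream

def net_sender():
    """Stand-in for the network stage: drain the buffer as fast as possible."""
    while True:
        block = buf.get()              # blocks only if the buffer is empty
        if block is SENTINEL:
            break
        received.append(block)         # stand-in for a socket send

t1 = threading.Thread(target=disk_reader)
t2 = threading.Thread(target=net_sender)
t1.start(); t2.start()
t1.join(); t2.join()
print(len(received))                   # 32 blocks, in order
```

The real demo applies the same idea at every hop – application, disk I/O, socket, NIC and switch – with buffers sized so that the 91 Gbps stream never has to wait.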

All of which is very impressive – but considered as a network link alone, 91 Gbps isn't so hot. It's all the other work that makes the jaw drop. The Caltech–University of Victoria test's disk-to-disk transfer rate, for example, managed just 60 Gbps.
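One item in ESnet's list – "synchronisation of CPU affinities" – is easy to illustrate. A minimal Linux sketch, which assumes nothing about ESnet's actual core layout and simply pins the process to the first two cores it may use:

```python
import os

# Pin this process to (at most) two of the cores it is allowed to run on --
# a stand-in for ESnet's trick of keeping the transfer application and the
# NIC/disk interrupt handling on fixed CPUs so they never migrate mid-stream.
# Which cores to pick in practice depends on the machine's NUMA layout.
available = sorted(os.sched_getaffinity(0))   # 0 = the calling process
pinned = set(available[:2])

os.sched_setaffinity(0, pinned)               # Linux-only system call
print(os.sched_getaffinity(0))                # the mask the kernel accepted
```

In the real demo the NIC and RAID-controller interrupts would be pinned too (via `/proc/irq/*/smp_affinity`), so application, disk and network all stay on cores close to the hardware they service.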

Our point is that a fibre speed record needs more context than "how many DVDs in how many seconds". You have to understand what previous record the researcher has in the cross-hairs.

Another point The Register wishes to emphasise: the existence of dark fibre links is not some kind of "shadow Internet". It's quite normal – surprising only if you never knew such links existed.

If (to pick an example) Google decides it needs a hot-backup disaster recovery site a few km away from a facility in a major city, do you suppose it asks Comcast for 100 Gbps in megabit chunks? Or does it talk to a contractor and a carrier, and buy a brand-new fibre all its own? ®
