
Can a new TCP scheme give wireless a 16-fold boost?

Understanding MIT’s latest save-the-wireless-world technology

A group of MIT researchers is touting a change to TCP – the transmission control protocol – that it says can yield sixteenfold or better improvements in throughput on lossy networks.

The claim, made by Muriel Médard’s Network Coding and Reliable Communications group at MIT, has been published in Technology Review. In that article, startling performance improvements are claimed: on a network with 2 percent packet loss, user throughput is said to rise from 1 Mbps to 16 Mbps; on a network with 5 percent packet loss, “the method boosted bandwidth from 0.5 megabits per second to 13.5 megabits per second”, the article states.

“Network coding” at the TCP layer isn’t a new idea: a number of techniques already exist. The problem is that TCP isn’t particularly wireless-friendly, because it treats random packet loss as a signal of a congested network. That, in turn, invokes the protocol’s rate-control mechanisms, with the sender going through this logic: “A packet was not acknowledged, therefore the packet was lost, therefore the network is congested, therefore I should slow down until I am receiving an ACK for each packet”.
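For readers who want the mechanics spelled out, the loss-means-congestion heuristic looks roughly like the following Python sketch. This is our own simplification of Reno-style behaviour, not code from any real TCP stack:

```python
# Our simplified sketch of Reno-style congestion control, not any real
# TCP implementation: every unacknowledged packet is treated as a
# congestion signal, so the congestion window is cut even when the loss
# was only a momentary radio fade.

def next_cwnd(cwnd: float, packet_acked: bool) -> float:
    """Return the new congestion window, in packets."""
    if packet_acked:
        # Additive increase: grow by roughly one packet per round trip.
        return cwnd + 1.0 / cwnd
    # Multiplicative decrease: the loss is assumed to mean congestion,
    # whether or not the network was actually congested.
    return max(cwnd / 2.0, 1.0)
```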

This is great if the packets were lost because of congestion – but not if they were merely lost because someone carried their tablet computer into a bad spot for a moment.

To beat this, network coding proposes adding what amounts to a kind of forward error correction to the kit. Since processing power is cheap, coding schemes propose that the transmitter buffer several packets, encode them together, and send them as a single transmission. In this way, a single ACK from the receiver serves for more than one of the original packets, reducing the chance of triggering TCP’s rate control.
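As a rough illustration of the general idea – our own toy example, not Dr Médard’s scheme, whose details are in the papers discussed below – a random-linear-coding encoder over GF(2) might look something like this:

```python
# Toy illustration only (not the MIT scheme): buffer a block of packets
# and transmit random linear combinations of them over GF(2), so the
# receiver can recover the block from any k linearly independent coded
# packets, even if some individual transmissions are lost.

import random

def encode_block(packets: list[bytes], n_coded: int) -> list[tuple[list[int], bytes]]:
    """Emit n_coded coded packets, each the XOR of a random subset of the block."""
    k = len(packets)
    length = max(len(p) for p in packets)
    padded = [p.ljust(length, b"\x00") for p in packets]
    coded = []
    for _ in range(n_coded):
        # Coefficient vector over GF(2): which originals are XORed in.
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[random.randrange(k)] = 1  # skip the useless all-zero combination
        payload = bytes(length)
        for c, p in zip(coeffs, padded):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

# Any k linearly independent coded packets let the receiver recover the
# originals by Gaussian elimination, so a lost transmission need not
# trigger a retransmission or a rate cut.
```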

This goes back at least as far as 2000, when the IEEE published a paper called Network Information Flow by Ahlswede et al.

However, the MIT work claims performance far in advance of previous schemes.

In support of these claims, the Technology Review editor pointed to two papers by Dr Médard: Modeling Network Coded TCP Throughput, which was prepared for a conference last year, and Trade-off between cost and goodput in wireless: Replacing transmitters with coding, which is available on arXiv.

Is it too good to be true?

There are, however, two questions the Technology Review article leaves to one side.

The first is this: can 2 percent packet loss really degrade a network’s performance by nearly 95 percent?

On this, The Register suspects the answer may be “yes” – in some circumstances. WAN acceleration companies, for example, use similar estimates of the impact of packet loss on throughput. This F5 white paper serves as an example (see Figure 2 on page 6).

Note, however, that in this example, the author states that the apparent 90 percent-plus degradation results from a combination of packet loss and latency.
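For a rough sense of the arithmetic, the well-known Mathis et al. approximation relates steady-state TCP throughput to loss rate and round-trip time. The sketch below uses our own illustrative MSS and RTT figures, not numbers taken from MIT or F5:

```python
# Back-of-the-envelope only, using the Mathis et al. approximation:
# steady-state TCP throughput is roughly MSS / (RTT * sqrt(loss_rate)).
# The MSS and RTT below are our own assumptions, not figures from the
# F5 paper or the MIT work.

from math import sqrt

def tcp_throughput_mbps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate achievable TCP throughput in megabits per second."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate)) / 1e6

# Example: 1460-byte MSS, 100 ms round trip.
for p in (0.0001, 0.02, 0.05):
    print(f"loss {p:.2%}: ~{tcp_throughput_mbps(1460, 0.1, p):.1f} Mbps")
```

On those assumptions the formula gives roughly 0.8 Mbps at 2 percent loss and 0.5 Mbps at 5 percent – in the same ballpark as the “before” figures quoted in the Technology Review article – so a 90-plus percent degradation is not implausible.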

What MIT hasn’t discussed (nor has The Register received a response to an e-mail sent to Dr Médard) is how its proposed coding can recoup nearly all of that 90 percent of lost throughput.

Nor, for that matter, are the real-world test conditions discussed: for example, which wireless standard was in use, and how many competing users were sharing capacity with the test system.

The Register would welcome a more detailed discussion of MIT’s coded TCP approach. ®

Bootnote: Dr Médard has responded with some further technical papers which El Reg is now studying for a follow-up. She has also advised that the MIT approach should be considered "erasure correction" rather than "error correction".
