
Google 'cubists' fix bug in Linux network congestion control, boost performance

It's a wonder the 'net works at all, really

A bit of “quality, non-glamorous engineering” could give a bunch of Linux servers a boost by addressing an unnoticed bug in a congestion control algorithm.

This little code snippet addresses the ten-year-old slip-up in the open-source kernel's net/ipv4/tcp_cubic.c code:

static void bictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event)
{
        if (event == CA_EVENT_TX_START) {
                s32 delta = tcp_time_stamp - tcp_sk(sk)->lsndtime;
                struct bictcp *ca = inet_csk_ca(sk);

                /* We were application limited (idle) for a while.
                 * Shift epoch_start to keep cwnd growth to cubic curve.
                 */
                if (ca->epoch_start && delta > 0)
                        ca->epoch_start += delta;
                return;
        }
}

So what's it all about, Alfie?

The patch was provided by Googlers in the Chocolate Factory's transport networking team, with contributions from Jana Iyengar, Neal Cardwell, and others.

It fixes an old flaw in a set of routines called TCP CUBIC, designed to address the “slow response of TCP in fast long-distance networks”, according to its creators.

Like any congestion control algorithm, TCP CUBIC makes decisions based on congestion reports: if the network becomes jammed with traffic, hosts are told to slow down.

As Mozilla developer Patrick McManus explains here, the bug was simple: TCP CUBIC treats a stretch of time without congestion reports as a sign that it can send data at a faster rate.

That silence could, however, arise merely because the connection was sitting idle, with nothing in flight to generate congestion reports in the first place.

What's supposed to happen in congestion control is that the operating system starts sending data slowly, increases its transmission rate until the network says “that's enough”, and then backs off.
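For the curious, here is a minimal userspace sketch of that pattern, classic additive-increase/multiplicative-decrease congestion avoidance. It is an illustration only, not kernel code: the function names and the toy congestion window are invented for the example.

#include <stdio.h>

/* Toy model of congestion control: grow the congestion window (cwnd)
 * while the network stays quiet, halve it when a congestion signal
 * such as packet loss arrives. Illustration only -- not kernel code.
 */
static unsigned int cwnd = 10;      /* congestion window, in packets */

static void on_ack(void)            /* data got through cleanly */
{
        cwnd += 1;                  /* additive increase: probe for more bandwidth */
}

static void on_congestion(void)     /* the network said "that's enough" */
{
        cwnd = cwnd > 1 ? cwnd / 2 : 1;   /* multiplicative decrease: back off */
}

int main(void)
{
        for (int i = 0; i < 20; i++)
                on_ack();
        printf("cwnd after growth: %u packets\n", cwnd);

        on_congestion();
        printf("cwnd after backoff: %u packets\n", cwnd);
        return 0;
}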

The bug in TCP CUBIC fools the system into thinking it has a clear run at the network and should transmit at the maximum possible rate, crashing into other traffic and ruining performance and efficiency.

“The end result is that applications that oscillate between transmitting lots of data and then laying quiescent for a bit before returning to high rates of sending will transmit way too fast when returning to the sending state,” McManus explained.
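To see why the burst is so dramatic, look at CUBIC's growth curve. Roughly speaking, and using the constants from the CUBIC specification (RFC 8312) rather than the kernel's fixed-point code, the window t seconds after a growth epoch begins is W(t) = C*(t - K)^3 + W_max, where W_max is the window size at the last congestion event and K is the time needed to climb back up to it. The sketch below shows what happens when a long idle spell is wrongly counted into t, which is what the unpatched code did by leaving epoch_start untouched:

#include <math.h>
#include <stdio.h>

/* Illustrative sketch of CUBIC's window growth curve, with constants from
 * RFC 8312. Not the kernel's implementation; the numbers are examples only.
 */
#define CUBIC_C    0.4          /* aggressiveness constant */
#define CUBIC_BETA 0.7          /* window reduction factor after loss */

static double cubic_window(double t, double w_max)
{
        /* K: time, in seconds, to grow back to w_max after a loss */
        double k = cbrt(w_max * (1.0 - CUBIC_BETA) / CUBIC_C);

        return CUBIC_C * pow(t - k, 3.0) + w_max;
}

int main(void)
{
        double w_max = 100.0;   /* window at the last congestion event, in packets */

        /* One second into the epoch: the window is still near w_max. */
        printf("t = 1s:  cwnd ~ %.0f packets\n", cubic_window(1.0, w_max));

        /* Count a 60-second idle spell into t and the cubic term explodes,
         * so the sender bursts far beyond anything the path has ever carried. */
        printf("t = 61s: cwnd ~ %.0f packets\n", cubic_window(61.0, w_max));
        return 0;
}

In this toy example, treating one idle minute as a minute of loss-free transmission turns a window of roughly a hundred packets into one of tens of thousands, which is exactly the kind of burst McManus describes.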

That pattern could be quite common, he notes. A server might send a short burst of data over HTTP containing a web form for someone to fill out, go quiet while it waits for the user's answer, conclude in the meantime that there's no congestion, and then burst out of the blocks at top rate when the response finally arrives.

“A far more dangerous class of triggers is likely to be the various HTTP based adaptive streaming media formats where a series of chunks of media are transferred over time on the same HTTP channel”, McManus added.

That's why a fix for the ancient bug could be important: Linux is used in many media servers, and for the last decade an important chunk of congestion control hasn't been working quite right. The patch makes the kernel behave a little more sensibly after an idle period, shifting the start of the current growth epoch forward by the length of the quiet spell so that idle time no longer counts as evidence the network is clear.

A more technical description is included with the bug fix. ®
