Original URL: https://www.theregister.com/2007/12/13/bennett_eff_neutrality_analysis/

Dismantling a Religion: The EFF's Faith-Based Internet

An Expert View

By Richard Bennett

Posted in Channel, 13th December 2007 17:05 GMT

The Electronic Frontier Foundation likes to portray the internet as under attack. But the activist group is doing more to imperil its future than any of its favourite targets.

The latest salvo in the utopians' war is a report on Comcast's traffic management policies. It's an amazingly conflicted piece of work, bristling with fierce language (the term "forgery" is used 33 times in ten pages), but very light on substance.

At least the authors - attorney Fred von Lohmann, copyright specialist Peter Eckersley, and computer guy Seth Schoen - concede that Comcast has a legitimate interest in controlling bandwidth hogs.

"It is true that some broadband users send and receive a lot more traffic than others, and that interfering with their traffic can reduce congestion for an ISP," they write. Which leaves them, ultimately, only quibbling over the methods the cable giant uses.

Their complaint amounts to a laundry list of suggested alternative mechanisms for dealing with congestion, all of which are either unworkable or only trivially different from the "Reset Spoofing" technique Comcast uses.

(Reset spoofing merely rations the number of BitTorrent seeding sessions a user can offer to the internet at a given time. It doesn't affect BitTorrent downloads, and in fact improves them for most users.)
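For readers who want the mechanics: a forged reset is nothing exotic, just an ordinary TCP segment with the RST flag set and addressing copied from the connection being rationed. Here is a sketch using the scapy packet library - purely illustrative, not Comcast's implementation, with every address, port, and sequence number invented:

    # Illustrative only: how a middlebox tears down a TCP session by
    # injecting RST segments that appear to come from the endpoints.
    # All addresses, ports, and sequence numbers are hypothetical, and
    # sending raw packets requires elevated privileges.
    from scapy.all import IP, TCP, send

    def spoof_reset(src_ip, dst_ip, sport, dport, seq):
        """Send a forged RST to dst_ip as if it came from src_ip."""
        rst = IP(src=src_ip, dst=dst_ip) / TCP(
            sport=sport, dport=dport,
            flags="R",  # RST: tells the receiver to abort the connection
            seq=seq,    # must fall in the receiver's window to be honoured
        )
        send(rst, verbose=False)

    # Tear down one observed seeding session, in both directions.
    spoof_reset("10.0.0.5", "192.0.2.7", 51413, 49152, seq=1234567)
    spoof_reset("192.0.2.7", "10.0.0.5", 49152, 51413, seq=7654321)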

Among the EFF's suggestions we find:

[Comcast] can set a limit on the amount of data per second that any user can transmit on the network. They can also set these limits on a dynamic basis, so that (1) the limits are gradually relaxed as the network becomes less congested and vice-versa and (2) so that the limits primarily slow the traffic of users who are downloading large to very large files that take minutes to transfer.
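The scheme the EFF has in mind is easy enough to sketch in software: a token bucket whose fill rate relaxes as congestion eases. The toy version below (all parameters invented) would be fine on equipment that could run it; the trouble, as we'll see, is that the upstream side of a cable plant can't:

    import time

    class DynamicTokenBucket:
        """Per-user rate limiter of the sort the EFF proposes: the
        permitted rate scales down as congestion rises. A toy sketch -
        nothing like this can be pushed to a running DOCSIS 1.1 modem."""

        def __init__(self, base_rate_bps, burst_bytes):
            self.rate = base_rate_bps / 8.0   # bytes per second
            self.burst = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_len, congestion):
            """congestion in [0, 1]: 0 = idle network, 1 = saturated."""
            now = time.monotonic()
            rate = self.rate * (1.0 - 0.9 * congestion)  # relax when idle
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * rate)
            self.last = now
            if self.tokens >= packet_len:
                self.tokens -= packet_len
                return True     # forward the packet
            return False        # queue or drop it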

In this suggestion the EFF confuses upload and download issues, assuming that cable modem (DOCSIS) networks have the same capabilities for managing upstream flows that they have for downstream ones - a serious error.

DOCSIS networks are grafted onto systems that were built to deliver analog television programs. They employ separate frequency channels for upstream and downstream traffic, and manage them very differently. In the downstream direction, where the cable company's CMTS controller is the only transmitter, traffic can indeed be managed dynamically and usage-sensitive limits applied. This is the cable company's equipment and they can manage it as they see fit. Upstream traffic is completely different, however: it comes from multiple transmitters, using modems the customers may either own outright or lease from the cable company.

The multiple transmitter problem is thorny. While computers operating on other shared-cable systems such as co-ax Ethernet could see whether anyone else was transmitting before jumping on the cable, DOCSIS transmitters are unable to do so because of the separation of transmit and receive channels. The best they can do is wait for a time synchronisation message, take a random guess, and pray that their message (initially a request for bandwidth to the CMTS) will be transmitted successfully. If their prayer is answered, they're given a reserved time slot and everybody's happy. If their request for bandwidth collides with another computer's request for bandwidth, nothing happens and both have to try again, after a suitable delay.
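A toy simulation shows how badly this contention scheme degrades as transmitters multiply. It's a simplification of DOCSIS (real modems use a backoff window the CMTS advertises), and the slot and modem counts are invented:

    import random

    def contention_round(n_modems, n_slots):
        """One DOCSIS-style contention round: each modem wanting to send
        picks a request slot at random; a slot chosen by exactly one
        modem succeeds, a slot chosen by two or more is a collision."""
        picks = [random.randrange(n_slots) for _ in range(n_modems)]
        return sum(1 for slot in set(picks) if picks.count(slot) == 1)

    random.seed(1)
    for modems in (2, 8, 32):
        won = sum(contention_round(modems, 8) for _ in range(1000)) / 1000
        print(f"{modems:2d} modems contending: ~{won:.1f} requests granted per round")

With eight request slots, two modems get nearly all their requests through; at 32 modems, collisions swallow almost everything.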

The issue that destabilises cable modem networks is not strictly related to bandwidth: every upstream packet has to win its own trip through the request-and-grant cycle just described, so a lot of short packets are worse for the network than a smaller number of large packets consuming more bandwidth.
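The arithmetic tells the story. At the same bit rate, shrinking the packets multiplies the number of upstream transmissions the network has to arbitrate (the figures are illustrative):

    # Same bit rate, wildly different packet counts (illustrative figures).
    rate_bps = 384_000                 # a typical upstream cap
    for size in (64, 1500):            # bytes per packet
        pps = rate_bps / 8 / size
        print(f"{size:4d}-byte packets: {pps:5.0f} upstream transmissions per second")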

That's why the EFF's suggestion about dynamic bandwidth caps, even if it were possible to implement, wouldn't solve the problem. But it's not possible to implement in any case: DOCSIS 1.1 cable modems accept a hard bandwidth limit when they boot up and register with the network, and that limit remains in place until the next reboot. The limit has to be set reasonably high (384 kbit/s) in order to provide good performance for the short bursts of traffic that are characteristic of web browsing and gaming. It should probably be supplemented by more sophisticated controls, and someday it will be.

But for now, DOCSIS is what it is and does what it does, and no amount of screaming "forgery" is going to change it. Besides, the customers who've purchased their own DOCSIS modems shouldn't be treated as badly as the people who bought last year's Mac.

The Cost of Technical Illiteracy

Comcast's challenge is to keep its residential network stable and responsive for the majority of its users despite the desire of a few users of peer-to-peer file-sharing software such as BitTorrent to consume unlimited bandwidth.

BitTorrent's basic approach to bandwidth consumption actually conflicts quite strongly with a key assumption of the internet's architects: that the relationship between users and traffic flows is essentially constant. On networks where people browsing the web use four connections in short bursts while BitTorrent users hold 40 or 50 open constantly, that assumption no longer holds.
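The consequence follows directly from TCP's per-flow sharing: each connection on a congested link gets roughly an equal slice, so a user's share grows with the number of connections he opens. A back-of-the-envelope calculation with invented numbers:

    # Per-flow fairness: each TCP connection gets ~1/N of a congested link.
    # Invented scenario: one BitTorrent seeder vs. nine web users.
    bt_flows = 40
    web_users, flows_each = 9, 4

    total = bt_flows + web_users * flows_each          # 76 flows in all
    print(f"BitTorrent user: {bt_flows / total:.0%} of the link")   # ~53%
    print(f"Each web user:   {flows_each / total:.0%} of the link") # ~5%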

In contrast to the EFF, serious network people are exploring ways to extend the internet's traditional traffic management methods - packet dropping and slow-start - into the new reality where fairness and congestion have to be managed together.
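Those traditional methods operate one flow at a time: each TCP connection ramps its sending window up until a dropped packet signals congestion, then halves it and probes again. A simplified sketch of that control loop, with the window counted in segments and loss events placed arbitrarily:

    def cwnd_trace(rtts, ssthresh=16.0, losses=frozenset({12, 30})):
        """Simplified TCP congestion control: slow-start doubles the
        window each round trip up to ssthresh, additive increase adds
        one segment per round trip after that, and a loss halves the
        window (multiplicative decrease)."""
        cwnd, trace = 1.0, []
        for rtt in range(rtts):
            trace.append(cwnd)
            if rtt in losses:
                ssthresh = max(cwnd / 2, 2.0)
                cwnd = ssthresh              # multiplicative decrease
            elif cwnd < ssthresh:
                cwnd *= 2                    # slow-start
            else:
                cwnd += 1                    # congestion avoidance
        return trace

    print(cwnd_trace(40))

Note that the loop knows nothing about who owns the flow: open 40 of them and you get 40 independent probes for bandwidth.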

Bob Briscoe of BT and UCL presented a paper to the IETF (the internet's standards body) in March: Flow Rate Fairness: Dismantling a Religion [PDF], which attacks the problem head-on. In Briscoe's abstract we see that the problem afflicting the EFF has its roots in the internet design community's received wisdom:

Resource allocation and accountability keep reappearing on every list of requirements for the Internet architecture. The reason we never resolve these issues is a broken idea of what the problem is. The applied research and standards communities are using completely unrealistic and impractical fairness criteria. The resulting mechanisms don't even allocate the right thing and they don't allocate it between the right entities. We explain as bluntly as we can that thinking about fairness mechanisms like TCP in terms of sharing out flow rates has no intellectual heritage from any concept of fairness in philosophy or social science, or indeed real life. Comparing flow rates should never again be used for claims of fairness in production networks. Instead, we should judge fairness mechanisms on how they share out the 'cost' of each user's actions on others.

In other words, the internet's traditional method of ensuring fairness doesn't work any more - not for Comcast, not for BT, not for any network that hosts peer-to-peer file-sharing applications designed to grab all the bandwidth they can get. Internet routers can randomly drop packets all the way to the Restaurant at the End of the Universe, and peer-to-peer users will still consume most of the bandwidth on the internet's first and last hops.
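Briscoe's alternative is to stop comparing flow rates and start metering congestion-volume: roughly, the bytes a user sends weighted by the congestion level he sends them into, which approximates the cost his traffic imposes on everyone else. A crude sketch of the accounting, with invented traffic figures:

    # Flow-rate fairness vs. Briscoe-style cost fairness (invented figures).
    # congestion-volume ~= bytes sent x loss rate experienced while sending.
    users = {
        #                (flows, bytes sent at peak, loss rate seen)
        "web browser":   (4,     5_000_000,           0.01),
        "p2p seeder":    (40,    500_000_000,         0.01),
    }

    for name, (flows, sent, loss) in users.items():
        cost = sent * loss   # bytes of congestion the user inflicts
        print(f"{name:12s}: {flows:2d} flows, congestion-volume {cost/1e6:6.2f} MB")

By that yardstick the seeder imposes a hundred times the browser's cost on his neighbours, whatever their instantaneous flow rates happen to be.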

The EFF's quibble with Comcast is therefore bankrupt. Home network providers have to provide some measure of fair access to each user they serve, and they can only do so with mechanisms that actually produce a result. The internet's traffic toolkit is nearly barren, so it's no wonder that Comcast and its peers would use mechanisms such as Reset Spoofing to accomplish an end that all rational people agree is worthwhile.

Truth or dare?

So why does the EFF complain? They're aware that file-sharing is troublesome for cable networks, but remain fully committed to the religious view that the internet's protocols were born fully-formed and inviolate in the mind of a virgin engineer in Bethlehem some 40 years ago, IETF discussions to the contrary notwithstanding.

Like many advocacy groups dealing with technical subjects, the EFF holds the view that technologies are meant to liberate the human spirit from the chains of exploitation, and hence is bewildered by the sight of people using the internet for such mundane purposes as downloading porn, bullying, and stealing music.

So it manufactures a fake crisis of network management to avoid the truth about the inanities of the internet. Problem solved. ®

Richard Bennett is a network architect and occasional activist in Silicon Valley. He wrote the first standard for Ethernet over twisted-pair wiring and contributed to the standards for WiFi and Ultra-Wideband wireless networks. His eleven-year-old blog is at bennett.com.