Original URL: https://www.theregister.com/2006/07/17/net_neut_slow_death/

How 'Saving The Net' may kill it

The engineer's case against Net Neutrality

By Andrew Orlowski

Posted in Legal, 17th July 2006 17:49 GMT

Interview If you've followed the occasionally surreal and often hysterical debate around 'Net Neutrality' on US blogs and discussion forums, you may have encountered Richard Bennett. The veteran engineer played a role in the design of the internet we use today, and helped shape Wi-Fi. He's also been blogging for a decade. And he doesn't suffer fools gladly.

Bennett argues that the measures proposed to 'save' the internet, many of them sincerely advocated, could hasten its demise. Network congestion is familiar to anyone who has ever left a BitTorrent client running at home, and it's the popularity of such new applications that makes better network management an imperative if we expect VoIP to work well. The problem, he says, is that many of the drafts proposed to ensure 'Net Neutrality' would prohibit such network management, and leave VoIP and video struggling.

We invited him to explain, from a historical perspective.

Q: You say the internet is breaking down. Why?

A: Remember that the internet isn't the network - it's a means of interconnecting networks - historically Ethernets, but now WiFi and WiMax and others as well.

It was the fashion in network design at the time to distribute functionality to the edge of the network. Ethernet was designed like this - the network was just a cable. Control lived in the conversion layer, in the transceivers at each point where a system tapped into the cable: these handled transmission, listened for collisions, and if one occurred, backed off and retried the transmission. It was a completely distributed system, and TCP/IP was based on this.

Primarily, TCP/IP was a way of connecting Ethernets, so the assumption was that it would be running over Ethernet, and it was optimized for the Ethernet case, with the expectation that it would generalize. The primary problem protocol designers faced at the time was ensuring that a fast server didn't overrun a slow client. The TCP windowing mechanism was a way of solving that problem, so the client didn't get overrun.
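
To make the windowing idea concrete: the receiver advertises how much data it can buffer, and the sender never lets more than that amount go unacknowledged. A toy Python sketch of the arithmetic (illustrative only, not a real TCP stack):

    def simulate_transfer(total_bytes: int, window: int) -> int:
        """Round trips needed when at most `window` bytes may be unacknowledged.

        The sender's speed is irrelevant: the receiver's advertised window,
        drained one round trip at a time, sets the pace of the transfer.
        """
        acked, rounds = 0, 0
        while acked < total_bytes:
            acked += min(window, total_bytes - acked)  # one window per RTT
            rounds += 1
        return rounds

    # A 1 MB transfer: halving the slow client's window doubles the rounds.
    print(simulate_transfer(1_000_000, 65_536))  # 16 round trips
    print(simulate_transfer(1_000_000, 32_768))  # 31 round trips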

So on January 1, 1983, when TCP/IP was deployed, it all worked fine. At first the net was used primarily for email. Then FTP sessions multiplied, and it began to melt down.

So people were writing a lot of papers in mid-1984 about what was then called "congestion collapse". Some of the design features of TCP windowing actually made congestion worse, so protocol engineers went to work. They made enhancements to TCP such as exponential backoff - another thing stolen directly from old Ethernet - and slow start, where the initial window size is kept small. They re-engineered TCP to solve IP's congestion problem.
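
The two fixes can be sketched in a few lines. The numbers below are illustrative, but the shapes are the point: slow start probes the path by doubling the window each round trip, and exponential backoff doubles the wait after each failed attempt:

    def slow_start(path_capacity: int, cwnd: int = 1):
        """Yield the congestion window per round trip: start small and double
        each RTT, rather than opening with a full window."""
        while cwnd < path_capacity:
            yield cwnd
            cwnd *= 2        # one extra segment per ACK doubles cwnd per RTT
        yield path_capacity

    def backoff(attempt: int, base: float = 1.0, cap: float = 64.0) -> float:
        """Exponential backoff, the idea borrowed from Ethernet: each failed
        attempt doubles the retransmission timer, up to a cap."""
        return min(cap, base * 2 ** attempt)

    print(list(slow_start(32)))               # [1, 2, 4, 8, 16, 32]
    print([backoff(a) for a in range(4)])     # [1.0, 2.0, 4.0, 8.0]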

Today, the internet is only stable to the extent that people are using TCP over it. People also tend to miss that you can defeat TCP's attempt to keep traffic below the point of backbone congestion simply by running multiple instances of TCP.

So this congestion management is based on TCP controlling packet traffic, but it depends on TCP being used in a very gentlemanly fashion.

But running BitTorrent is not nearly as gentlemanly. When it's delayed, it spins up more and more connections and tries harder to push more traffic. And remember that non-TCP applications, those built on UDP, have no congestion management at all. UDP is a stateless protocol, and VoIP and IPTV use RTP, which runs over UDP.
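
The arithmetic behind that point is simple. TCP converges on a roughly equal share per connection, not per user, so an application that opens many connections takes a correspondingly larger slice of a bottleneck link. A back-of-envelope sketch with made-up numbers:

    def share_of_link(my_connections: int, other_connections: int) -> float:
        """Approximate long-run fraction of a bottleneck link, assuming every
        flow is a well-behaved TCP connection converging to an equal share."""
        return my_connections / (my_connections + other_connections)

    # One web user sharing a link with a BitTorrent client on 40 peers:
    print(f"web user:  {share_of_link(1, 40):.1%}")   # ~2.4% of the link
    print(f"torrenter: {share_of_link(40, 1):.1%}")   # ~97.6% of the link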

Q: So you object because it cripples network management?

A: On the technical side, my objection to the 'Net Neutrality' bills (Markey, Snowe-Dorgan, Sensenbrenner, Wyden) is the ban on for-fee Quality of Service [QoS]. QoS is a legitimate service offering, especially in the era of BitTorrent and whatever follows it.

QoS is perfectly permissible under the original architecture of the Internet - IP packets have a Type of Service field - and it's necessary if you want to offer telco-quality voice. The original architecture was flawed in that it didn't have overload protection.
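
That Type of Service field is still a first-class part of the socket API today. A minimal sketch (shown on Linux; the address and port are placeholders): marking packets is one line, though whether any router honors the marking is up to the network operator:

    import socket

    EF = 46 << 2  # DSCP 46, "Expedited Forwarding": the usual marking for voice

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)

    # Every datagram on this socket now carries the marking in its IP header;
    # it requests priority treatment, it doesn't guarantee it.
    sock.sendto(b"voice frame", ("192.0.2.1", 5004))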

TCP was hacked in the late 80s to provide overload protection, but that's the wrong place to put it, because it's easily defeated by running several TCP streams per endpoint. Who does that? Only HTTP and every other new protocol.

Thus, End-to-End is fine for error recovery in file transfer programs, but not so fine for congestion control on the interior links of the Internet. For the latter, we need QoS, MPLS, and address-based quotas.
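
One way to read "address-based quotas" is a token bucket per source address, so that no single endpoint can saturate an interior link. A toy Python sketch with illustrative rate and burst numbers, not any real router's mechanism:

    import time
    from collections import defaultdict

    RATE = 125_000   # bytes of credit added per second (~1 Mbit/s per address)
    BURST = 250_000  # bucket depth: short bursts above the rate are tolerated

    buckets = defaultdict(lambda: {"tokens": BURST, "stamp": time.monotonic()})

    def admit(src_addr: str, packet_len: int) -> bool:
        """Refill this sender's bucket for elapsed time, then spend or refuse."""
        b = buckets[src_addr]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["stamp"]) * RATE)
        b["stamp"] = now
        if b["tokens"] >= packet_len:
            b["tokens"] -= packet_len
            return True   # forward at normal priority
        return False      # over quota: drop, or demote to background

    print(admit("198.51.100.7", 1500))  # True until the address runs dry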

In the name of opening up the Internet for big content, the 'Net Neutrality' bills criminalize good network management and business practices. Why can't we have more than one service class on the Internet? Even Berners-Lee says that's OK, but the bills he claims to support forbid it. Something's not right in this 'net neutrality' movement.

Q: So what fires the 'End to End' utopians?

A: When Ethernet was being designed, people felt that a hub was a control point - so they came up with this decentralized, "democratic" p2p grassroots model.

It seems to be an aesthetic call. People make some connection between the structure of the network and the structure of decision-making in our political system. So when you have hubs and monitors and filters, they're authoritarian. These people are more concerned with the metaphorical value of network architecture than with its actual utility. At some level they've thoroughly convinced themselves that End-to-End really is the best thing from an engineering point of view. But they're not qualified to make that judgment.

The Stevens Bill has now become a consumers' bill of rights, enshrining the four freedoms that [former FCC chief] Michael Powell articulated.

The problem isn't just that packet networks aren't like the political system - they're not really like the switched network either. A lot of 'Net Neutrality' thinking comes from traditional telco regulation - the same common carrier principles that were refined for the telegraph and the early Bell monopoly. But this needs to be rethought.

Certainly packet networks like the internet are becoming a more important part of our society - and they have unique properties compared to circuit-switched networks, just as oxcarts on dirt roads have unique properties and have to make routing decisions!

If we're honest, we don't know how to regulate the internet at a technical level. But we should stop pretending it's a telephone network, and look at how it actually handles packets. The 'net neutrality' lobby says all packets are equal - but that's unsound, and even inconsistent with common carrier law: there's nothing to stop a transport offering different service levels at different prices.

They all seem to be worried that ISPs have a secret plan to sell top rank - to pick a search engine that loads faster than anyone else's. But it's not clear that a) anyone has done that; b) it's technically achievable; c) it's necessarily abusive; or d) their customers would stand for it.

Q: What do you think will happen, assuming Net Neutrality dies?

A: I'd like to see service-level tiering, where your service plan enables you to use four levels of Quality of Service. Packets will have to be tagged, so that BitTorrent moves at background priority, HTTP and mail move at best effort, and the two highest priorities are reserved for voice and video.
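
Mapped onto DiffServ code points, those four classes might look like the sketch below. The code points are the conventional ones for each class; the assignment of applications to classes is illustrative:

    DSCP = {
        "voice":       46,  # EF: lowest latency, for telco-quality calls
        "video":       34,  # AF41: high-priority streaming
        "best_effort":  0,  # the default: http, mail
        "background":   8,  # CS1: bulk transfer such as BitTorrent
    }

    def classify(app: str) -> int:
        """Illustrative application-to-class mapping for packet tagging."""
        if app in {"voip", "sip"}:
            return DSCP["voice"]
        if app in {"iptv", "video_stream"}:
            return DSCP["video"]
        if app in {"bittorrent", "backup"}:
            return DSCP["background"]
        return DSCP["best_effort"]

    print(classify("bittorrent"), classify("http"), classify("voip"))  # 8 0 46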

When we were designing Wi-Fi, some of us envisaged priority levels through differential interpacket gaps. That idea didn't make it into the original standard, but a version was added in 802.11e, based on a more elegant but substantially similar idea (802.11 randomizes the delay before each packet to avoid collisions, and 11e allows the randomization to be constrained by priority). Tests show that 11e supports four times as many voice calls with QoS as without it.
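
A toy model of that 802.11e mechanism: every station draws a random backoff before transmitting, but a higher-priority class draws from a smaller range, so it statistically wins the channel. The ranges below are illustrative stand-ins for EDCA contention windows, not the standard's exact values:

    import random

    CW_MAX = {"voice": 7, "video": 15, "best_effort": 63, "background": 1023}

    def backoff_slots(access_class: str) -> int:
        """Random backoff delay, constrained by priority as in 802.11e."""
        return random.randint(0, CW_MAX[access_class])

    # Voice nearly always draws a shorter wait than background bulk traffic:
    wins = sum(backoff_slots("voice") < backoff_slots("background")
               for _ in range(10_000))
    print(f"voice wins the channel {wins / 100:.1f}% of the time")  # ~99.6%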

Remember that the internet isn't the network - it's a means of interconnecting networks. It has been refined and rewritten before. Now that we have Quality of Service applications and Wi-Fi networks that use QoS, we have to refine it one more time. The Internet should be a faithful servant to the networks it interconnects.

Q: The religious attachment to End-to-End seems to come from non-technical people.

A: Engineers are very practical. If something doesn't work as designed, if the experiment shows results different to the ones they expected, they don't pound sand. They go back and try another approach.

Engineers get paid to make it better.

Q: People only seem to object to a 'two lane' highway until you point out that one slow lane for everyone isn't any better. Who stands to benefit from 'Net Neutrality'?

A: I think Google and Yahoo! have made the calculation that IPTV may be lucrative in the long term, and this would put them at an advantage. Google is building massive server farms to enable them to pump enormous amounts of data onto the Internet. The one in Oregon is so big they had to build it close to a dam to get enough electricity - see Markoff's article in the New York Times.

With net neutrality, whoever generates the most traffic controls the network.

Q: But 'Net Neutrality' is presented as a grassroots lobby.

A: I think most people at the grassroots level are really sincere - they really think they're saving the internet.

Up at the MyDD and DailyKos level, there's a lot of manipulation going on. There, it's really about brand exposure, and fighting the virtuous fight.

The Stevens Bill is too big and complicated; the bulk of it is about video franchising, and that's a very contentious matter. So if you're out to exploit the political process, what better way than to find a big bill that's going to be delayed anyway, and jump on it?

[More views on Net Neutrality tomorrow - ed.]®