
How 'Saving The Net' may kill it

The engineer's case against Net Neutrality

Interview If you've followed the occasionally surreal and often hysterical debate around 'Net Neutrality' on US blogs and discussion forums, you may have encountered Richard Bennett. The veteran engineer played a role in the design of the internet we use today, and helped shape Wi-Fi. He's also been blogging for a decade. And he doesn't suffer fools gladly.

Bennett argues that the measures proposed to 'save' the internet, many of them sincerely intended, could hasten its demise. Network congestion is familiar to anyone who has ever left a BitTorrent client running at home, and it's the popularity of such new applications that makes better network management imperative if we expect VoIP to work well. The problem, he says, is that many of the bills drafted to ensure 'Net Neutrality' would prohibit such network management, and leave VoIP and video struggling.

We invited him to explain, from a historical perspective.

Q: You say the internet is breaking down. Why?

A: Remember that the internet isn't a network - it's a means of interconnecting networks: historically Ethernets, but now WiFi and WiMax and others as well.

It was the fashion in network design at the time to distribute functionality to the edge of the network. Ethernet was designed like this - the network was just a cable. Control lived entirely in the transceivers at each point where a system tapped into the cable: they transmitted, listened for collisions, and if one occurred, backed off and retried the transmission. It was a completely distributed system, and TCP/IP was based on this.
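That control loop is simple enough to sketch. The Python fragment below is purely illustrative - it follows the classic 10 Mbps Ethernet rules for truncated binary exponential backoff, but the channel object and its methods are stand-ins, not a real driver interface:

    import random
    import time

    SLOT_TIME = 51.2e-6   # one contention slot on classic 10 Mbps Ethernet
    MAX_ATTEMPTS = 16     # the transceiver gives up after 16 collisions

    def send_frame(frame, channel):
        """Truncated binary exponential backoff, as each station ran it."""
        for attempt in range(1, MAX_ATTEMPTS + 1):
            channel.wait_until_idle()       # carrier sense: listen first
            if channel.transmit(frame):     # assume False means collision
                return True
            # Collision: wait a random number of slots in [0, 2^k - 1],
            # where k is capped at 10 -- the "truncated" part.
            k = min(attempt, 10)
            time.sleep(random.randrange(2 ** k) * SLOT_TIME)
        return False  # excessive collisions; report failure up the stack

There is no central arbiter anywhere: every station runs the same code, and the cable sorts itself out statistically.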

Primarily, TCP/IP was a way of connecting Ethernets, so the assumption was that it would be running over Ethernet, and it was optimized for the Ethernet case - the thinking being that this would generalize. The primary problem protocol designers faced at the time was making sure a fast server didn't overrun a slow client. The TCP windowing mechanism was the way of solving that problem, so the client didn't get overrun.
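The windowing rule amounts to one inequality: never have more unacknowledged bytes in flight than the receiver has advertised room for. A toy sketch, with illustrative names rather than a real TCP stack:

    def segments_to_send(data, sent, acked, mss, rwnd):
        """Flow control: keep unacknowledged bytes in flight within the
        receiver's advertised window (rwnd)."""
        in_flight = sent - acked
        budget = max(0, rwnd - in_flight)   # room the receiver still has
        out, offset = [], sent
        while budget > 0 and offset < len(data):
            seg = data[offset:offset + min(mss, budget)]
            out.append(seg)
            offset += len(seg)
            budget -= len(seg)
        return out

A slow receiver simply advertises a small window, and the fast server throttles itself. Note what this is and isn't: it protects the endpoints from each other, not the network in between.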

So on January 1, 1983, when TCP/IP was deployed, it all worked fine. Primarily the net was used for email. Then there were more and more FTP sessions, and it began to melt down.

So people were writing a lot of papers in mid-1984 about what was then called "congestion collapse". Some of the design features of TCP windowing actually made congestion worse, so protocol engineers went to work. They made enhancements to TCP such as Exponential Backoff - another thing stolen directly from old Ethernet - and Slow Start, where the initial window size is small. They re-engineered TCP to solve IP's congestion problem.
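The shape of those fixes is easy to simulate. In the sketch below (simplified - not the full Tahoe or Reno state machine), the window doubles each round trip during slow start, grows linearly past a threshold, and collapses on loss:

    def simulate_slow_start(rtts, ssthresh=64, loss_at=None):
        """Congestion window (in segments) across round trips."""
        cwnd, history = 1, []          # start small: probe the path gently
        for rtt in range(rtts):
            history.append(cwnd)
            if rtt == loss_at:
                ssthresh = max(cwnd // 2, 2)   # multiplicative decrease
                cwnd = 1                       # Tahoe-style restart
            elif cwnd < ssthresh:
                cwnd *= 2                      # slow start: exponential growth
            else:
                cwnd += 1                      # congestion avoidance: linear
        return history

    print(simulate_slow_start(12, loss_at=7))
    # [1, 2, 4, 8, 16, 32, 64, 65, 1, 2, 4, 8]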

Today, the internet is only stable to the extent that people are using TCP over it. People also tend to miss that you can defeat TCP's attempt to keep traffic below the point of backbone congestion simply by running multiple instances of TCP.

So this congestion management is based on TCP throttling packet traffic, but it depends on TCP being used in a very gentlemanly fashion.

But BitTorrent is not nearly as gentlemanly. When it's delayed, it spins off more and more connections and tries harder to push traffic through. And remember that applications on non-TCP protocols, such as UDP, don't have any congestion management at all. UDP is a stateless protocol, and VoIP and IPTV use RTP, which runs on top of UDP.
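The arithmetic behind the multi-connection trick is trivial. TCP converges on a roughly equal share per connection, not per user, so whoever opens the most connections wins. A back-of-envelope illustration (idealized: real shares also depend on round-trip times and loss patterns):

    def share_of_bottleneck(my_flows, other_flows, capacity_mbps):
        """TCP fairness is per flow, so more flows buy more shares."""
        return capacity_mbps * my_flows / (my_flows + other_flows)

    # A single polite flow against a BitTorrent client running 40,
    # on a 100 Mbps bottleneck:
    print(share_of_bottleneck(1, 40, 100))   # ~2.4 Mbps for the polite host
    print(share_of_bottleneck(40, 1, 100))   # ~97.6 Mbps for the torrent

UDP traffic doesn't even enter this calculation: it takes whatever it sends, and it's everyone else's TCP that backs off around it.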

Q: So you object because it cripples network management?

A: On the technical side, my objection to the 'Net Neutrality' bills (Markey, Snowe-Dorgan, Sensenbrenner, Wyden) is the ban on for-fee Quality of Service [QoS]. QoS is a legitimate service offering, especially in the day of BitTorrent and what's to follow it.

QoS is perfectly permissible under the original architecture of the Internet - IP packets have a Type of Service field - and it's necessary if you want to offer telco-quality voice. The original architecture was flawed in that it didn't have overload protection.
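The hook for this has been in the protocol since the beginning. On most Unix-like systems you can set the Type of Service byte on a socket yourself - whether any network along the path honours the marking is another matter. A minimal sketch; the address and port are placeholders, and DSCP 46 (Expedited Forwarding) is the code point conventionally used for voice:

    import socket

    # Mark a UDP socket's packets with DSCP 46 (Expedited Forwarding).
    # The ToS byte carries the DSCP in its top six bits: 46 << 2 == 0xB8.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)

    # Routers that implement diffserv can now queue these packets ahead
    # of bulk traffic; routers that don't will simply ignore the field.
    sock.sendto(b"voice payload", ("192.0.2.1", 5004))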

TCP was hacked in the late 80s to provide overload protection, but that's the wrong place to put it, because it's easily defeated by running several TCP streams per endpoint. Who does that? Only HTTP and every other new protocol.

Thus End-to-End is fine for error recovery in file transfer programs, but not so fine for congestion control on the interior links of the internet. For the latter, we need QoS, MPLS, and address-based quotas.
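An address-based quota, conceptually, is just a token bucket kept per source address and enforced inside the network, where extra TCP streams can't dodge it. A minimal sketch, with illustrative names and parameters:

    import time
    from collections import defaultdict

    class AddressQuota:
        """Per-source-address token bucket: each address may burst up to
        `burst` bytes and refills at `rate` bytes per second."""
        def __init__(self, rate, burst):
            self.rate, self.burst = rate, burst
            self.tokens = defaultdict(lambda: burst)
            self.stamp = defaultdict(time.monotonic)

        def admit(self, src_addr, packet_len):
            now = time.monotonic()
            elapsed = now - self.stamp[src_addr]
            self.stamp[src_addr] = now
            self.tokens[src_addr] = min(
                self.burst, self.tokens[src_addr] + elapsed * self.rate)
            if self.tokens[src_addr] >= packet_len:
                self.tokens[src_addr] -= packet_len
                return True    # within quota: forward the packet
            return False       # over quota: drop or deprioritize

Because the quota keys on the address rather than the connection, opening forty streams instead of one buys the host nothing.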

In the name of opening up the Internet for big content, the 'Net Neutrality' bills criminalize good network management and business practices. Why can't we have more than one service class on the Internet? Even Berners-Lee says that's OK, but the bills he claims to support forbid it. Something's not right in this 'net neutrality' movement.

Q: So what fires the 'End to End' utopians?

A: When Ethernet was being designed, people felt a hub was a control point - so they came up with this decentralized, "democratic", peer-to-peer grassroots model.

It seems to be an aesthetic call. People make some connection between the structure of the network and the structure of decision-making in our political system. So when you have hubs and monitors and filters, they're authoritarian. These people are more concerned with the metaphorical value of network architecture than with its utility. At some level they've thoroughly convinced themselves that End-to-End really is the best thing from an engineering point of view. But they're not qualified to make that judgment.

The Stevens Bill has now become a consumers' bill of rights, enshrining the four freedoms that [former FCC chief] Michael Powell articulated.

The problem isn't just that packet networks aren't like the political system - they're not really like the switched telephone network either. A lot of 'Net Neutrality' thinking comes from traditional telco regulation - the same common carrier principles that were refined for the telegraph and the early Bell monopoly. But this needs to be rethought.

Certainly packet networks like the internet are becoming a more important part of our society - and they have unique properties compared to circuit-switched networks, just as oxcarts on dirt roads have unique properties and have to make routing decisions!

If we're honest, we don't know how to regulate the internet at a technical level. But we should stop pretending it's a telephone network, and look at how it actually handles packets. The 'net neutrality' lobby says all packets are equal - but that's unsound, and even inconsistent with common carrier law. There's nothing to stop a transport offering different service levels for different prices.
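Mechanically, multiple service levels are nothing more than multiple queues with a scheduling discipline between them. A strict-priority sketch, purely illustrative:

    from collections import deque

    class PriorityScheduler:
        """Two service classes on one link: the premium queue is always
        drained first, so latency-sensitive packets jump the bulk queue."""
        def __init__(self):
            self.queues = {"premium": deque(), "bulk": deque()}

        def enqueue(self, packet, service_class="bulk"):
            self.queues[service_class].append(packet)

        def dequeue(self):
            # Strict priority: bulk only moves when premium is empty.
            for cls in ("premium", "bulk"):
                if self.queues[cls]:
                    return self.queues[cls].popleft()
            return None

Production schedulers usually temper this with weighted fair queueing so the bulk class can't be starved outright, but the principle - different queues, different treatment - is the same.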

They all seem to be worried that ISPs have a secret plan to sell top rank - to pick one search engine that loads faster than anyone else's. But it's not clear that a) anyone has done that; b) it's technically achievable; c) it's necessarily abusive; or d) their customers would stand for it.
