Original URL: https://www.theregister.com/2008/02/28/bennett_fcc_neutrality_hearing/

Cool Rules for the FCC: In the Lion's Den

Bennett goes to Boston

By Richard Bennett

Posted in Networks, 28th February 2008 11:00 GMT

Opinion Testifying as an expert witness on bandwidth management at the FCC's field hearing in snowy Cambridge this Monday was a heady experience. The hearing took place in a cramped corner of the Harvard Law School, a building already decorated with pickets, banners, and reporters when I arrived. Gingerly stepping through the snow in my California sailing shoes, I avoided the protesters and found my way into the hallowed Ames courtroom. The room itself was full of buzz, and packed with a heavily Comcast-friendly crowd: the cable giant had gamed the first-come, first-seated rule by hiring place-holders to fill the seats.

The composition of the crowd didn't become apparent until Comcast VP David Cohen got an overly enthusiastic round of applause at the end of his prepared remarks. They didn't hiss and boo, at least - unlike the free-speech-loving neutralitarians who replaced them. I was invited to present at an afternoon session.

The first panel was composed of the usual suspects from the policy community - Tim Wu, Yochai Benkler, and the engaging Chris Yoo - as well as Free Press's earnest general counsel, senior execs from Comcast and Verizon, and a local legislator. To one who's been engaged in this debate for five years now, none of their remarks was new or different, except for Tim Wu's latest re-framing of the issue in terms of openness instead of neutrality. I can't say it's a huge improvement on the hand-waving vagueness front, but at least it's upbeat.

Comcast's David Cohen did a remarkable job of coherently summarizing his company's policies - a speech that might have saved the company a lot of grief had he delivered it about six months ago. Like Watergate, Comcast's cover-up was far worse than the crime, which in this case wasn't actually criminal. Yoo reminded folks that prioritization was a feature of NSFNet, and Verizon's Tom Tauke patiently explained the difference between short codes and text messages.

The second panel featured one of my technical heroes, MIT professor David Clark, along with Tim Berners-Lee's boss Danny Weitzner, Clark's old friend David Reed, BitTorrent's very presentable new CTO Eric Klinker, and the well-rehearsed Scott Smyers, a fellow from Sony I should have known from the home networking standards world. This panel placed ingenious pioneers on the same stage as defenders of tradition, with instructions to reach a consensus about how to circumscribe network management practices in the interest of progress. Ingenuity was presumed to reside in an "innovative-new-application" (the phrase was repeated so often it became a single word - "innovativenewapplication"), but in the end a portion was found in the management practices that squelch BitTorrent's excessive bandwidth demands.

If we stipulate, as most witnesses did, that peer-to-peer uses the Internet's classical mechanisms in a novel way, it's hard to sustain the argument that network operators must respond to the traffic streams it generates according to the dictates of official Internet standards. BitTorrent isn't an Internet standard and neither are the tools that manage it; they're gander and goose.

Hogging the neighborhood

BitTorrent certainly uses Internet Standard TCP as a delivery vehicle, but it does so in an unconventional way that essentially exploits a loophole to increase performance. At the end of the day, BitTorrent is just another file transfer program. It has thousands of predecessors, and they differ from each other in only three fundamental ways: scalability, resilience, and performance. BitTorrent gets its performance boost from its ability to tap a deeper pool of bandwidth than a centralized program can; there's no faster way to transfer a (compressed) file than to take more bandwidth.

BitTorrent does this as a direct consequence of its scalability, by running dozens (or even hundreds) of TCP streams concurrently. The proliferation of streams gives BitTorrent at least partial immunity from the Internet's packet-drop-triggered congestion management system.
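
For the flavour of that pattern, here's a minimal sketch of a client opening one TCP stream per peer and running them all concurrently - the peer addresses are hypothetical placeholders, and a real BitTorrent client layers its own wire protocol, choking, and piece selection on top of connections like these:

    # Minimal sketch: one TCP stream per peer, all running concurrently.
    # The peer list is hypothetical; a real client learns peers from a
    # tracker or the DHT and speaks the BitTorrent wire protocol to them.
    import asyncio

    PEERS = [("peer1.example.net", 6881), ("peer2.example.net", 6881)]

    async def fetch_from_peer(host: str, port: int) -> int:
        """Open one TCP stream to a peer and count the bytes it sends back."""
        reader, writer = await asyncio.wait_for(
            asyncio.open_connection(host, port), timeout=10)
        received = 0
        while chunk := await reader.read(16384):
            received += len(chunk)
        writer.close()
        await writer.wait_closed()
        return received

    async def main() -> None:
        # Dozens of streams can run side by side; each keeps its own TCP
        # congestion window, so a drop on one barely dents the aggregate.
        results = await asyncio.gather(
            *(fetch_from_peer(host, port) for host, port in PEERS),
            return_exceptions=True)
        print(results)

    asyncio.run(main())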

By contrast, most Internet traffic moving upstream on residential broadband networks comes from applications with no more than one stream active at a time. The loss of a single packet slows such an application down, and with it the entire PC that runs it. The loss of a single packet by an application with dozens of active connections hardly registers on the host PC's bandwidth consumption scale. That's the loophole in conventional bandwidth management, and it's why Comcast has been hauled before the Star Chamber: when congestion kicks in, the neighbors slow down before BitTorrent does.
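
To put rough numbers on that loophole, here's a back-of-the-envelope sketch using the textbook TCP reaction to a drop - the affected stream halves its rate - with equal per-stream rates assumed; the stream counts are illustrative, not measurements of any real client or cable plant:

    # How much of its aggregate rate an application keeps after one packet
    # drop, assuming the textbook TCP response (the affected stream halves
    # its rate) and equal per-stream rates. Illustrative numbers only.

    def fraction_kept_after_one_drop(num_streams: int) -> float:
        """Fraction of aggregate throughput left when one of num_streams
        equal-rate TCP streams halves its rate after a single drop."""
        before = float(num_streams)        # every stream at rate 1.0
        after = (num_streams - 1) + 0.5    # one stream cut to half rate
        return after / before

    for n in (1, 4, 30, 100):
        print(f"{n:>3} streams: one drop leaves "
              f"{fraction_kept_after_one_drop(n):.1%} of the aggregate rate")

A single-stream uploader gives back half its rate on one drop; a thirty-stream swarm gives back less than two per cent, which is why the neighbors feel congestion before BitTorrent does.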

So an innovative bandwidth allocator at the client demands an innovative bandwidth allocator on the network, and that's what Comcast's Sandvine system is. And logic suggests that if we appreciate a break from tradition on the application side, we at least have to accept one on the management side when it's deployed to benefit the public. No one has suggested that Comcast's management of BitTorrent caused any harm: as a Comcast subscriber and BitTorrent user, I found that the practice kept the application running well without degrading the rest of the neighborhood.

QED, innovation all around. But how do we define good practice?

This is just what the Commissioners are looking for, too: what several referred to as a "bright line rule" that would allow them to distinguish "good" management from "bad". It's going to take a long trek to the heart of the sacred wood to find one, however. My suggestion, which I developed during my session, is to apply a laundry list of principles:

  1. Does the practice support a rational goal, such as the fair distribution of bandwidth?
  2. Is it applied, adapted, or modified in response to network conditions?
  3. Does it conform to standard Internet practices, or to national or international standards, and if not, does it improve on them?
  4. Has it been communicated to customers?
  5. Has technical information that would allow for independent analysis been made available to the research community and the public at large?
  6. Does the practice interfere with customer control of traffic priorities or parameters consistent with terms of service?
  7. Is the practice efficient with respect to both the upstream and downstream data paths?
  8. Does the practice accomplish its purpose with minimal disruption to the network experience of customers as a whole?

The controversy won't come to an end until the Commission produces a statement of principles and disclosure requirements, so they need to get on with it. I don't want to trudge through the snow in these shabby boating shoes again, even to harangue our federal regulators.®

Richard Bennett is a network architect and occasional activist in Silicon Valley. He wrote the first standard for Ethernet over twisted-pair wiring and contributed to the standards for WiFi and Ultra-Wideband wireless networks. His eleven-year-old blog is at bennett.com.