Google hints at the End of Net Neutrality
This cache makes perfect sense
Updated Network Neutrality, the public policy unicorn that's been the rallying cry for so many on the American left for the last three years, took a body blow on Sunday with the Wall Street Journal's disclosure that the movement's sugar-daddy has been playing both sides of the fence.
The Journal reports that Google "has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content."
Google claims that it’s doing nothing wrong, and predictably accuses the Journal of writing a hyperbolic piece that has the facts all wrong. Google is essentially correct: it is doing nothing that Akamai doesn’t already do, and nothing that the ISPs and carriers don't plan to do themselves to reduce the load that P2P puts on their transit connections.
Caching data close to consumers is sound network engineering practice, beneficial to users and network operators alike because it increases network efficiency. More people are downloading HDTV files from Internet sources these days, and these transactions are highly repetitive. While broadcast TV can deliver a single copy of “Survivor” to millions of viewers at a time, Internet delivery requires millions of distinct file transfers across crowded pipes to accomplish the same end: this is the vaunted end-to-end principle at work.
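The economics of that repetition can be sketched in a few lines of code. The class below is a toy illustration of an edge cache, not Google's or any CDN's actual design: only the first request for a given file crosses the wide-area network, and every later viewer is served from the local copy.

```python
# Toy edge cache: illustrates why highly repetitive transfers (the
# same HDTV file requested by many viewers) benefit from caching
# near consumers. Names and behavior are illustrative assumptions.

class EdgeCache:
    def __init__(self):
        self.store = {}          # content_id -> cached copy
        self.origin_fetches = 0  # trips across the public Internet

    def fetch_from_origin(self, content_id):
        # Stand-in for an expensive transfer over crowded transit links.
        self.origin_fetches += 1
        return f"<video bytes for {content_id}>"

    def get(self, content_id):
        # Serve the local copy when possible; only the first request
        # per item has to traverse the wide-area network.
        if content_id not in self.store:
            self.store[content_id] = self.fetch_from_origin(content_id)
        return self.store[content_id]

cache = EdgeCache()
for _ in range(1_000_000):       # a million viewers of one show
    cache.get("survivor-s01e01")
print(cache.origin_fetches)      # one origin transfer, not a million
```

The broadcast-versus-unicast contrast in the paragraph above is exactly this: a million `get` calls, one wide-area transfer.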
There’s nothing wrong with Google's proposed arrangement, and quite a lot right with it. The main beneficiary is YouTube, which accounts for some 20 per cent of the Internet’s video traffic and was recently upgraded to a quasi-HD level of service. Taking YouTube off the public Internet and moving it directly to each ISP’s private network frees up bandwidth on the public Internet. Google’s not the only one doing this; in fact, so many companies are escaping the public Internet that researchers who measure Internet traffic at public peering points, such as Andrew Odlyzko, are scratching their heads in wonderment that the traffic they can measure increases at only 50 per cent a year. Researchers who study private network behavior see growth rates closer to 100 per cent per year, and caching systems like Google’s and Akamai’s make this kind of traffic distribution possible.
While there’s nothing to see here of a technical nature, the political impact of this revelation is a study in contrasts.
Cache from Chaos
Rick Whitt, Google's chief lobbyist and spin doctor, was pressed into service Sunday night to deflect the Journal’s claim that the search monopoly has abandoned its commitment to the Neutrality cause, which he did by issuing a rebuttal-by-blog:
"All of Google's colocation agreements with ISPs ... are non-exclusive. ... Also, none of them require (or encourage) that Google traffic be treated with higher priority than other traffic. In contrast, if broadband providers were to leverage their unilateral control over consumers' connections and offer colocation or caching services in an anti-competitive fashion, that would threaten the open Internet and the innovation it enables."
Whitt makes some great points, and as a bonus, some of them are even true. But he’s trying to change the subject. Google is making exactly the kind of deal with ISPs that it has consistently tried to ban in law and regulation. One of the blog posts that Whitt cites in defense of Google’s alleged consistency makes this very clear. The post, titled What Do We Mean By 'Net Neutrality'?, advocates a ban on the following ISP practices:
- Levying surcharges on content providers that are not their retail customers;
- Prioritizing data packet delivery based on the ownership or affiliation (the who) of the content, or the source or destination (the what) of the content; or
- Building a new "fast lane" online that consigns Internet content and applications to a relatively slow, bandwidth-starved portion of the broadband connection.
Google’s co-location agreement violates all three principles if any money changes hands - and the latter two in any circumstance. Placing content close to the consumer raises its delivery priority relative to content housed on the public Internet. This is the case simply because each hop that the content has to make from one router to the next is an opportunity for congestion and loss, the result of which is a slowdown in the rate at which TCP will transmit. So while the Google system reduces the load on the public Internet, it also pushes Google’s traffic to the head of the delivery queue over the last mile, as a consequence of its relative immunity to loss.
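The effect of hop count on TCP's transmit rate can be made concrete with the well-known Mathis approximation for steady-state TCP throughput: rate ≈ MSS / (RTT × √loss). The RTT and loss figures below are illustrative assumptions for a cached path versus a many-hop public-Internet path, not measurements of Google's or any ISP's network.

```python
# Back-of-the-envelope comparison using the Mathis approximation:
#   throughput ≈ MSS / (RTT * sqrt(loss_probability))
# All path parameters are assumed for illustration.

from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_seconds, loss_prob):
    """Mathis-model estimate of TCP throughput, in bits per second."""
    return (mss_bytes * 8) / (rtt_seconds * sqrt(loss_prob))

mss = 1460  # typical TCP segment size on an Ethernet path, bytes

# Content cached inside the ISP: a hop or two, little loss (assumed).
cached = tcp_throughput_bps(mss, rtt_seconds=0.005, loss_prob=0.0001)

# Content fetched across the public Internet: many hops, each one an
# added opportunity for queuing delay and packet loss (assumed).
public = tcp_throughput_bps(mss, rtt_seconds=0.080, loss_prob=0.01)

print(f"cached path: {cached / 1e6:.0f} Mbit/s")
print(f"public path: {public / 1e6:.2f} Mbit/s")
```

Under these assumed numbers the cached path sustains two orders of magnitude more throughput, which is the "head of the delivery queue" advantage in quantitative form.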
If the caching system didn’t have an advantage over public Internet delivery, there would be no reason to deploy it.
In e-mail, Whitt insists that caching agreements are permitted under even the most extreme versions of net neutrality: "In my view, access to central offices (or cable headends) is not an NN issue, but should be a Title II functionality (regardless of what the FCC now says) governed by traditional common carriage principles, as well as any TA of '96 CLEC-style requirements." This represents a change of heart on the fundamental issue in the NN debate: advocates of ISP regulation have demanded a wall of separation between infrastructure and content, lest ISPs leverage their monopoly position to push other sources of content aside.
Google has always blurred this line by serving up content from an enormously expensive infrastructure of its own. By stepping across the very line it has asked regulators to draw, while refusing to back down from its legislative demands, Google is now demanding "neutrality for thee, but not for me." Unlike Google, regulators are bound by a requirement to be consistent, so they should take a hard look at this arrangement. Bundling content with expedited delivery is a good thing for many web businesses, not just the biggest one. ®
Richard Bennett is a network inventor who helped design the modern, manageable local area network. He blogs at bennett.com.