Researchers seek Internet's choke points

DSL more resilient than cable when the chips are down

Cable Internet access really is faster than DSL – but paradoxically, cable users get less of the throughput they think they're paying for.

That's one of the conclusions of a study* from America's National Institute of Standards and Technology (NIST), which ran the slide-rule over datasets captured under the FCC's Measuring Broadband America project.

American DSL users get an average download speed of just 5.4 Mbps, compared to cable users' 13.5 Mbps – but the raw averages mislead, because the highest speed tiers sold on DSL are far below those available on cable.

The study, by Daniel Genin and Jolene Splett, also took into account what speed tier a customer is signed on for – as the authors state:

“DSL broadband provided connections on average delivering download speeds above 80% of the assigned speed tier more than 80% of the time. By contrast, a significant fraction of cable connections received less than 80% of their assigned speed tier more than 20% of the time. One must keep in mind that cable connections typically have higher download speed tiers than DSL connections.”
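To put that 80/80 yardstick in concrete terms, here is a minimal sketch in Python of how such a consistency figure could be computed from speed-test samples. The tier and the measurements below are invented for illustration; they are not drawn from the study.

    # Hypothetical example: what fraction of speed tests deliver at least
    # 80 per cent of the connection's assigned speed tier?
    speed_tier = 20.0  # advertised tier in Mbps (invented figure)
    samples = [17.1, 18.4, 12.9, 19.2, 16.5, 8.7, 18.8, 17.6]  # measured Mbps

    threshold = 0.8 * speed_tier  # 16 Mbps
    share = sum(1 for s in samples if s >= threshold) / len(samples)

    print(f"At or above 80% of tier in {share:.0%} of tests")
    # By the report's yardstick the connection is consistent only if that
    # share is itself at least 80 per cent.
    print("consistent" if share >= 0.8 else "falls short of tier too often")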

To explain that disparity, the authors mined the Measuring Broadband America data for choke-points in the different networks. The data is collected by SamKnows, which has test units installed in more than 10,000 customer premises across 16 ISPs.

Their conclusion is that something about ISPs' network architecture makes cable networks more susceptible to recurrent congestion than DSL networks: “The difference in consistency of service is reflected in the number of connections with recurrent congestion, a relatively low 9–12% for DSL in comparison to 27–32% for cable connections”, the report states, adding that several cable providers in the test “have disproportionately high concentrations of recurrently congested connections”.

A definition from the report is needed here. To identify congestion, Genin and Splett work with a concept they describe as a “tight initial segment”: “all network devices between consecutive IP router interfaces, i.e. all network devices and links between the users side of the connection and the terminal node(s) of the initial segment.”

Interestingly, they also found that DSL appears more resilient in the presence of a “tight initial segment”.

“In the case of DSL 37–50% of the connections identified as having a tight initial segment also experienced recurrent congestion, whereas for cable connections the same number was 91–100%,” the study says, going on to explain: “That is, a tight initial segment virtually always coincides with recurrent congestion for cable connections but more than half of DSL connections manage to deliver performance close to speed tier in spite of a tight initial segment.”
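As a rough illustration only (the paper's actual detection method is statistical and considerably more involved), a segment can be thought of as “tight” when its measured capacity leaves no headroom above the connection's assigned tier. The figures in this sketch are invented:

    # Rough illustration (not the study's method): flag a "tight" initial
    # segment when its measured capacity cannot comfortably carry the
    # connection's assigned speed tier.

    def initial_segment_is_tight(segment_capacity_mbps, speed_tier_mbps,
                                 headroom=1.0):
        """True when the initial segment's capacity is below the tier
        (scaled by a headroom factor)."""
        return segment_capacity_mbps < headroom * speed_tier_mbps

    # Invented figures: a 13.5 Mbps cable tier behind a 12 Mbps segment
    # is tight; a 5.4 Mbps DSL tier behind an 8 Mbps segment is not.
    print(initial_segment_is_tight(12.0, 13.5))  # True
    print(initial_segment_is_tight(8.0, 5.4))    # False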

It's a pity that Genin wasn't able to uncover enough fibre connections on the Measuring Broadband America project to add fibre into the mix. However, The Register notes that Verizon's fibre product was measured in February to be delivering 120 percent of advertised speed, averaged across all speed tiers.

The study is available on arXiv. ®

Bootnote: Since this story was published, the lead author of the study, Daniel Genin, has asked The Register to make it clear that the study was published in a personal capacity only, and not as part of any NIST project.

"The paper was authorized for a submission to the Infocom 2012 but since it was rejected from the conference it was never published. I decided to submit the paper to arXiv to receive additional feedback from the research community and never expected that it would be figuring so prominently", Genin told The Register in an e-mail. ®
