Researchers seek Internet's choke points

DSL more resilient than cable when the chips are down

Cable Internet access really is faster than DSL – but paradoxically, cable users get less of the throughput they think they're paying for.

That's one of the conclusions of a study* from America's National Institute of Standards and Technology (NIST) which ran the slide-rule over datasets captured under the FCC's Measuring Broadband America project.

American DSL users get an average download speed of just 5.4 Mbps, compared to cable users' average of 13.5 Mbps – but the raw averages are misleading, because the highest speed tiers available on DSL are far below those available on cable.

The study, by Daniel Genin and Jolene Splett, also took into account what speed tier a customer is signed on for – as the authors state:

“DSL broadband provided connections on average delivering download speeds above 80% of the assigned speed tier more than 80% of the time. By contrast, a significant fraction of cable connections received less than 80% of their assigned speed tier more than 20% of the time. One must keep in mind that cable connections typically have higher download speed tiers than DSL connections.”
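The "80 per cent of the assigned tier, 80 per cent of the time" yardstick quoted above can be sketched in a few lines of Python. This is only an illustration of the arithmetic – the function name and the sample measurements are hypothetical, not taken from the study:

```python
# Toy illustration of the "80% of speed tier, 80% of the time" metric.
# Sample speeds (Mbps) and the tier are hypothetical, not study data.

def meets_80_80(samples_mbps, tier_mbps):
    """True if at least 80% of measurements reach 80% of the tier speed."""
    threshold = 0.8 * tier_mbps
    hits = sum(1 for s in samples_mbps if s >= threshold)
    return hits / len(samples_mbps) >= 0.8

# A hypothetical 15 Mbps cable tier, measured ten times:
samples = [13.1, 12.5, 14.0, 9.8, 13.6, 12.2, 11.0, 13.9, 12.7, 10.5]
print(meets_80_80(samples, 15))  # → False: only 7 of 10 samples hit 12 Mbps
```

A connection like the one above would fall into the study's underperforming bucket: it delivers 80 per cent of its tier only 70 per cent of the time.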

To explain that disparity, the authors used the Measuring Broadband America data to identify choke points in the different networks. The data is collected by SamKnows, which uses test units installed in more than 10,000 customer premises across 16 ISPs.

Their conclusion is that something about ISPs' network architecture makes cable networks more susceptible to recurrent congestion than DSL networks: “The difference in consistency of service is reflected in the number of connections with recurrent congestion, a relatively low 9–12% for DSL in comparison to 27–32% for cable connections”, the report states, adding that several cable providers in the test “have disproportionately high concentrations of recurrently congested connections”.

Here a definition from the report is needed. To identify congestion, Genin and Splett work with a concept they describe as a “tight initial segment”: “all network devices between consecutive IP router interfaces, i.e. all network devices and links between the users side of the connection and the terminal node(s) of the initial segment.”

Interestingly, they also found that DSL appears more resilient in the presence of a “tight initial segment”.

“In the case of DSL 37–50% of the connections identified as having a tight initial segment also experienced recurrent congestion, whereas for cable connections the same number was 91–100%,” the study says, going on to explain: “That is, a tight initial segment virtually always coincides with recurrent congestion for cable connections but more than half of DSL connections manage to deliver performance close to speed tier in spite of a tight initial segment.”

It's a pity that Genin wasn't able to uncover enough fibre connections on the Measuring Broadband America project to add fibre into the mix. However, The Register notes that Verizon's fibre product was measured in February to be delivering 120 percent of advertised speed, averaged across all speed tiers.

The study is available on arXiv, here. ®

Bootnote: Since this story was published, the lead author of the study, Daniel Genin, has asked The Register to make it clear that the study was published in a personal capacity only, and not as part of any NIST project.

"The paper was authorized for a submission to the Infocom 2012 but since it was rejected from the conference it was never published. I decided to submit the paper to arXiv to receive additional feedback from the research community and never expected that it would be figuring so prominently", Genin told The Register in an e-mail. ®
