Boffins strap turbocharger to BitTorrent

P2P that goes FASTER as load increases

Cue a new round of fast-network scare-mongering from the world's content owners: a group of information theorists from the US, France and Finland believe that with a bit of tweaking, P2P networks can become even more efficient.

In fact, if their maths is correct – and their ideas could be deployed on a large scale – the counter-intuitive conclusion is that P2P networks would perform better as the load on the network grows.

One of the key assumptions underlying P2P networks like BitTorrent is that users have relatively limited upload capacity available to them. That assumption is likely to become obsolete, the researchers argue in Can P2P networks be super-scalable?, available on arXiv.

As a result, they believe, it's time to consider new P2P models that drop upstream capacity as a constraint and ask what else might hamper P2P overlays in the high-speed broadband world.

Their argument is that P2P protocols like BitTorrent assume (quite reasonably) that the key performance bottleneck is the nodes themselves – how quickly their processing power and upstream links allow them to serve chunks to their peers.

What the paper suggests is that if upload capacity is no longer a constraint, the next bottleneck to emerge is topological – the logical distance between peers. However, that also becomes a strength, the paper argues, since if it's built into the operation of the P2P protocol, the P2P overlay becomes “super-scalable”: performance gets better as load increases (up to the underlying network capacity).

“There are some earlier papers considering P2P systems in a spatial framework … but they do not assume that distance has some effect on transfer speed. Our paper seems to be the first where a peer's downloading rate is a function of its distances to other peers,” the paper states.

If the main resource bottleneck is taken to be the logical links between nodes rather than the nodes themselves, and if all peers are visible to each other (the mesh is complete, or in the authors' terminology, “the interaction graph is complete at any time”), then “the service time is inversely proportional to the square root of the arrival intensity: this is super-scalability”.

“The central reason for super-scalability is rather obvious: the number of edges in a complete graph is of the order of the square of the number of nodes, and so is the overall service capacity”, they write.
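
To see where the square root comes from, a back-of-envelope reading runs like this: with roughly N peers in the swarm and every pair exchanging data, each peer downloads at a rate proportional to N and so finishes in time proportional to 1/N; Little's law (peers present = arrival rate × time in system) then gives N ≈ λ/N, so N grows like √λ and download times shrink like 1/√λ. The toy simulation below is our own sketch of that intuition, not the authors' model: it assumes Poisson arrivals at intensity lam, a unit-size file, a fixed rate on every peer-to-peer link and unlimited node upload capacity.

import random

def simulate(lam, file_size=1.0, edge_rate=1.0, horizon=200.0, seed=1):
    # Toy event-driven simulation of a complete-graph swarm (our sketch,
    # not the paper's exact model). Peers arrive as a Poisson process of
    # intensity `lam`; each needs `file_size` units of data; every pair of
    # peers present exchanges data at `edge_rate`, so a peer downloads at
    # edge_rate * (N - 1) when N peers are in the swarm. Node upload
    # capacity is assumed unlimited, and a lone peer simply waits.
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(lam)
    remaining = []   # data still needed by each active peer
    arrived_at = []  # arrival time of each active peer
    sojourns = []    # time in system for peers that finished
    while t < horizon:
        n = len(remaining)
        rate = edge_rate * (n - 1) if n > 1 else 0.0
        finish_dt = min(remaining) / rate if rate > 0 else float("inf")
        arrival_dt = next_arrival - t
        dt = min(finish_dt, arrival_dt)
        t += dt
        if rate > 0:
            remaining = [r - rate * dt for r in remaining]
        if finish_dt <= arrival_dt:
            i = min(range(n), key=remaining.__getitem__)  # peer that finished
            sojourns.append(t - arrived_at.pop(i))
            remaining.pop(i)
        else:
            remaining.append(file_size)
            arrived_at.append(t)
            next_arrival = t + rng.expovariate(lam)
    return sum(sojourns) / len(sojourns)

for lam in (10, 40, 160, 640):
    T = simulate(lam)
    print(f"lambda={lam:4d}  mean download time={T:.4f}  T*sqrt(lambda)={T * lam ** 0.5:.3f}")

Run as-is, the product of the measured download time and √λ stays roughly flat as the arrival rate is pushed up – the super-scalable behaviour the paper describes, at least until the underlying network runs out of capacity.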

The paper was authored by François Baccelli (UT Austin), Fabien Mathieu and Rémi Varloot of the University of Paris, and Ilkka Norros of the VTT Technical Research Centre in Finland. ®
