Verizon makes nice with P2P

We can help ISPs turn internet into big TV set

From an ISP's point of view, P2P traffic can appear exceptionally daunting. If ISPs choose to block it, as almost all of the major US ISPs have been accused of doing, their networks risk becoming ghost networks, with virtually no traffic in sight. But if they embrace it, their networks become fast-moving, chaotic places where operators have to sprint just to keep them alive.

So what's it to be? Verizon, at least, appears to be considering a middle road: instead of working against P2P, or simply putting up with its traffic costs, it will offer protocols that let it co-operate with P2P networks to deliver entertainment, by giving them a better understanding of the conditions of the network their traffic is travelling over. That really IS open.

The initiative began last July under the auspices of a Distributed Computing Industry Association (DCIA) working group called P4P, which stands for Proactive network Provider Participation for P2P. Its two founding members and chairs come from Pando Networks and Verizon Communications. Pando is one of the new breed of P2P companies trying to eke out a living in legal P2P file delivery.

This is really a club in which ISPs and P2P suppliers can work out their differences, and it is a far more positive approach than whining about network traffic and investing purely in "traffic shaping".

Statements from the working group claim that software already in testing can improve download speeds by between 200 and 600 per cent, purely by offering up a set of network APIs that let a P2P application know which parts of a network are busy, and using that information to decide intelligently which P2P nodes should be uploading in support of a file or stream delivery. It's not rocket science, and a CompSci grad student given the problem could have come up with the same answer, but it is how the question is phrased that is interesting.
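
As a rough illustration of the idea, the sketch below shows how a P2P client might fold an ISP-supplied congestion signal into peer selection. It is a minimal sketch in Python: the get_link_congestion call, the region labels and the data shapes are all assumptions invented for this example, not the actual P4P interfaces.

# Hedged sketch: a P2P client preferring peers that sit behind lightly loaded
# links, using a stand-in for an ISP-provided network-condition API.
from dataclasses import dataclass

@dataclass
class Peer:
    peer_id: str
    isp_region: str   # network partition the ISP reports this peer as sitting in
    has_piece: bool   # whether the peer can serve the piece we want

def get_link_congestion(region: str) -> float:
    """Stand-in for the ISP's API: 0.0 means idle, 1.0 means saturated."""
    sample = {"metro-east": 0.2, "metro-west": 0.9, "backbone": 0.5}  # assumed values
    return sample.get(region, 0.5)

def rank_peers(peers: list[Peer]) -> list[Peer]:
    """Prefer peers that hold the piece and sit on the least congested links."""
    candidates = [p for p in peers if p.has_piece]
    return sorted(candidates, key=lambda p: get_link_congestion(p.isp_region))

if __name__ == "__main__":
    swarm = [Peer("a", "metro-west", True),
             Peer("b", "metro-east", True),
             Peer("c", "backbone", True)]
    for peer in rank_peers(swarm):
        print(peer.peer_id, get_link_congestion(peer.isp_region))

The point is simply that once the ISP exposes even a coarse view of link load, smarter peer ranking becomes a one-line sort rather than guesswork.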

If the question was "How do we get traffic zinging around the internet, for nothing, without the help of the ISP and despite its best efforts to stop us?", then that is definitely the wrong question. If it were simply put as "you have a network and multiple copies of large files distributed around that network; how do you build a rapid file delivery mechanism?", then naturally you reach the DCIA answer.

It is the history of ISPs and P2P suppliers being at each other's throats for so long that makes it hard to see how this might ever have come about.

In fact, what needed to happen was for the livelihood of ISPs to be threatened: the average customer expecting more and more from the ISP, while the average monthly price for ISP service went down and down, and traffic on their networks went up and up, forcing more and more investment. At that point P2P traffic is taken as a fact of life, not something the ISP looks to the US Supreme Court to make illegal.

ISPs cannot block all P2P activity because VeriSign's Kontiki P2P client, now used to deliver millions of hours of TV services around the world from respectable broadcasters, is not breaking any laws; nor are Skype, Joost and Babelgum. Even Kazaa and BitTorrent may now be carrying more legal than illegal traffic, or if not yet, they should lean that way over time.

If we look beyond this simple set of proposals, there is more and more that might be done. By bringing ISPs and P2P suppliers closer, the handshakes for this type of co-operative routing might also include some form of legitimate traffic audit. We could then reach a point where, if P2P traffic from your software passes some kind of "threshold" test of mostly sending legitimate files (something deep packet inspection might still be needed for), the APIs for sensing the condition of the network are opened to your client software, and it is pushed higher up the food chain in terms of the priority attached to its traffic.

If mostly copyrighted material appears to be travelling across the network, then perhaps that API co-operation is refused by the network nodes and the resulting traffic packets are treated as low priority. That would create an underclass and an upper class of P2P clients, each with a signature that triggers different treatment by ISPs.
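
Purely as a sketch of that speculative arrangement, here is what an ISP-side policy could look like, again in Python: a client whose audited share of legitimate traffic clears some threshold gets the network APIs and high-priority treatment, and anything else is demoted. The threshold value, field names and the notion of a "legit ratio" are all invented for illustration; nothing like this has been specified by the working group.

# Hedged sketch of the two-class treatment described above. The 0.8 threshold
# and the audited "legitimate traffic ratio" are assumptions, not a real policy.
from dataclasses import dataclass

LEGIT_THRESHOLD = 0.8  # assumed cut-off for the "mostly legitimate" test

@dataclass
class ClientSignature:
    client_id: str
    legit_ratio: float  # fraction of sampled traffic judged legitimate (e.g. via DPI)

@dataclass
class Treatment:
    api_access: bool    # may the client query the network-condition APIs?
    priority: str       # traffic class the ISP applies to the client's packets

def classify(sig: ClientSignature) -> Treatment:
    """Sort a client into the upper or lower class based on its audit result."""
    if sig.legit_ratio >= LEGIT_THRESHOLD:
        return Treatment(api_access=True, priority="high")
    return Treatment(api_access=False, priority="low")

if __name__ == "__main__":
    for sig in [ClientSignature("broadcaster-client", 0.95),
                ClientSignature("mystery-swarm", 0.30)]:
        print(sig.client_id, classify(sig))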
