Enter Avalanche: P2P filesharing from Microsoft

Doo-da-da-da...doo-da-da-da

Researchers at Microsoft's computer science lab in Cambridge have developed a peer-to-peer filesharing system that they say overcomes the scheduling problems associated with existing distribution protocols such as BitTorrent.

The researchers claim that, with their network coding approach, download times are 20 to 30 per cent faster than on systems that code only at the server, and 200 to 300 per cent faster than when distributing unencoded information.

Naturally, Microsoft is very keen to stress that this technology should be used for distributing legitimate content. It even put that in italics in the press material.

The basic principle of the system, dubbed Avalanche, is pretty much the same as BitTorrent's. Certainly the problem it solves is the same: a large file needs to be distributed to many people. One server does not have the bandwidth to deal with all that traffic, so you need to find another way of getting the file to everyone who needs it.

If the file is broken up into smaller pieces, these can be distributed among a number of people, who can then share the pieces between themselves to make sure they all eventually have the complete file.

The problem with this approach, as anyone who has ever tried to download content this way - legitimate or otherwise - knows, is that towards the end of a download, a downloader can be left waiting a long time for the particular pieces it still needs. As the number of receivers increases, scheduling traffic also becomes more complex, and the whole process slows down.

Microsoft Research's approach gets around this by re-encoding all the pieces, so that each block that is shared is actually a linear combination of all the pieces, fed into a particular function. Each block is then distributed with a tag describing the coefficients used in its combination.
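To make the idea concrete, here is a minimal sketch of that encoding step, using arithmetic over GF(2) - plain XOR - for simplicity. This is a simplification: Avalanche itself draws coefficients from a larger finite field, and the names here (`encode`, `BLOCK_SIZE`) are illustrative, not taken from the paper.

```python
import random

BLOCK_SIZE = 4  # bytes per block; a real system would use far larger blocks


def xor_blocks(a, b):
    """XOR two equal-length byte blocks (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))


def encode(blocks):
    """Produce one coded block: a random GF(2) linear combination of the
    original blocks, tagged with the coefficient vector used to build it."""
    coeffs = [random.randint(0, 1) for _ in blocks]
    if not any(coeffs):
        # an all-zero combination carries no information; force one term in
        coeffs[random.randrange(len(blocks))] = 1
    combo = bytes(BLOCK_SIZE)
    for c, block in zip(coeffs, blocks):
        if c:
            combo = xor_blocks(combo, block)
    return coeffs, combo
```

The coefficient tag travels with the payload, so any receiver can tell exactly which combination of original pieces a block represents.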

Once you have downloaded a few of these, you can generate new combinations from the ones you have, and send those out to your peers. Collect enough of these pieces and you will have enough information to reconstruct the whole file, even if you never receive all of the original pieces distributed by the person who held the original version of the file.
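As an illustrative sketch - again over GF(2), i.e. plain XOR, rather than the larger field Avalanche actually uses - a hypothetical `recode` shows how a peer can mint fresh combinations from the coded blocks it already holds without ever decoding, and `decode` shows reconstruction by Gaussian elimination once enough linearly independent combinations have arrived. Each coded block is represented as a `(coefficients, payload)` pair; the function names are illustrative, not from the paper.

```python
import random


def recode(coded):
    """Mix already-held coded blocks into a fresh one. The new block's tag
    is the XOR of the tags of the blocks that went into the mix."""
    mix = [random.randint(0, 1) for _ in coded]
    if not any(mix):
        mix[0] = 1  # avoid the useless all-zero mix
    coeffs = [0] * len(coded[0][0])
    combo = bytes(len(coded[0][1]))
    for m, (c, payload) in zip(mix, coded):
        if m:
            coeffs = [a ^ b for a, b in zip(coeffs, c)]
            combo = bytes(x ^ y for x, y in zip(combo, payload))
    return coeffs, combo


def decode(coded, n):
    """Recover the n original blocks by Gauss-Jordan elimination over GF(2).
    Returns None while the combinations collected are still rank-deficient."""
    rows = [(list(c), bytes(p)) for c, p in coded]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # not enough independent combinations yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytes(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
    return [rows[i][1] for i in range(n)]
```

The key property is that any n linearly independent combinations suffice - it does not matter which peers they came from, or whether any of them was an original, unmixed piece.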

Peers can make use of any new piece, instead of having to wait for specific chunks that are missing. This means no one peer can become a bottleneck, since no piece is more important than any other. It also means overall network traffic is lower, since the same information doesn't have to travel back and forth multiple times.

Nifty, no?

Have a read of the research paper here (pdf), if this is your kind of thing. ®

Related stories

Spaniards stick sword in P2P website
High Court orders ISPs to name file-sharers
German court protects P2P ne'er-do-well
