
TCP/IP headers leak info about what you're watching on Netflix

Not even HTTPS can hide your secret Gilmore Girls fetish

An infosec educator from the United States Military Academy at West Point has taken a look at Netflix's HTTPS implementation, and reckons a bit of passive traffic capture is all he needs to work out which programs you like.

The problem, writes Michael Kranch (with collaborator Andrew Reed), is that the information in TCP/IP headers is enough to leak what content is being streamed.

In this paper (PDF), delivered at the CODASPY’17 conference in March, Kranch explains that the TCP/IP headers of a Netflix HTTPS stream provide a content fingerprint that is 99.5 per cent accurate.

Yes, HTTPS is meant to provide privacy, and no, Netflix isn't doing anything dumb like putting movie titles in headers: the variable bitrate (VBR) encoding happens to yield predictable behaviour, particularly in how the byte-range portion of HTTP GET requests aligns perfectly with individual video segment boundaries.
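Because each segment's size depends on the complexity of the scenes it encodes, the ordered list of segment sizes visible on the wire is effectively a signature for the title. Here's a minimal sketch of that idea, assuming the per-request byte counts have already been reconstructed from the captured TCP stream; the function names and the size threshold below are illustrative, not taken from the authors' code:

# Illustrative sketch: turn observed application-data byte counts into a
# segment-size fingerprint. Assumes each HTTP GET response size has already
# been recovered from the TCP sequence numbers in the capture; the 50 KB
# threshold for "this is a video segment" is an assumption, not Netflix's.

from typing import List, Tuple

def segment_fingerprint(response_sizes: List[int],
                        min_segment_bytes: int = 50_000) -> Tuple[int, ...]:
    # Drop small responses (audio, control traffic) and keep the ordered
    # sequence of video segment sizes -- the fingerprint itself.
    return tuple(size for size in response_sizes if size >= min_segment_bytes)

# Example: byte counts pulled from a capture of one stream.
observed = [412_331, 398_774, 1_024_991, 12_003, 877_120]
print(segment_fingerprint(observed))  # (412331, 398774, 1024991, 877120)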

For the paper, Kranch wrote a crawler to create the fingerprints and ended up with a database of more than 42,000 Netflix videos, each represented by seven or more fingerprints.

With a database indexing the content metadata (harvested by setting up a server to automatically “watch” videos) against the fingerprints, it's pretty straightforward to capture the fingerprint on someone else's connection and use it to look up the video.
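A hedged sketch of how that lookup could work, indexing every run of 30 consecutive segment sizes (the window length mentioned further down) as a key; the structure and names below are illustrative, not a description of the authors' actual implementation:

# Illustrative sketch of window-based matching: index every run of 30
# consecutive segment sizes per title, then look up windows captured from
# someone else's connection. The 30-segment window follows the paper;
# everything else here is assumed for the example.

from typing import Dict, Iterator, List, Optional, Tuple

WINDOW = 30

def windows(sizes: List[int]) -> Iterator[Tuple[int, ...]]:
    for i in range(len(sizes) - WINDOW + 1):
        yield tuple(sizes[i:i + WINDOW])

def build_index(library: Dict[str, List[int]]) -> Dict[Tuple[int, ...], str]:
    # Map each 30-segment window to the title it was crawled from.
    index: Dict[Tuple[int, ...], str] = {}
    for title, sizes in library.items():
        for w in windows(sizes):
            index[w] = title
    return index

def identify(captured: List[int], index: Dict[Tuple[int, ...], str]) -> Optional[str]:
    # Return the first title whose window matches the captured traffic.
    for w in windows(captured):
        if w in index:
            return index[w]
    return None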

The server Kranch used in his work was hardly a monster: a decade-old box with two quad-core Xeon processors running at 2 GHz, with Linux Mint 17.3 MATE as the operating system.

Even that kit loaded the 184 million fingerprints in 15 minutes, and the pair's assessment found that 99.9989 per cent of the “windows” (runs of 30 consecutive segment sizes used for matching) were unique.

Two movies out of the whole database had oddities that made them hard to fingerprint: 2001: A Space Odyssey and The Gospel Road: A Story of Jesus, “both of which have lengthy periods where the screen is completely dark, thereby resulting in 'flat' windows that consist of 30 identically-sized segments.”

To test their work, Kranch and Reed then set up a couple of MacBook Pros to stream 20 minutes of video from a random list of 100 flicks, and captured the Internet traffic from each.

On average, they write, the algorithm identified the videos within three minutes and 55 seconds, with more than half of the videos identified within two minutes and 30 seconds.

Kranch offers a couple of ideas to fix the issue. For example, he says, “the browser could average the size of several consecutive segments and send HTTP GETs for this average size. As an alternative approach, the browser could randomly combine consecutive segments and send HTTP GETs for the combined video data.”
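A rough sketch of what those two client-side tweaks might look like, applied to a list of upcoming segment sizes before the byte-range requests go out; the grouping factors and names here are assumptions for illustration, not the paper's specification:

# Illustrative sketch of the two suggested mitigations: request consecutive
# segments at an averaged size, or randomly merge consecutive segments into
# single larger requests, so on-the-wire sizes stop mirroring the VBR pattern.

import random
from typing import List

def average_requests(segment_sizes: List[int], group: int = 4) -> List[int]:
    # Replace each group of consecutive segments with equal-sized requests.
    # (A real client would add the rounding remainder to the last request.)
    out: List[int] = []
    for i in range(0, len(segment_sizes), group):
        chunk = segment_sizes[i:i + group]
        out.extend([sum(chunk) // len(chunk)] * len(chunk))
    return out

def combine_requests(segment_sizes: List[int]) -> List[int]:
    # Randomly merge one to three consecutive segments into a single request.
    out: List[int] = []
    i = 0
    while i < len(segment_sizes):
        n = random.randint(1, 3)
        out.append(sum(segment_sizes[i:i + n]))
        i += n
    return out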

Code for the project is at GitHub. ®
