35m Google Profiles dumped into private database

Easy as pie

Proving that information posted online is indelible and trivial to mine, an academic researcher has dumped names, email addresses and biographical information made available in 35 million Google Profiles into a massive database that took just one month to assemble.

University of Amsterdam Ph.D. student Matthijs R. Koot said he compiled the database as an experiment to see how easy it would be for private detectives, spear phishers and others to mine the vast amount of personal information stored in Google Profiles. The verdict: it wasn't hard at all. Unlike Facebook, whose policies strictly forbid the practice, the permissions file for the Google Profiles URL places no prohibition on indexing the list.

What's more, Google engineers didn't impose any technical limitations on accessing the data, which is made available in an Extensible Markup Language (XML) file called profiles-sitemap.xml. The code he used for the data-mining proof of concept is available here.

“I wrote a small bash script to download all the sitemap-NNN(N).txt files mentioned in that file and attempted to download 10k, then 100k, then 1M and then, utterly surprised that my connection wasn't blocked or throttled or CAPTCHA'd, the rest of them,” Koot wrote in an email to The Register.
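The approach Koot describes follows the standard sitemaps.org protocol: fetch the master index file, pull out every `<loc>` entry, then download each numbered chunk it points to. The sketch below is a hypothetical reconstruction of that first parsing step, not Koot's actual script; the sample index and its file names are illustrative stand-ins for what profiles-sitemap.xml contained.

```python
import xml.etree.ElementTree as ET

# Sitemap index files use the standard sitemaps.org namespace.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def extract_locs(sitemap_xml: str) -> list[str]:
    """Return every <loc> URL listed in a sitemap index document."""
    root = ET.fromstring(sitemap_xml)
    return [loc.text for loc in root.iter(SITEMAP_NS + "loc")]

# A minimal stand-in for the profiles sitemap index: it points at
# numbered per-chunk files, each a plain-text list of profile URLs.
sample = """<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap><loc>https://www.google.com/s2/sitemap-000.txt</loc></sitemap>
  <sitemap><loc>https://www.google.com/s2/sitemap-001.txt</loc></sitemap>
</sitemapindex>"""

chunk_urls = extract_locs(sample)
# Downloading every URL in chunk_urls (e.g. with urllib or curl in a
# loop) is all that stands between an attacker and the full profile list.
print(chunk_urls)
```

Because the chunks are flat text files of URLs, no authentication, scripting of a web UI, or CAPTCHA-solving was needed; a loop over `chunk_urls` suffices, which is the point Koot was making.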

In an accompanying blog post – which happens to be hosted on Google's Blogger service – he said the exercise was part of a research project he's doing on online privacy.

“I'm curious about whether there are any implications to the fact that it is completely trivial for a single individual to do this – possibly there aren't,” he wrote. “That's something worth knowing too. I'm curious whether Google will apply some measures to protect against mass downloading of profile data, or that this is a non-issue for them too.”

A Google spokesman said he was exploring whether the scraping violates the company's terms of service. He issued the following statement:

“Public profiles are usually discovered when people use search engines, and sitemap information makes it possible for search engines to index these public profiles so that people can find them. The sitemap does not reveal any information that is not already designated to be public.”

He said users can choose to make Gmail addresses, and certain other pieces of information, public or private. Users can also select an option in their profile settings that prevents search engines from indexing their profiles.

Google isn't the only hoarder of personal information that has been scraped. In July, an independent researcher compiled the names and unique URLs of 100 million Facebook users and made them available for public download. The release meant the profile pages remained accessible even if users later configured their accounts to be private.

Like Google, Facebook made it possible for users to keep their pages from being indexed, but that did little for those whose data was already published.

The database compiled by Koot contains names, educational backgrounds, work histories, Twitter conversations, links to Picasa photo albums, and other details made available in 35 million Google Profiles. It includes the usernames of 11 million of the profile holders, making their Gmail addresses easy to deduce. The 35 GB of data excludes the full-text indexes and profile photos of the users.

Koot said he downloaded the information using a single IP connection over a single month. Users may want to remember the ease of constructing such permanent records the next time they're deciding whether to post something to Google, Twitter, Facebook or some other Web 2.0 service. ®
