
Facebook decides fake news isn't crazy after all. It's now a real problem

Once dismissed by Zuck, misinformation now merits revised security strategy

Analysis Last November at the Techonomy Conference in Half Moon Bay, California, Facebook CEO Mark Zuckerberg dismissed as lunacy the notion that disinformation had affected the US presidential election.

"The idea that fake news on Facebook, which is a very small amount of the content, influenced the election in any way, I think, is a pretty crazy idea," said Zuckerberg.

Five months later, after a report [PDF] from the Office of the US Director of National Intelligence provided an overview of Russia's campaign to influence the election – via social media among other means – the social media giant has published a plan for "making Facebook safe for authentic information."

Penned by Facebook chief security officer Alex Stamos and security colleagues Jen Weedon and William Nuland, "Information Operations and Facebook" [PDF] describes an expansion of the company's security focus from "traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people."

This despite Zuckerberg's insistence that "of all the content on Facebook, more than 99 per cent of what people see is authentic."

Facebook's paper says information operations that exploit the social network's goldmine of personal data revolve around three activities: targeted data collection from account holders, content creation to seed stories to the press, and false amplification to spread misinformation. The report concentrates on defenses against the first and last of these, data collection and the distribution of misleading content.

To combat targeted data collection, Facebook says it is:

  • Promoting and providing support for security and privacy features, such as two-factor authentication.
  • Presenting notifications to specific people targeted by sophisticated attackers, with security recommendations tailored to the threat model.
  • Sending notifications to people not yet targeted but likely to be at risk based on the behavior of known threats.
  • Working with government bodies overseeing election integrity to notify and educate those at risk.

False amplification – efforts to spread misinformation to hurt a cause, sow mistrust in political institutions, or foment civil strife – is recognized in the report as a possible threat to Facebook's continuing vitality.

"The inauthentic nature of these social interactions obscures and impairs the space Facebook and other platforms aim to create for people to connect and communicate with one another," the report says. "In the long term, these inauthentic networks and accounts may drown out valid stories and even deter some people from engaging at all."

As can be seen from Twitter's half-hearted efforts to subdue trolls, sock puppets, and the like, such interaction can be toxic to social networks.

Stamos, Weedon and Nuland note that Facebook is building on its investment in fake account detection with more protections against manually created fake accounts and with additional analytic techniques involving machine learning.

Facebook's security team might want to have a word with the computer scientists from the University of California, Santa Cruz, the Catholic University of the Sacred Heart in Italy, the Swiss Federal Institute of Technology in Lausanne, and elsewhere who have made some progress in spotting disinformation.

'Some like it hoax'

In a paper published earlier this week, "Some Like it Hoax: Automated Fake News Detection in Social Networks" [PDF], assorted code boffins report that they can identify hoaxes more than 99 per cent of the time, based on an analysis of the individuals who respond to such posts.

"Hoaxes can be identified with great accuracy on the basis of the users that interact with them," the research paper claims.

Asked about Zuckerberg's claim that only about 1 per cent of Facebook content is inauthentic, Luca de Alfaro, a computer science professor at UC Santa Cruz and one of the hoax paper's co-authors, said he had no information on the general distribution of misinformation on Facebook.

"I would trust Mark on this," de Alfaro said in an email to The Register. "I know that on Wikipedia, on which I worked in the past, explicit vandalism is about 6 or 7 per cent (or it was some time ago)."

More significant than the percentage of fake news, de Alfaro suggested, is the impact of hoaxes on people.

"For instance, suppose I read and believe 10 run-of-the-mill pieces of news, and one outrageous hoax: which one of these 11 news [stories] will have the greatest impact on me?" he said. "Hoaxes are frequently harmful due to the particular nature of their crafted content. You can eat 99 meatballs and 1 poison pill, and you still die."

Machine learning techniques are proving to be effective, de Alfaro suggested, but people still need to be involved in the process.

"In our work, we were able to show that we can get very good automated results even when the oversight is limited to 0.5 per cent of the news we classify: thus, human oversight on a very small portion of news helps classify most of them."

Asked whether human oversight is always necessary for such systems, de Alfaro said that was a difficult question.

"To some level, I believe the answer is yes, because even if you use machine learning in other ways, you need to train the machine learning on data that has been, in the end, selected by some kind of human process," he said. "We are developing in my group at UCSC, and together with the other collaborators, a series of tools and apps that will enable people to access our classifiers, and we hope this might have an impact."

For Facebook, and the depressingly large number of people who rely on it, such tools can't come soon enough. ®
