Facebook removes about as many fake accounts as it has actual monthly users (yes, billions) in effort to clean up online

Social ad biz details effort to cleanse community

Analysis Just as the US Food and Drug Administration permits up to 9 mg of rodent excreta per kilogram of wheat, and the Environmental Protection Agency allows 0.15 μg/m³ of lead in the air averaged over three months, Facebook expects toxic content will always be a part of its service.

When CEO Mark Zuckerberg addressed the issue in a post last November, he said the antisocial network's content-cleansing efforts would never be perfect but could be expected to improve. He said as much again during a conference call with journalists on Thursday, noting that it's impossible to build a content moderation system that works everywhere.

The media call focused on the US web titan's third Transparency Report, which offers more information about awful content than ever before. After Zuckerberg's overview, Guy Rosen, VP of integrity for the social ad biz, likened the company's role to that of an environmental regulator.

That's how the company describes its Prevalence metric. "When measuring air quality, environmental regulators look to see what percent of air is nitrogen dioxide to determine how much is harmful to people," the company explained in a blog post on Thursday. "Prevalence is the internet’s equivalent — a measurement of what percent of times someone sees something that is harmful."
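For the technically minded, here's a rough sketch of how a sampled-views prevalence estimate might be computed. This is our illustration, not Facebook's code: the function names and toy data are invented, and in practice the sampled views are labelled by human reviewers rather than read off a ready-made flag.

```python
import random

def estimate_prevalence(views, sample_size, is_violating):
    """Estimate the share of content views that contained violating material
    by labelling a uniform random sample of views."""
    sample = random.sample(views, min(sample_size, len(views)))
    flagged = sum(1 for view in sample if is_violating(view))
    return flagged / len(sample)

# Toy data: simulate a million views in which roughly 0.03 per cent involve bad content
views = [{"violating": random.random() < 0.0003} for _ in range(1_000_000)]
prevalence = estimate_prevalence(views, 10_000, lambda v: v["violating"])

# Express the estimate the way Facebook's report does: views per 10,000
print(f"Roughly {prevalence * 10_000:.1f} in every 10,000 views contained violating content")
```

The point of sampling views rather than posts is that prevalence weights content by how often it is actually seen, which is why Facebook compares it to measuring pollutants in the air rather than counting smokestacks.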

Liability

Publishers like The Register strive to keep typos and factual errors to a minimum. Lacking the same sort of legal liability, Facebook nonetheless has taken it upon itself – after public prodding and political threats – to limit users' exposure to murder videos, child exploitation images, terrorist propaganda, hate speech, harassment, drug sales and related ills. It's a difficult task, which is why Zuckerberg keeps asking the government to provide guidance.

Hewing to Zuckerberg's approach of surrendering but still scrubbing, the company's report admits that no one is sure how much hate speech pollutes the social network's air. The report says Facebook is working to develop a global Prevalence metric that accounts for cultural context and language nuance.

Even so, the biz took action on four million pieces of hateful content in Q1 2019, up from 3.3 million the previous quarter. Its systems proactively flagged 65.4 per cent of that bile, up from 58.8 per cent in Q4 2018, thanks to investments in AI and a growing pool of content moderators. That's significant because proactively flagged content gets removed before anyone sees it.

Facebook reversed about 150,000 of those hate speech removals: 21,200 on its own initiative and 130,000 following appeals from those who posted the content.

Prevalence measurements also eluded Facebook for bullying and harassment, spam and posts about regulated goods (drugs and firearms).

Removal of child exploitation images declined to 5.4 million pieces of content in Q1 2019, from 6.8 million the previous quarter. In part that's a reflection of directions to content moderators to focus on other areas; there was also a bug, now resolved we're told, that Rosen said prevented new file hashes from being added to Facebook's detection system.

In terms of prevalence, fewer than three in every 10,000 views on Facebook involved child exploitation content. The same figure holds for terrorist propaganda. For adult nudity and sexual activity, 12 to 14 out of every 10,000 views contained said stuff during Q1 2019, and 23 to 25 out of every 10,000 views contained graphic violence.

Fake views!

More than 2.2 billion fake accounts were removed in the quarter, an amount almost equal to the number of monthly active users claimed by the site. That's a billion more than the previous quarter, a surge attributed to increased automated attacks. Facebook insists it removed most of these within minutes of creation, so they were never counted in the metrics used to report usage or set ad rates.

Rosen estimated that fake accounts represented about 5 per cent of Facebook's worldwide monthly active users in Q4 2018 and Q1 2019, up from the company's estimate of between 3 and 4 per cent during Q2 and Q3 2018.

While the Facebook execs on the call praised the impact of AI improvements on the company's proactive detection capabilities, they also acknowledged technology's limits. AI is "not a silver bullet," said Rosen, noting that people need to be involved too.

Toward that end, Justin Osofsky, VP of global operations, highlighted the pay increases announced for content review contractors last week: now $22 an hour in the San Francisco Bay Area, New York and Washington, DC, $20 an hour in Seattle, and $18 an hour in other US metropolitan areas. Incidentally, that's about half what a Facebook technical intern gets paid.

Moderators will also be provided with on-site counseling at all hours – viewing videos of beheadings and kitten crushing all day takes a psychological toll.

"Content review at this scale is no easy feat," said Osofsky. ®
