
Oops! Facebook outed its antiterror cops whilst they banned admins

C'mon, what were they expecting? Privacy? On Facebook?

Facebook last year introduced a bug in its content moderation software that exposed the identities of the workers who police content on the social network to the very people being policed, raising the possibility of retribution.

"Last year, we learned that the names of certain people who work for Facebook to enforce our policies could have been viewed by a specific set of Group admins within their admin activity log," a Facebook spokesperson told The Register in an email. "As soon as we learned about this issue, we fixed it and began a thorough investigation to learn as much as possible about what happened."

In October, Facebook added an activity log to Groups, visible to a Group's admins. When someone in a Group is promoted to admin, Facebook's software creates a notification – a "Story" in Facebook parlance – that gets posted to that log.

When Facebook workers ban a Group admin for a terms-of-service violation, such as posting a beheading video or other disallowed content, that action isn't supposed to be logged. But a bug in the content moderation software recorded the removal of the banned admin's original promotion Story, along with information identifying the Facebook moderator who took the action.

Remaining admins in that Group who chose to look at the activity log could thus learn the identity of the person watching over them.
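For illustration, here is a minimal sketch, in Python, of the class of logging mistake described above. The names (ActivityLog, ban_admin and so on) are invented for this example and bear no relation to Facebook's actual code; the point is simply that an audit entry meant for an internal trail also landed in a log that remaining Group admins could read, moderator name and all.

# Hypothetical sketch only – illustrative names, not Facebook's real code.

class ActivityLog:
    """A log of entries. The Group-facing instance is readable by every remaining Group admin."""
    def __init__(self):
        self.entries = []

    def record(self, entry):
        self.entries.append(entry)

def ban_admin(group_log, internal_audit_log, banned_admin, moderator):
    # Intended behaviour: the removal is written only to the internal
    # audit trail, which Group admins cannot see.
    internal_audit_log.record({
        "action": "admin_promotion_story_removed",
        "target": banned_admin,
        "actor": moderator,   # acceptable here: internal-only record
    })

    # The bug, as described: the same event, actor name included, also
    # lands in the Group-facing activity log, so any surviving admin
    # browsing that log sees who banned their colleague.
    group_log.record({
        "action": "admin_promotion_story_removed",
        "target": banned_admin,
        "actor": moderator,   # leak: moderator identity exposed to admins
    })

group_log = ActivityLog()
audit_log = ActivityLog()
ban_admin(group_log, audit_log, "banned_admin_name", "moderator_name")
# Any remaining admin reading group_log.entries now sees the moderator's name.

In this framing the fix is an access-control change: either the Group-facing entry is never written for moderation actions, or it is written without the actor field – which is broadly what a "permission issue" patch amounts to.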

The security flaw, essentially a permission issue, was introduced in mid-October, identified in early November, and fixed two weeks later. It affected roughly 1,000 Facebook workers – a mix of employees and contractors – across almost two dozen departments that use the company's content moderation software.

Among them were around 40 who worked in the counter-terrorism division at Facebook's office in Dublin, Ireland.

The Guardian, which first reported the programming cockup, claims Facebook singled out six of its workers for special attention because Group admins with possible ties to terror groups may have viewed their information. It recounts the case of an Iraqi-born Irish citizen who fled Dublin for fear of reprisal after he discovered that seven people he had banned – individuals associated with a terror group – had viewed his profile.

Facebook insists that it has seen no evidence of a credible risk to its workers, which assumes you don't count content so horrific that it demands psychological support and counseling.

"Our investigation found that only a small fraction of the names were likely viewed, and we never had evidence of any threat to the people impacted or their families as a result of this matter," Facebook's spokesperson said. "Even so, we contacted each of them individually to offer support, answer their questions, and take meaningful steps to ensure their safety."

Facebook's technical fix, according to the company spokesperson, involves creating administrative accounts that are not tied to workers' personal Facebook accounts, since exposing personal details represents a security risk. And the social network, which for years has hectored users to share more information, has made changes to its infrastructure to keep workers' personal details private.

Coincidentally, Facebook on Thursday announced a plan to police its platform with savvy software. ®
