Europe taps Facebook, Google, Twitter on the shoulder. So about those promises to stamp out lies, bots, dodgy ads?
Internet giants hand in their homework – here's a summary
The European Commission last month announced an Action Plan to counter the disinformation poisoning political discourse and disrupting democratic norms on social ad platforms. As part of the plan, online services and ad-selling groups were asked to provide updates on their commitment to regulate themselves under the Code of Practice on Disinformation, which the platform companies agreed to in October 2018.
The goal of the code is to deny ad revenue to those spreading lies, to enable disclosure of political and issue advertising, to create clear policies on identities and bots, to close fake accounts, to provide information evaluation tools, and to provide researchers with data about disinformation while respecting privacy.
Shocking though it may be to imagine something's wrong on the internet, it does happen occasionally. In 2017, Google's report says, the company banished 3.2bn ads, blocked 2m pages a month from its publisher network, terminated 320,000 publishers, and blacklisted 90,000 websites and 700,000 mobile apps.
Facebook says it took down 800m and 754m fake accounts in Q2 and Q3 2018 respectively. Twitter says it suspended 1,432,000 applications for serving spammy tweets.
The concerned parties, wary of the imposition of actual laws with meaningful penalties, signed on and have now provided an account of steps they're taking to distribute less destabilizing propaganda.
"Signatories have taken action, for example giving people new ways to get more details about the source of a story or ad," said Andrus Ansip, European Commissioner for the Digital Single Market, in a statement.
"Now they should make sure these tools are available to everyone across the EU, monitor their efficiency, and continuously adapt to new means used by those spreading disinformation. There is no time to waste."
And yet the work has just begun. The Commission summarized the submitted self-evaluations as works in progress. Facebook, it says, has made some headway but needs to provide more clarity about how it will help users recognize misleading content.
Google meanwhile has subjected its ads to more scrutiny but needs to make its tools more widely available. Twitter has taken steps to rein in malicious users and fake accounts but needs to provide more details about how it intends to prevent "persistent purveyors of disinformation from promoting their tweets."
And Mozilla – which doesn't really belong alongside the ad slingers – gets credit for adding tracking protection to its Firefox browser but needs to provide more details about how its interventions will help prevent users' browsing activities from being weaponized against them.
The Commission also credits ad groups with making members aware of the Code, but calls out the absence of corporate signatories, citing the need to involve brands and advertisers in the effort to starve misinformers of money.
The Commission appears to be overly charitable in its assessment of the willingness of platform companies to participate in their own policing. On Monday, ProPublica reported that Facebook, under the pretense of user protection, has taken steps to disable software tools developed to allow researchers to analyze its advertisements. The result is that it will be more difficult for users to understand how they were targeted by Facebook ads.
The very same day Facebook issued a press release touting the efforts of its now 30,000-strong content cleansing crew. The company insists it's "bringing unprecedented transparency to political advertising," even as it takes steps to make its ad targeting more opaque. ®