Facebook, Google slammed for 'commercial prostitution'
MPs accuse social media firms of profiting from hate
Google, Twitter and Facebook were hauled over the coals by MPs yesterday in a select committee hearing where they were accused of "having no shame" and engaging in "commercial prostitution".
The hearing was on the topic of online hate and what the social media/advertising platforms are doing to combat its proliferation online.
Labour MP Chuka Umunna noted that last year Google made $34bn in operating profit, adding that a person who posts a video makes $7.6 per 1,000 views.
He said: "Supporters of Isis have been posting videos and tagging the option of making money from ads alongside videos, which makes them and you money."
Peter Barron, vice president of communications and public affairs at Google in Europe, the Middle East and Africa, said the firm has "no interest in making money from hate" but added "I do not deny it has happened [in the past]."
Umunna said: "There are not many business activities where someone would openly give evidence to a committee that they are making, and the people who use their platform are making, money out of hate… you as an outfit are not working nearly hard enough to deal with that."
Senior Labour MP David Winnick said: "What came to mind... when it came to the amount of money made, the millions of dollars, the thought that came to my mind is it's a form of commercial prostitution. I think that's a good and apt description."
He added: "I would be ashamed, absolutely ashamed to earn my money in the way in which you three do."
However, Nick Pickles, senior public policy manager for UK and Israel at Twitter, said it was just not possible to pre-emptively block posts.
He said: "Let's be absolutely clear. We are never going to get to a point where internet companies pre-moderate content because for the 400 hours of YouTube going up every day, for the 500 million tweets that go up every day, if you want pre-moderation of internet platforms, there may well be no internet platforms."
All three pointed to their policy of responding once content has been flagged as in breach of their community policies.
Committee chair Yvette Cooper said: "You all have a terrible reputation among users for dealing swiftly with problem[s] in content even against your own community standards.
"So surely when you have such a good reputation among advertisers for doing all kinds of sophisticated things in your platform, surely you ought to do a better job [at responding to hateful posts]."
All the companies said they were working hard to improve the way they responded to complaints about objectionable posts.
Simon Milner, policy director for the UK, Middle East and Africa at Facebook, pointed to the firm's use of PhotoDNA, which allows it to scan every uploaded image against a known database of child sex abuse images.
He said the firm was working on a similar collaboration with colleagues at Google, Twitter and Microsoft to develop a comparable system for the most extreme forms of terrorist content. ®