YouTube's pedo problem is so bad, it just switched off comments on millions of vids of small kids to stem the tide of vileness
But this moneybags web giant is not a publisher, got that?
YouTube has disabled comments on millions of videos because they were being used by pedophiles to communicate with one another and, allegedly, even link to child abuse videos.
The extraordinary measure is one more sign that the web giant's systems – which are designed to drive views and hence advertising money beyond all else – are fundamentally broken.
"Over the past week, we disabled comments from tens of millions of videos that could be subject to predatory behavior," YouTube said in a blog post on Thursday. "These efforts are focused on videos featuring young minors and we will continue to identify videos at risk over the next few months."
That decision follows an advertiser boycott of the service this month by, among others, AT&T and Fortnite developer Epic Games, after one YouTuber posted an angry rant about how the comments sections on otherwise innocuous videos of young children were being used to post inappropriate remarks.
Matt Watson posted a 20-minute video outlining what he claimed was "a wormhole into a soft-core pedophilia ring on YouTube." In the video he ran through clips of young children doing everyday innocent activities like yoga or eating ice creams, and highlighted inappropriate comments, as well as links inside the comments section that jump to specific points in the videos where the child could be viewed as being in a compromising position.
While that is unpleasant, it is ambiguous and not quite illegal. However, Watson claims to have witnessed pedophiles sharing contact information, and even to have followed links posted in the comments section to child abuse images. He provided no proof of those claims, though.
Despite the lack of evidence, the video's viral attention caused advertisers to pull their ads from the service. Within days, the internet goliath publicly pledged it would fix the problem.
Soon after, The National Center on Sexual Exploitation investigated this cleanup, and said not enough was being done: YouTube was removing some comments, yet leaving the vids online to further monetize them – videos, we're told, that contained details on how to get in touch with the children featured, and how to network with like-minded perverts.
“Within two clicks, I was able to enter into a rabbit hole of videos where children are being eroticized by pedophiles and child abusers,” said Haley Halverson, veep of advocacy and outreach at the US-based center.
“The content became more flagrantly sexualized the more I clicked, as the YouTube recommendation algorithm fed me more and more videos with hundreds of thousands, and sometimes millions, of views.
“Despite YouTube’s claims to be cleaning up this content, YouTube so far still continues to monetize videos that eroticize young children and that serve as hubs for pedophiles to network and trade information and links to more graphic child pornography.
"YouTube is putting these children at risk by not removing these videos."
Fast forward to today, and YouTube has simply shut down comments on virtually all videos featuring young children, highlighting the extreme sensitivity of the issue.
Show me the money
However, in the same blog post, YouTube continued to demonstrate the very culture that critics say is behind the problem: a drive to put advertising and revenue ahead of everything else, and a refusal to acknowledge editorial responsibility over the service that it provides.
"A small number of creators will be able to keep comments enabled on these types of videos," the company stated, adding: "These channels will be required to actively moderate their comments."
It is a fair assumption that these exempt channels are the most popular and hence most profitable outlets on the service.
But more significantly, YouTube's proposed solution to the issue is the same one that Facebook and Google have been promoting for years as the answer to abusive content and fake news – and one that continues to fall far short of effective moderation: artificial intelligence.
"While we have been removing hundreds of millions of comments for violating our policies, we had been working on an even more effective classifier, that will identify and remove predatory comments," YouTube noted.
YouTube, owned by Google, is determined to use (or claim to use) only machine-learning algorithms for filtering content and comments because otherwise it would potentially assume liability for illegal content. For this reason it – and other social media outlets – claim not to be "publishers" in the traditional media context, while continuing to make billions of dollars from content that they publish on their platforms.
Notably, however, YouTube stressed that the new, improved system "does not affect the monetization of your video."
The web giant also fails to address one of critics' biggest concerns: that its algorithms automatically recommend videos similar to the one being watched, something that can create a dangerous loop.
Previously, the Silicon Valley biz was criticized for sending users down a conspiracy-theory wormhole in which watching one video about, say, the Earth being flat leads to another and another. The users who post those videos also tend to view and post other videos with a conspiratorial bent, and that results in YouTube recommending those videos as well.
The same pattern has been observed with racist, sexist, and white nationalist content: an online echo chamber that experts say can be extremely unhealthy. In this case, it was notable that clicking on one video of a young child caused the related videos to become immediately dominated by other videos of young children. The result was a de facto meeting place for people interested in seeing lots of videos whose only connection was that they featured young children, rather than, say, specific topics like favorite movies or activities.
However, for YouTube, this approach is enormously profitable: in the vast majority of cases, it leads a user who might otherwise stop watching after their chosen video has finished to keep watching similar content, boosting advertising figures and profits.
YouTube also adopts a very light regulatory approach to the creation of new accounts and content, as well as to complaints of abuse by accounts. It claims that additional restrictions would limit the service unnecessarily, whereas critics say the lack of controls – or of ones that are enforced – makes it a free-for-all, and leads to the situation where pedophiles can make inappropriate and offensive comments on children's videos and even – most disturbingly – congregate online.
"We will continue to take action when creators violate our policies in ways that blatantly harm the broader user and creator community," the tech giant noted before, yet again, putting the onus on its own users. "Please continue to flag these to us."
Incidentally, you will struggle to find Matt Watson's original video highlighting the issue if you search through YouTube's own system. Why? Because the platform has become overwhelmed with other YouTubers posting angry rants about Watson's angry rant in an effort to get clicks and hence make a cut of the resulting advertising revenue.
And in a reflection of the destructive discourse that social media continues to encourage and reward, the most popular videos are not ones that agree with Watson or dig further into the issue but rather videos that attack him personally. ®