UK children's charity: Social media firms rubbish at stopping grooming. Time for a mandatory... AI
Self-regulation has failed, says NSPCC
The National Society for the Prevention of Cruelty to Children (NSPCC) is calling on the British government to force social media companies to use AI to detect suspicious grooming behaviour.
The children's charity wants UK.gov to introduce a raft of measures to regulate social media platforms, with consequences including hefty fines for platforms that fail to comply.
Self-regulation of social networks has failed, it believes: police forces in England and Wales recorded 4,373 offences of sexual communication with a child in the 12 months to April 2019, up from 3,217 the previous year, according to a Freedom of Information request by the charity.
The charity also wants sites to turn off friend suggestion algorithms for children and young people, and for their accounts to default to the highest privacy settings – geo-locators switched off, contact details private and unsearchable, and live-streaming limited to contacts only.
Additionally, it wants platforms to share data with one another to better understand the methods offenders use and to flag suspicious accounts.
Peter Wanless, NSPCC chief executive, said: "It's now clearer than ever that government has no time to lose in getting tough on these tech firms.
"Despite the huge amount of pressure that social networks have come under to put basic protections in place, children are being groomed and abused on their platforms every single day. These figures are yet more evidence that social networks simply won't act unless they are forced to by law. The government needs to stand firm and bring in regulation without delay."
Earlier this month the chancellor announced £30m of new cash to tackle online child harm as part of Spending Review 2019.
OK, but how can we tell your age on the internet?
Alastair Graham, CEO of AgeChecked, said he shared the charity's concerns, but added that the fundamental challenge for social media sites is that none of them knows for certain the age of its users.
"Our own research found that 59 per cent of children have used social media by the age of 10, which will usually require registering with a false date of birth to make them look old enough. This means the website is likely to assume they are an adult years before their real 18th birthday, and they will lose any protection put in place for minors, making them vulnerable to online predators.
"Simple and secure age verification technology already exists to ensure that websites can confirm which of their customers are adults. The implementation of robust age-checking processes should play a critical safeguarding role, as it is the only way that social media platforms can truly identify which of their users are children."
A government spokesman said: "We have taken strong action to tackle this vile abuse, from developing AI tools to identify and block grooming conversations to our online harms white paper, which will place a legal duty of care on social media companies to protect their users." ®