A new report has found evidence of a “rising tide of child abuse content” on social media platforms, with tens of millions of images being removed each year after being flagged as sexual exploitation and child nudity.

Web research and security company Comparitech said it based its findings on the transparency reports of seven of the biggest social media platforms – and that the results highlighted “the growing problem of child abuse content appearing on mainstream websites”.

Among the key findings were the troubling growth of such images on TikTok, which is more popular with younger social media users, and a similar surge on YouTube.

Removals of child abuse content “nearly doubled” between 2019 and 2020 on TikTok, while “YouTube has seen a 169% surge in removals between 2018 and 2020”, the report found. For Facebook, it noted “a modest 3% decrease between 2019 and 2020” in removals of such content.

The scale of the problem was highlighted by the sheer volume of content flagged.

“In 2020 alone, Facebook removed 35.9 million pieces of content flagged under ‘child nudity and sexual exploitation’, according to the social network’s latest transparency report. And Facebook isn’t alone; Instagram, Youtube, Twitter, TikTok, Reddit, and Snapchat combined remove millions of posts and images that fall foul of community guidelines regarding child abuse,” Comparitech noted.

While the popular perception of child pornography was that it was offered only in “shady corners of the internet”, the reality was that “thousands of images and posts containing child abuse, exploitation, and nudity are removed by the biggest names in social media every day,” the report noted.

The researchers cautioned that “transparency reports really only became a trend among social media companies in 2018, so we don’t have a ton of historical data to go by. Furthermore, changes in content moderation policies might skew the numbers.”

Looking at Big Tech efforts to combat child abuse content, Comparitech noted that most were “largely reactive, not proactive”. However, it pointed to Apple, which had begun using a different scanning process to detect such images in users’ cloud storage before they spread further.

“Recently, Apple has started hashing image files on users’ iCloud storage to see if they match those in a law enforcement database of child abuse images. This allows Apple to scan users’ storage for child porn without actually viewing any of the users’ files. Some privacy advocates still take issue with the tactic, and it’s not perfect, but it might be a compromise that other tech companies decide to adopt,” they said.
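The general idea of hash matching can be sketched in a few lines of Python. This is only an illustration of the principle, not Apple’s implementation: Apple’s system reportedly relies on a perceptual hashing scheme (NeuralHash) so that visually similar images still match, whereas the plain SHA-256 digest below only catches exact copies. The database of known hashes and the function names here are hypothetical.

import hashlib
from pathlib import Path

# Hypothetical database of known flagged hashes. The placeholder value is the
# SHA-256 digest of an empty file, used purely so the example is runnable.
KNOWN_FLAGGED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_hash(path: Path) -> str:
    # Compute a SHA-256 digest of the file in chunks, without otherwise
    # inspecting or displaying its contents.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_flagged(path: Path) -> bool:
    # A file is flagged only if its digest matches an entry in the database.
    return file_hash(path) in KNOWN_FLAGGED_HASHES

The appeal of this approach, as the report suggests, is that the service only ever compares digests against a list of known material, rather than looking at the files themselves; Apple’s published design reportedly goes further, using cryptographic protocols so that non-matching files reveal nothing at all.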