State media regulator Coimisiún na Meán has adopted a finalised ‘Online Safety Code’, which sets binding rules for video-sharing platforms based in Ireland.
The code obligates video-sharing platforms under the jurisdiction of the State “to protect people, especially children, from harmful video and associated content”, with non-compliance punishable by fines of up to €20 million or 10% of a platform’s annual turnover, whichever is greater.
An Coimisiún is to take a supervisory approach to enforcement, ensuring that platforms implement systems to comply with the code’s provisions.
“The Code applies to video-sharing platform services, many of which are household names and services we use every day. It requires these platforms to restrict certain categories of video and associated content, so that users cannot upload or share the most harmful types,” the code states.
“The restricted categories include cyberbullying, promotion of eating and feeding disorders, promotion of self-harm and suicide, dangerous challenges, and incitement to hatred or violence on a range of grounds including gender, political affiliation, disability, ethnic minority membership, religion and race. Restrictions also include criminal content such as child sex abuse material, terrorism, racism and xenophobia.”
From next month, the general obligations contained in the code will apply, while platforms will have a nine-month implementation window to come into compliance with certain detailed provisions that require an “IT build”.
Platforms will be required to use “age assurance” to prevent children from encountering pornography or gratuitous violence online, as well as to have age verification measures in place as appropriate. Parental controls will also have to be provided for content that may “impair the physical, mental, or moral development of children under 16”.
Ten video-sharing platforms were designated in December 2023: Facebook, Instagram, YouTube, Udemy, TikTok, LinkedIn, X, Pinterest, Tumblr and Reddit.
However, the code will be applied to only nine of these platforms, with the regulator saying that it has not yet reached a determination for Reddit.
This comes after both Tumblr and Reddit took unsuccessful High Court actions against Coimisiún na Meán over the designations, with both platforms arguing that they weren’t suitably characterised as “video-sharing”.
Online Safety Commissioner at Coimisiún na Meán, Niamh Hodnett, said that the adoption of the code “brings an end to the era of social media self-regulation”.
“The Code sets binding rules for video-sharing platforms to follow in order to reduce the harm they can cause to users. We will work to make sure that people know their rights when they go online and we will hold the platforms to account and take action when platforms don’t live up to their obligations,” she said.
Executive Chairperson Jeremy Godfrey said that, with all the elements of Coimisiún na Meán’s Online Safety Framework now in place, the regulator’s focus “is on fully implementing the Framework and driving positive changes in people’s lives online”.
“Our message to people is clear: if you come across something you think is illegal or against a platform’s own rules for what they allow, you should report it directly to the platform. Our Contact Centre is available to provide advice and guidance to people if they need help.”
Meanwhile, Minister for Media Catherine Martin welcomed the adoption of the code, describing it as “a major step forward in online safety”.
“It introduces real accountability for online video sharing platforms and requires them to take action to protect those that use their platforms, including by having robust complaints handling procedures and introducing effective age-verification. It will make all of us, but particularly our children, safer online,” Minister Martin said.
The code comes as part of Coimisiún na Meán’s ‘Online Safety Framework,’ which the regulator states makes digital services “accountable for how they protect users from harm online”.
The framework gives Coimisiún na Meán “the tools to address the root causes of harm online, including the availability of illegal content, the harmful impacts of recommender systems, and inadequate protections for children on social media services,” it states.