Meta CEO Mark Zuckerberg has said that his company, which encompasses Facebook and Instagram, will be ditching “fact checkers” because they are “too politically biased” and have created more mistrust than confidence among platform users.
In a video posted today, the tech billionaire said that the recent US elections “feel like a cultural tipping point towards once again prioritising freedom of expression”.
In place of the fact checkers, Meta will follow the example of X’s community notes feature, which allows users to add context to, or refute, claims made in posts.
Meta says this new approach, which will start in the US, “will allow more speech by lifting restrictions on some topics that are part of mainstream discourse and focusing our enforcement on illegal and high-severity violations” and that the company “will take a more personalized approach to political content, so that people who want to see more of it in their feeds can.”
Meta says that when it launched its “independent fact checking program” in 2016, it was “very clear” that it did not wish to be “the arbiters of truth”. However, critics have long accused fact checkers on the Zuckerberg-owned platforms of political bias and of efforts to assert narrative control in areas such as Covid-era online discourse.
In spite of the criticism, Meta insists that the intention of the program “was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.”
Meta now admits that this is “not the way things played out”, emphasizing the point with regard to the way fact checkers have behaved in the United States.
“Experts, like everyone else, have their own biases and perspectives,” it said, adding that this “showed up in the choices some made about what to fact check and how.”
“Over time we ended up with too much content being fact checked that people would understand to be legitimate political speech and debate. Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor,” it said.
Meta says that it has seen community notes “work” on rival platform X, saying Musk’s approach serves to “empower their community to decide when posts are potentially misleading and need more context” and that “people across a diverse range of perspectives decide what sort of context is helpful for other users to see”.
“We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing – and one that’s less prone to bias,” it said.
The company says it will also change how it enforces policies in order to “reduce the kind of mistakes that account for the vast majority of the censorship on our platforms”.
Until now, the company says, it has been using automated systems to scan for all policy violations, an approach that “has resulted in too many mistakes and too much content being censored that shouldn’t have been.”
Meta says it will continue to focus these systems on tackling “illegal and high-severity violations”, such as terrorism, child sexual exploitation, drugs, fraud and scams.
For less severe policy violations, it will now “rely on someone reporting an issue before we take any action”.
It says that its systems will be “tuned” to require “a much higher degree of confidence before a piece of content is taken down.”
“As part of these changes, we will be moving the trust and safety teams that write our content policies and review content out of California to Texas and other US locations,” it said.