Meta Gets Rid of Fact Checkers, Says It Will Reduce ‘Censorship’


New York (CNN) —

In a series of sweeping changes that will significantly alter the way posts, videos and other content are moderated online, Meta will adjust its content review policies on Facebook and Instagram, getting rid of fact-checkers and replacing them with user-generated “community notes,” similar to Elon Musk’s X, CEO Mark Zuckerberg announced Tuesday.

The changes come just before President-elect Donald Trump takes office. Trump and other Republicans have criticized Zuckerberg and Meta for what they see as censorship of right-wing voices.

“Fact checkers have been too politically biased and have destroyed more trust than they have created,” Zuckerberg said in a video announcing the new policy Tuesday. “What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and that’s gone too far.”

However, Zuckerberg acknowledged a “tradeoff” in the new policy, noting that more harmful content will appear on the platform as a result of the content moderation changes.

Meta’s newly appointed Chief of Global Affairs Joel Kaplan told Fox on Tuesday that Meta’s partnerships with third-party fact-checkers were “well-intentioned at first, but there’s just been too much political bias in what they choose to fact-check and how.”

The announcement comes amid a broader apparent ideological shift to the right within Meta’s top ranks, and as Zuckerberg seeks to improve his relationship with Trump before the president-elect takes office later this month. Just a day earlier, Meta announced that Trump ally and UFC executive Dana White would join the board, along with two other new directors. Meta has also said it will donate $1 million to Trump’s inaugural fund and that Zuckerberg wants to take an “active role” in technology policy discussions.

Kaplan, a prominent Republican who was elevated to the company’s top policy job last week, acknowledged that Tuesday’s announcement is directly related to the changing administration.

He said there is “no doubt there has been a change over the past four years. We saw a lot of societal and political pressure, all towards more content moderation, more censorship, and we’ve got a real opportunity. Now we have a new administration and a new president who are great defenders of free speech, and that makes a difference.”

The moderation changes mark a stunning turnaround in how Meta handles false and misleading claims on its platforms.

In 2016, the company launched an independent fact-checking program in the wake of allegations that it had failed to stop foreign actors from exploiting its platforms to spread disinformation and sow discord among Americans. In the years since, it has continued to struggle with the spread of controversial content on its platforms, such as misinformation about elections, anti-vaccine content, violence and hate speech.

The company built security teams, introduced automated programs to filter out or reduce the visibility of false claims, and introduced a kind of independent high court for difficult moderation decisions, known as the Oversight Board.

But now Zuckerberg is following in the footsteps of fellow social media leader Musk, who dismantled the company’s fact-checking team after acquiring X, then known as Twitter, in 2022 and made user-generated context labels called community notes the platform’s only method of correcting false claims.

Meta says it is ending its partnership with third-party fact-checkers and introducing a similar community notes feature.

“I think Elon has played an incredibly important role in shifting the debate and getting people to focus on free speech again, and it’s been really constructive and productive,” Kaplan said.

The company also plans to adjust its automated systems that scan for policy violations, which it says have resulted in “too much content being censored that shouldn’t have been.” The systems will now focus on checking only for illegal and “serious” offenses such as terrorism, child sexual exploitation, drugs and fraud. Other concerns must be reported by users before the company evaluates them.

Zuckerberg said Tuesday that Meta’s complex systems for moderating content have mistakenly resulted in too much non-violating content being removed from the platform. For example, if the systems get something wrong 1% of the time, that could affect millions of the company’s more than 2 billion users.

“We’ve reached a point where it’s just too many mistakes and too much censorship,” Zuckerberg said.

But Zuckerberg acknowledged that the new policy could create new problems for content moderation.

“The reality is this is a trade-off,” he said in the video. “This means we’ll catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts we accidentally remove.”

The company is also getting rid of content restrictions on certain topics, such as immigration and gender identity, and rolling back limits on how much policy-related content users see in their feeds.

As part of the changes, Meta will move its trust and security teams responsible for content policies from California to Texas and other US locations. “I think it will help us build trust to do this work in places where there’s less concern about our team’s bias,” Zuckerberg said.

This is a developing story and will be updated.