Should social media companies be responsible for fact-checking their sites?

Meta – the company that owns Facebook, Instagram, Threads and WhatsApp – announced on Jan. 7 that it would end its long-standing fact-checking program, a policy put in place to limit the spread of misinformation across its social media apps.

As part of the content moderation overhaul, the company said it would also drop some of its rules protecting LGBTQ people and others. Mark Zuckerberg, CEO of Meta, explained that the company wanted to “get rid of a lot of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse.”

Do you use any of the apps Meta owns? What is your reaction to these changes?

In the Jan. 8 edition of The Morning, Steven Lee Myers, who covers misinformation and disinformation for The New York Times, explains the impact of the new policy:

Policing the truth on social media is a Sisyphean challenge. The volume of content – billions of posts in hundreds of languages – makes it impossible for the platforms to identify all the errors or lies that people post, let alone remove them.

Yesterday, Meta – the owner of Facebook, Instagram and Threads – effectively stopped trying. The company said independent fact-checkers would no longer monitor content on its sites. The announcement marked an industry-wide retreat in the fight against falsehoods that poison public discourse online.

Mark Zuckerberg, Meta’s CEO, said the new policy would mean fewer cases where the platforms “accidentally” remove posts that are mistakenly flagged as fake. The trade-off, he acknowledged, is that more “bad stuff” will contaminate the content we scroll through.

It’s not just an annoyance when you open Facebook on your phone. It also eats away at our civic lives. Social media apps — where the average American spends more than two hours a day — have made the truth, especially in politics, a matter of toxic and inconclusive debate online.

What could this mean for users? The article explores some potential outcomes:

Meta is not entirely disclaiming responsibility for what appears on its platforms. It will still remove posts involving illegal activity, hate speech and pornography, for example.

But like other platforms, it is stepping back from policing political speech in order to maintain market share. Elon Musk bought Twitter (now called X) with a promise of unfettered freedom of expression. He also invited back users who had been banned for bad behavior, and he replaced content moderation teams with crowdsourced “community notes” appended below contentious content. YouTube made a similar change last year. Now Meta is adopting the model as well.

Several studies have shown the proliferation of hateful, biased content on X. Anti-Semitic, racist and misogynistic posts spiked after Musk took over, as did misinformation about climate change. Users spent more time liking and reposting items from authoritarian governments and terrorist groups, including the Islamic State and Hamas. Musk himself regularly peddles conspiratorial ideas about political issues like migration and gender to his 211 million followers.

Allowing users to weigh the validity of a post – say, one claiming that vaccines cause autism or that no one was hurt in the Jan. 6 attack – shows promise, researchers say. Today, when enough people speak up on X, a note appears below the disputed material. But that process takes time and is susceptible to manipulation, and by the time a note appears, the lie may have gone viral and the damage done.

Maybe people will still yearn for something more reliable. That is the promise of upstarts like Bluesky. What happened at X could be a warning: users and, more importantly, advertisers have fled.

It is also possible that people value entertainment and views they agree with over strict adherence to the truth. If so, the internet may become a place where it’s even harder to separate fact from fiction.

Students, read the entire article and then tell us:

  • What’s your reaction to Meta’s decision to end its fact-checking program on its social media apps?

  • Do you think social media companies should be responsible for fact-checking lies, misinformation, disinformation and conspiracy theories on their sites? Why or why not? How much does it matter if what we see on social media is true?

  • How effective do you think the “community notes” approach, where users leave a fact-check or correction on a social media post, will be in limiting the spread of falsehoods on these sites?

  • Critics of fact-checking programs have labeled some decisions by social media companies to remove posts as censorship. Mr. Zuckerberg said the “trade-off” for unfettered free speech and reducing the number of posts falsely flagged as inaccurate is that more “bad stuff” will appear in our feeds. Is that trade-off worth it, in your opinion?

  • How much time do you spend on social media apps like Instagram, Facebook, TikTok or X? Do you often get information about what is going on in the world from them? Do you expect that information to be correct, or do you do your own fact-checking?

  • Mr. Myers describes the task of trying to fact-check the billions of posts made on Meta’s social media apps as “Sisyphean,” or nearly impossible. But researchers have found such interventions to be quite effective. As Claire Wardle, associate professor of communication at Cornell University, put it: “The more friction there is on a platform, the less spread you have of information of low quality.” Do you think it’s worth trying to curb the distribution of misinformation and disinformation on these sites, or should social media companies just give up?

  • Mr. Myers writes that the “bad stuff” we see on social media is not only an annoyance, but that it “also eats away at our civic lives.” Do you agree? Why or why not? What, if anything, do you think Meta’s changes will mean for you, the communities you belong to, your country and the world?