
Meta Scraps Fact-Checkers, Eases Content Restrictions
Fact-checkers have been consigned to the dustbin of history at Meta.
“We will discontinue the current third-party fact-checking program in the United States and will instead begin transitioning to a Community Notes program,” Joel Kaplan, Meta’s chief global affairs officer, said in a company blog post Tuesday.
Kaplan added that Meta will also address the problem of “mission creep,” which has caused the rules governing the company’s platforms to become too restrictive and prone to over-enforcement.
“We are removing a number of restrictions on topics such as immigration and gender identity that are the subject of frequent political discussion and debate,” he wrote. “It’s not right that something can be said on television or on the floor of Congress, but not on our platforms.”
In addition, Meta will modify the automated systems that scan its platforms for policy violations. “[It] has led to too many errors and the censoring of too much content that should not have happened,” Kaplan wrote.
Going forward, those systems will focus on illegal and severe violations such as terrorism, child sexual exploitation, drugs, fraud and scams, while less serious policy violations will be acted on only after someone reports them.
Meta is also making it harder for content to be removed from its platforms by requiring multiple reviewers to agree before anything is taken down, and it will let users who opt in see more civic content, such as posts about elections, politics or social issues.
Censorship tool
Kaplan explained that when Meta launched its independent fact-checking program in 2016, it didn’t want to be the arbiter of truth, so it handed over the responsibility for checking content to independent organizations.
“The goal of the program was for these independent experts to provide people with more information about what they see on the Internet, especially viral hoaxes, so that they can judge for themselves what they see and read,” he wrote.
“That’s not how things played out, especially in the United States,” he continued. “Experts, like everyone else, have their own biases and points of view. That showed up in the choices some made about what to fact-check and how.”
“Over time, we reached a point where too much content was being fact-checked that people understood to be legitimate political speech and debate,” he said. “Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program designed to inform too often became a tool of censorship.”
David Inserra, a fellow for free expression and technology at the Cato Institute, a Washington, D.C.-based think tank, worked on Facebook’s content policy team and said the fact-checking program suffered from selection bias. “The only people who signed up to fact-check were people who wanted to moderate content,” he told TechNewsWorld. “People who wanted users to make their own decisions about content didn’t become fact-checkers.”
“My experience with Facebook’s fact-checking has been quite mixed overall,” added Darian Shimy, founder and CEO of FutureFund, a fundraising platform for K-12 schools and PTAs in Pleasanton, California.
“It’s safe to say it added an extra layer of responsibility, but frankly, I found it too slow and inconsistent to keep up with the pace of viral misinformation,” he told TechNewsWorld. “From talking to many people in my circle and reviewing internal research, I’ve found that most people believe using third-party fact-checkers created a perception of bias, which didn’t always help build user trust.”
“This is not a victory for free speech”
Irina Raicu, director of the internet ethics program at the Markkula Center for Applied Ethics at Santa Clara University, noted that a lot of misinformation appeared on Facebook even under the outgoing fact-checking regime.
“Part of the problem was the automation of content moderation,” she told TechNewsWorld. “The algorithmic tools were quite crude and missed the nuances of language and images. And the problem was even more prevalent in posts in languages other than English.”
“With billions of pieces of content published every day, it is simply impossible for fact-checkers to keep up,” added Paul Benigeri, co-founder and CEO of Archive, a New York-based company that develops software to automate e-commerce digital marketing workflows.
“The fact-checking was more of a PR stunt,” he told TechNewsWorld. “It worked sometimes, but it never came close to catching the full volume of misleading posts.”
Meta’s abandonment of its fact-checking system was questioned by Tal-Or Cohen Montemayor, founder and chief executive of CyberWell, a San Francisco-based nonprofit dedicated to combating anti-Semitism on social media.
“While the previous fact-checking system proved to be an ineffective and unscalable way to combat disinformation and misinformation in real time during conflicts and emergencies,” she told TechNewsWorld, “the answer cannot be less accountability and less investment from the platform.”
“This is not a victory for free speech,” she said. “This is trading the human bias of a small, discrete group of fact-checkers for human bias at scale through Community Notes. The only way to prevent censorship and data manipulation by any government or corporation is to impose legal requirements and Big Tech reforms that guarantee transparency and social media reform.”
A flawed community solution
Meta’s replacement for fact-checking, community notes, is modeled on a similar scheme used on X, formerly Twitter. “The good thing about a community-based approach is that it solves part of the problem of scale,” said Cody Buntain, an associate professor in the College of Information Studies at the University of Maryland. “It allows a lot more people to participate in the process and add context.”
“The problem is that while community notes can work at aggregate scale for random pieces of information or random stories that go viral, they are usually not fast enough and get completely overwhelmed by major new events,” he explained.
“We saw this after the attacks in Israel back in October 2023,” he continued. “There were people actively engaged in the process of writing community notes, but Twitter as a platform was simply swamped by the amount of misinformation swirling around that event.”
“When the platforms say, ‘We’re going to wash our hands of it and let the community sort it out,’ that becomes problematic in times like these, when the only ones who can really handle massive influxes of high-speed, low-quality information are the platforms themselves,” he said. “Community notes aren’t really designed to address these issues, and those are the times when you need high-quality information the most.”
“I’ve never been a fan of community notes,” added Karen Kovacs North, clinical professor of communication at the Annenberg School for Communication and Journalism at the University of Southern California.
“People who are willing to weigh in tend to be polarized and passionate,” she told TechNewsWorld. “Average people don’t take the time to comment on a story or piece of content.”
Winning Trump’s favor
Vincent Raynauld, an assistant professor of communication studies at Emerson College, noted that while community moderation sounds great in theory, it has its problems. “Even though content may be labeled as disinformation or misleading, it is still available for people to consume,” he told TechNewsWorld.
“So even if some people see a community note, they can still consume that content, and that content can still influence their attitudes, knowledge and behavior,” he explained.
Alongside Kaplan’s statement, Meta released a video in which CEO Mark Zuckerberg touted the company’s latest moves. “We’re going to get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms,” he said.
“Zuckerberg’s announcement has nothing to do with improving the Meta platforms and everything to do with winning over Donald Trump,” said Dan Kennedy, a professor of journalism at Northeastern University in Boston.
“There was a time when Zuckerberg was concerned about his products being used to spread dangerous misinformation and disinformation about the January 6 insurrection and Covid,” he told TechNewsWorld. “Now Trump is back in office, and one of Zuckerberg’s rivals, Elon Musk, has gone all in on pandering to Trump, so Zuckerberg is simply getting with the program.”
“No fact-checking or moderation system is perfect,” he added, “but if Zuckerberg really cared about it, he would be working to improve it rather than getting rid of it altogether.”
Musk as a trendsetter
Damian Rollison, director of market insights at SOCi, a cloud-based marketing platform headquartered in San Diego, pointed out the irony of Meta’s latest move. “I think it’s safe to say that no one predicted that Elon Musk’s chaotic takeover of Twitter would become a trend other tech platforms would follow, and yet here we are,” he told TechNewsWorld.
“Now, in hindsight, we can see that Musk set the standard for a new, conservative approach of loosened online content moderation, which Meta has now adopted ahead of the incoming Trump administration,” he said.
“This will likely mean that Facebook and Instagram will see an uptick in political speech and posts on controversial topics,” he continued.
“As with Musk’s X, where advertising revenues fell by half, this change could make the platform less attractive to advertisers,” he added. “It could also perpetuate a trend in which Facebook becomes the social network for older, more conservative users and loses Gen Z to TikTok, with Instagram positioned in between.”