True: Fact checkers tend to agree on validity of news claims, researchers say

Researchers from the Penn State College of IST studied the practices used by fact-checking organizations to assess the validity of news claims. Credit: Vitalii Vodolazskyi/Adobe Stock. All Rights Reserved.

UNIVERSITY PARK, Pa. — The use of fact-checking services spikes during major news events. Fortunately, the fact checkers have generally agreed in their assessments of whether news claims are true or false, according to researchers from the Penn State College of Information Sciences and Technology (IST).

In their work, which appeared in the Harvard Kennedy School Misinformation Review in October, the researchers studied the practices used by fact-checking organizations to assess the validity of news claims. They measured the consistency of legitimacy ratings across four popular fact-checking platforms: Snopes, PolitiFact, Logically and the Australian Associated Press FactCheck.

“Half of U.S. adults regularly get their news from social media like X, Facebook, Instagram and TikTok,” said Sian Lee, doctoral student in the College of IST and first author of the research article. “But social media platforms generally do not check the legitimacy of headlines and content the way traditional news outlets do, and this can result in the spread of misinformation — fake news — that misleads and harms people and society.”

But social media sites appear to be addressing this lack of vetting, according to the researchers. During newsworthy events, such as the COVID-19 pandemic and the 2020 U.S. presidential election, the platforms have increasingly turned to fact checkers to assess the validity of the news in their feeds and to mitigate the spread of fake news online.

“Fact checking is complex and multifaceted and involves numerous variables,” said Aiping Xiong, assistant professor in the College of IST and co-principal investigator on the project. “Currently, fact-checking is often done by humans. As fact checkers aim to get closer to the truth, they may select and verify different events or see different things when looking at the same event.”

When multiple fact-checking organizations consistently agree on the accuracy of a statement, the public is more likely to trust their assessments, said Dongwon Lee, professor in the College of IST and principal investigator on the research project.

“As the next U.S. presidential election approaches, we wanted to understand how fact checkers operate and if, when or why they differed,” he said. “However, so far, there has not been a large-scale data-driven study to answer such a question.”

The researchers examined more than 24,000 fact-checking articles published from Jan. 1, 2016, to Aug. 31, 2022. They developed automatic methods to collect articles from the fact-checking platforms and to compare the similarity of the claims in those articles. Using this approach, they identified 749 potentially matching claims — claims examining the same information — between Snopes and PolitiFact. Of these matching claims, 228 received differing ratings from Snopes and PolitiFact on how true the information was.
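The article does not describe the matching method itself. As a rough illustration only, one common approach is to pair claims whose text representations are highly similar; the sketch below uses TF-IDF vectors and cosine similarity from scikit-learn, with an arbitrary cutoff. The library choice and threshold are assumptions, not details from the study.

    # Illustrative sketch only: pair claims by TF-IDF cosine similarity.
    # The study's actual matching algorithm is not described in this article.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    snopes_claims = [
        "Five people died during the Jan. 6, 2021, U.S. Capitol riot",
    ]
    politifact_claims = [
        "Only one person died on that day during the Jan. 6 U.S. Capitol riot",
    ]

    # Fit a single vocabulary over both collections so the vectors are comparable.
    vectorizer = TfidfVectorizer().fit(snopes_claims + politifact_claims)
    similarity = cosine_similarity(
        vectorizer.transform(snopes_claims),
        vectorizer.transform(politifact_claims),
    )

    THRESHOLD = 0.5  # arbitrary cutoff for "potentially matching"
    for i, row in enumerate(similarity):
        for j, score in enumerate(row):
            if score >= THRESHOLD:
                print(f"Possible match ({score:.2f}): "
                      f"{snopes_claims[i]!r} <-> {politifact_claims[j]!r}")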

To investigate the reasons for these discrepancies, the researchers manually examined the 228 cases and found that some of the diverging ratings resulted from minute differences in the granularity of the rating systems. Snopes uses a five-point scale — True, Mostly True, Mixture, Mostly False and False — along with additional categories such as Outdated, Miscaptioned and Satire. PolitiFact uses a six-point “Truth-O-Meter” that includes True, Mostly True, Half True, Mostly False, False and Pants On Fire.
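To see how scale granularity alone can make fact checkers appear to disagree, consider collapsing both scales onto a shared three-way verdict. The mapping below is hypothetical and offered purely for illustration; the article does not specify how the researchers reconciled the two scales.

    # Hypothetical coarse mapping of both rating scales onto a shared verdict.
    # The researchers' actual reconciliation rules are not given in this article.
    SNOPES_TO_COARSE = {
        "True": "true", "Mostly True": "true",
        "Mixture": "mixed",
        "Mostly False": "false", "False": "false",
    }
    POLITIFACT_TO_COARSE = {
        "True": "true", "Mostly True": "true",
        "Half True": "mixed",
        "Mostly False": "false", "False": "false", "Pants On Fire": "false",
    }

    def coarse_agreement(snopes_rating: str, politifact_rating: str) -> bool:
        """Return True when both ratings fall in the same coarse bucket."""
        return SNOPES_TO_COARSE[snopes_rating] == POLITIFACT_TO_COARSE[politifact_rating]

    # "False" and "Pants On Fire" differ in granularity but not in coarse verdict.
    print(coarse_agreement("False", "Pants On Fire"))  # True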

Other divergent ratings resulted from the timing of the fact check or the specifics of the claim being assessed. For example, Snopes rated the claim “Five people died during the Jan. 6, 2021, U.S. Capitol riot” as True, while PolitiFact rated the claim “Only one person died on that day during the Jan. 6 U.S. Capitol riot” as False. The algorithm used by the researchers identified these as matching claims, but the detailed numbers — “five” versus “only one” — differed, resulting in disagreement between the fact checkers’ conclusions.

When the researchers accounted for these differences across the 228 matching claims with divergent ratings, they found only one instance where Snopes and PolitiFact truly disagreed: whether 2016 presidential candidate Ben Carson said, “Anyone caught involved in voter fraud should be immediately deported and have his citizenship revoked.”

According to the researchers, Snopes interpreted “anyone” to mean “illegal immigrants,” and rated the claim that Carson made the statement Mostly True. PolitiFact, however, interpreted “anyone” to mean “any American” and rated the claim Mostly False.

“In the end, we found only one case of a conflicting rating, which suggests that, by and large, Snopes and PolitiFact have established consistent and reliable fact-checking practices,” Sian Lee said. “We believe this enhances the credibility of fact checkers in the eyes of the public.”

Haeseung Seo, doctoral student in the College of IST and contributing author, said that the findings of this study validate the fact-checking practices of social media platforms.

“Ultimately, this work contributes to the promotion of truth and the prevention of the spread of misinformation on social media,” Seo said.

The Penn State Social Science Research Institute and the National Science Foundation partially supported this research.

Last Updated February 1, 2024