
Recent research: When fact checkers agree

What does it mean when fact-checking journalists review campaign claims and come to the same conclusions?

Michelle Amazeen, an assistant professor of advertising at Rider University in New Jersey and one of API’s fact-checking researchers, takes a look at fact-checking agreement in a recently published study.

Amazeen revisits a study by two University of Miami scholars whose criticisms of journalistic fact-checking included what they deemed media’s “naive” methodology and a failure to adhere to methods used by the scientific community.

But journalism itself is not scientific, Amazeen says. “Does that mean we should do away with news reporting, too?”

“What the study suggests is that we should not be blinded by the criticism that just because fact-checking is not scientific, it holds no societal value,” says Amazeen.

The paper cites examples of clear-cut deceptions that fact-checking has exposed, Amazeen notes. “And when not one, not two, but three different fact-checkers come to the same conclusion about the accuracy of a claim using different evaluation methods, the general public should have more confidence that there’s cause for concern,” she says.

Fact-checker agreement during U.S. presidential campaigns

How often the primary fact-checkers in media reach similar conclusions

                                                 2008                        2012
Fact checkers                         Ads  Claims  Agreement (%)   Ads  Claims  Agreement (%)
FactCheck.org, Fact Checker,
  PolitiFact                            2       6            100    13      17            100
FactCheck.org, Fact Checker             5       9            100    12      49             98
Fact Checker, PolitiFact                1       2            100     8      19             95
FactCheck.org, PolitiFact              18      36             98     6      12            100
TOTAL                                  26      53                   39      97

Data Source: Dr. Michelle Amazeen, Rider University


Beyond agreement on verdicts, it is also noteworthy that multiple fact-checkers agree on which statements are verifiable in the first place. Most statements in a political ad are not factually verifiable at all. For instance, “We can restore America’s leadership in the world” is not something that fact-checkers can check, Amazeen notes.

“Thus, these leading fact-checkers have to be disciplined in carefully selecting the types of claims to check,” she says. “It’s a task that’s fraught with peril.”

Amazeen acknowledges that the authors “do offer some fair criticisms” of current fact-checking practices in U.S. journalism. For instance, she says, fact checkers would benefit from adopting a more systematic and transparent method of claim selection. However, she adds that “as long as unambiguous practices of deception continue, fact-checking has an important role to play in the United States and around the world.”

You can read the full text of Amazeen’s study, “Revisiting the Epistemology of Fact-Checking,” here.

Read more about political fact-checking and accountability research to be released by the American Press Institute this spring.

  • Bryan

    The fact checker agreement table is rubbish, more or less. Amazeen reports a Krippendorff’s alpha of 0.66 for fact-checker agreement. But there’s no acknowledgment that the score reflects very low variability in the ratings (and no explanation for ignoring Krippendorff’s warning against relying on data with a K alpha below 0.667). Amazeen’s classification system was binary, dividing all claims into either true or false. But nearly all the claims were false. It’s easy to find agreement when, for example, both sources rate 19 of 20 claims false. But disagreeing on one of those ratings will net a K alpha in the neighborhood of 0.66.

    “Consider variables with reliabilities between α = .667 and α = .800 only for drawing tentative conclusions.”
    –Klaus Krippendorff
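
    For reference, the standard nominal-data form of the statistic compares observed disagreement with the disagreement expected from the value margins:

        \alpha = 1 - \frac{D_o}{D_e}, \qquad
        D_o = \frac{1}{n} \sum_{c \neq k} o_{ck}, \qquad
        D_e = \frac{1}{n(n-1)} \sum_{c \neq k} n_c n_k

    where o_{ck} counts pairs of ratings in which the same claim received values c and k, n_c is the total number of ratings with value c, and n is the total number of ratings. When nearly all ratings fall into one category, D_e shrinks toward zero, so the ratio D_o/D_e, and with it alpha, hinges on a handful of mismatches.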

  • Matthew

    I think the argument makes sense based on this data. Krippendorff’s alpha produced the .66 instead of a number closer to the maximum of 1.0 because alpha doesn’t just measure the percentage of times ‘coders’ – in this case, fact-checkers – agree (which is extremely high here); it also takes into account the number of fact-checkers, the number of possible judgments about the claims (true/false), the number of cases (claims), and the variability in the coded values (the vast majority of claims were judged false). I’ve had the same experience coding segments of TV shows as color vs. b&w: the vast majority were, and were coded as, color, but the coders missed a few b&w segments, so alpha went way down. That doesn’t change the conclusion that most of TV is in color.
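
    A quick way to see the effect is to compute alpha for a small invented example. The sketch below (plain Python, two coders, nominal data; the numbers are illustrative, not from the study) codes 19 of 20 segments identically as color, with one missed b&w segment, and shows raw agreement staying at 95% while alpha collapses:

        from collections import Counter

        def alpha_nominal(ratings_a, ratings_b):
            # Krippendorff's alpha for two coders, nominal data, no missing values.
            n = 2 * len(ratings_a)                 # total number of ratings
            margins = Counter(ratings_a) + Counter(ratings_b)
            mismatches = sum(a != b for a, b in zip(ratings_a, ratings_b))
            d_o = 2 * mismatches / n               # observed disagreement
            d_e = sum(margins[c] * margins[k]      # disagreement expected by chance
                      for c in margins for k in margins if c != k) / (n * (n - 1))
            return 1.0 - d_o / d_e if d_e else 1.0  # degenerate case: no variation at all

        # Hypothetical batch: 19 of 20 segments coded "color" by both coders,
        # plus one b&w segment that the second coder missed.
        coder_1 = ["color"] * 19 + ["bw"]
        coder_2 = ["color"] * 20
        raw = sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1)
        print(f"raw agreement: {raw:.0%}")                      # 95%
        print(f"alpha: {alpha_nominal(coder_1, coder_2):.2f}")  # 0.00

    With so little variation in the data, the expected disagreement is tiny, so a single mismatch is enough to wipe out alpha even though the coders agreed 95% of the time.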

    • Bryan

      Matthew,

      The key inference from Amazeen’s paper is not that most fact checks find fault. We know that going in: fact checkers, admittedly, are looking for claims with flaws that need exposing. That’s what people want to read.

      Amazeen’s inference is that the coders are accurate because of that agreement. The importance of the low K alpha here is that, in the context of a data set with very low variation (hardly any claims classified as true), it raises a red flag about the accuracy of the coding. And speaking of coding, it’s a big stretch to count fact-checker ratings as coding amenable to a K alpha measure when nearly all the specific distinctions in their ratings are erased: any matching rating that found fault, no matter what the fault, was counted as coder (fact-checker) agreement. That’s not how you train your coders, is it?

      When the alpha for your coders went way down, it reflected a lack of agreement because some coders missed some b&w programming. And it’s key to remember that when there’s little variation in the data, we should expect a high K alpha by default; it’s hard to drop a K alpha much lower than .66 when there’s little variation in the data to begin with. High agreement is guaranteed when there’s little variation in the data.

      Amazeen owed readers more than an outsourced rationale (a footnote pointing to another paper) for accepting such a low K alpha in this study.
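
      To make the point about erased distinctions concrete, here is a small illustration with invented ratings on a PolitiFact-style scale (the labels and values are hypothetical, not data from the study): once every fault-finding label is folded into a single bucket, two checkers who never once chose the same label still register 100% agreement.

          # Invented ratings for illustration; not data from the study.
          FLAWED = {"half true", "mostly false", "false", "pants on fire"}

          checker_a = ["half true", "mostly false", "false", "pants on fire"]
          checker_b = ["pants on fire", "half true", "mostly false", "false"]

          exact = sum(a == b for a, b in zip(checker_a, checker_b)) / len(checker_a)
          folded = sum((a in FLAWED) == (b in FLAWED)
                       for a, b in zip(checker_a, checker_b)) / len(checker_a)
          print(f"exact-label agreement: {exact:.0%}")   # 0%
          print(f"folded agreement:      {folded:.0%}")  # 100%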
