Fact-checking is changing how people do politics. At its best, this reporting makes officeholders, candidates, parties, staff and supporters more cautious about what they say. It also provides independent, explanatory information that partisans use as a frame or point of reference to discuss complex issues.

At the same time, fact-checking can also provide something else for political professionals to fling, deflect or twist — another weapon and a source of partisanship and cynicism.

Just as politics is adapting to fact-checking, some fact-checkers are adapting to the world they cover. Sometimes this adaptation draws on the best elements of journalism. For example, fact-checkers’ bloggy willingness to link to and cite one another’s reporting, even when they come to different conclusions, reinforces and spreads that analysis and gives their audiences additional information.

On balance, fact-checkers tend to be good about correcting their mistakes and updating findings — a necessary and minimum investment in credibility given their mission. But some could do more to elevate and explain those corrections and updates on their sites, especially for those in their audience who already read, saw or heard earlier versions of the stories.

Another inheritance from traditional newsroom culture is less healthy — an institutional reluctance to publicly defend journalism the way Richmond Times-Dispatch editor Danny Finnegan did when he posted responses to bloggers in Virginia who were asking questions about the voting record of his PolitiFact editor. That does not mean media leaders should ignore serious questions about their journalists’ reporting or impartiality. But the rhinoceros hide required to oversee or run a fact-checking operation does not mean just letting the work speak for itself. Publicly explaining the reporting process and editorial decision-making is important, even if it won’t always change critics’ minds.

It also is important to recognize that fact-checking is a more proactive form of journalism that takes a reporter and a news organization beyond describing events. The news organization, by calling out what is true and what is not, is acting as something of an advocate — on behalf of the citizen rather than a partisan. That raises the stakes and, by degrees, changes or elevates the journalistic role.

Beyond these fundamentals, there are other tools and practices journalists can use to increase the impact of fact-checking, and perhaps counter some of the negative effects detailed in this report.

What follows are recommendations — drawn from the research for this report and the advice of established fact-checkers — that will help journalists in a changing environment play that role more effectively:

Watch out for misuse of fact checks: As more and more politicians use and misuse media fact checks to make their points and to rebut critics and rivals, journalists need to monitor these statements even more closely.

Fact-checkers occasionally produce items correcting or challenging political advertisers or officeholders who mischaracterize their reporting. That’s what happened after a Republican presidential debate in 2011, when Rep. Michele Bachmann asserted that PolitiFact had reviewed her performance at an earlier face-off and found “everything that I said was true.” PolitiFact looked back and found two ratings: a “Mostly True” and a “Pants on Fire.” The candidate’s new claim about PolitiFact’s findings earned her another Pants on Fire.[ref Louis Jacobson, “Michele Bachmann says ‘PolitiFact came out and said that everything I said was true’ in last debate,” PolitiFact, Dec. 15, 2011]

Many of the examples of political operations misusing fact checks cited earlier in this report were found simply by reviewing stories that happened to be referenced in political ads. Others were found in the kind of follow-up stories suggested here, though some fact-checkers said there were more than they felt they could cover.

These meta-fact checks may feel a bit self-referential and risk picking fights with political players. But newsrooms need to police these references aggressively to try to keep the abusers honest and to defend the credibility and accuracy of their own reporting. In so doing, they may also help call attention to the importance and impact of their work. (There’s room for a journalism review or an independent media watchdog to lend a hand here, perhaps in a form that other fact-checkers could then help distribute.)

Keep context in context: Context matters when it comes to facts. A number may be correct and still be misleading. A partial quote may be accurate, but the full quote may say something else entirely.

In the days before the Republican primary in Nebraska’s 2014 Senate race, outside groups used partial quotes to portray two top candidates as being wishy-washy on repealing the Affordable Care Act. One group’s ad targeted Ben Sasse for saying the health care law was “an important first step” while another group’s ad went after Sid Dinsdale for saying the law had its “good aspects.” As FactCheck.org noted in back-to-back stories, both statements were taken out of context and did not accurately reflect either candidate’s opposition to the law.[ref Robert Farley, “‘Fact’ Lifted Out of Context in Nebraska,” FactCheck.org, May 8, 2014; Robert Farley, “Same Tactic, Different Candidate in Nebraska,” FactCheck.org, May 9, 2014]

Many political advisers may think it’s their opponent’s job to tell the whole story (as Republican pollster Neil Newhouse put it above, “let the other guy do that”). But it’s the fact-checker’s job, too, even when that means making complicated distinctions and navigating tricky terrain. That can be especially difficult when an elected official takes credit or gets the blame for a particular policy outcome. Like Mark Twain’s Connecticut Yankee claiming responsibility for a solar eclipse in the sky over King Arthur’s Court, these statements are rarely as clear as they seem.

Journalists who do this work quickly learn that politicians and their supporters will disregard and even try to discredit fact checks based primarily on interpretive, contextual arguments — dismissing them as differences of opinion and accusing reporters of bias. While those accusations come with the job, there are ways to make sure fact checks are less vulnerable.

PolitiFact founder Bill Adair explained that he and his colleagues made changes to their rating process before the 2012 election, in part to address outside concerns about the occasional overuse of contextual arguments. The change involved a series of questions that PolitiFact journalists now ask about any statement they are reviewing:

  • Is the claim literally true?
  • Is the claim open to interpretation? Is there another way to read the claim?
  • Does the speaker prove the claim to be true?
  • Did we check to see how we handled similar claims in the past?[ref For a behind-the-scenes look at the PolitiFact process, including the four questions, see Andrew Phelps, “Inside the Star Chamber: How PolitiFact tries to find truth in a world of make-believe,” Nieman Lab, Aug. 21, 2012]

Adair said these questions helped avoid unfairly declaring a factually true statement some flavor of false. At the same time, they left room for the journalists to note when the “ruling statement,” in PolitiFact’s terminology, overlooks other relevant information to make a misleading point.

FactCheck.org does not use the kind of true-false scale that PolitiFact and others do, but it has its own stylistic conceit for dealing with claims that may be “technically true, but that don’t tell the full story.” As its editors explained, “We’re very careful to label out-of-context statements as ‘misleading’ or ‘potentially misleading’ rather than false.”[ref “Not Just the Facts,” FactCheck.org, Nov. 19, 2009]

There are other ways fact-checkers can take on the context question, especially when they know their reporting is likely to be mischaracterized or misunderstood. This matters most when they are reviewing a small detail from an ad or a debate — the kind of middle-tier fact check that is easily overstated or dismissed as nitpicking. Fact-checkers might explore more systematic and creative ways to show the statements they evaluate in the full context in which they appeared, especially when a claim is a relatively small part of a larger message. That is why most fact-checkers already include the full text of an ad script in their stories, as they also do with lengthy excerpts from speeches and debate transcripts.

The Washington Post’s occasional TruthTeller video segments take that a step further. Rather than simply posting videos, their producers pause clips from ads and speeches to superimpose the fact check. WNYC’s weekly public radio program “On the Media” went so far as to have its hosts heckle political statements and ads from movie theater seats in an animated series called “Media Scrutiny Theater,” modeled after the cable TV show “Mystery Science Theater 3000.” NPR and the Texas Tribune used a different cable show for inspiration — VH1’s “Pop Up Video.” Beyond the purely fun aspects of some of these videos was an effort to integrate fact checks with the source material, giving viewers another way to fairly evaluate the conclusions.[ref See “Media Scrutiny Theater” by WNYC’s On the Media and NPR’s “Pop-Up Politics”]

None of that will prevent political players from using a fact check on a relatively small point to validate or challenge the entirety of a message or even the messenger. But knowing that’s how their work will be used, fact-checkers should continue to explore ways to help the audience better understand what they are actually checking.

Go for magnitude: Not every “false” is as false as another. That’s why fact-checkers need to find better ways to regularly tell voters which claims and statements matter most.

Thematic lists and compilations are one helpful approach. FactCheck.org, for instance, published collections for the 2014 campaign that rounded up the year’s “Medicare Mudslinging” and biggest “abortion distortions” plus an Election Day collection of the year’s “funniest, strangest and otherwise noteworthy ads.”[ref Lori Robertson and Eugene Kiely, “Midterm Medicare Mudslinging,” FactCheck.org, Oct. 3, 2014; Brooks Jackson, “Abortion Distortions 2014,” FactCheck.org, Sept. 26, 2014; D’Angelo Gore and the FactCheck.org Awards Committee, “Our 2014 FactCheck Awards,” FactCheck.org, Nov. 4, 2014] But the audience could use a more regular if not persistent indicator of importance, especially given the steady stream of stories almost all news organizations feel they need to generate these days to maintain their presence in a 24/7 news cycle.

The news business continuously debates this need for speed, particularly the link between traditional ways of measuring online traffic and the frequency of publication. By nature of their work, dedicated fact-checkers typically spend far more time preparing individual stories than most other daily journalists do. And yet the need for a fresh headline can result in marginal fact checks — sometimes interesting or even amusing, but also unimportant and purely fleeting.

For the most part, fact-checkers try to avoid this speed trap. In its “basic principles” published online, the Washington Post’s Fact Checker feature declares it will focus on “the issues that are most important to voters. We cannot nitpick every detail of every speech.” Likewise, PolitiFact says its journalists ask whether a statement is “significant” when deciding which statements to cover: “We avoid minor ‘gotchas’ on claims that obviously represent a slip of the tongue.”[ref “About the Fact Checker,” Washington Post; Bill Adair and Angie Drobnic Holan, “The Principles of PolitiFact, PunditFact and the Truth-O-Meter,” PolitiFact, Nov. 1, 2013]

But readers accept and even appreciate an entertaining story — like one that PolitiFact Wisconsin posted after a TV interviewer asked Democrat Mary Burke which beer the gubernatorial candidate preferred: New Glarus Brewing Co.’s Spotted Cow or Leinenkugel’s Summer Shandy. The Milwaukee Journal Sentinel’s fact-checking team did not scrutinize Burke’s choice, but rather her claim that Spotted Cow was available only in Wisconsin. PolitiFact’s reporting rated that True. Even in a state where beer matters as much as it does in Wisconsin, it would be hard to say this fact check met PolitiFact’s “significance” test. And yet the “True” still counts on the site’s cumulative list of stories about her statements — one of only two “True” ratings the site awarded Burke during the 2014 race, as it happened.[ref James B. Nelson, “Mary Burke says Spotted Cow beer available only in Wisconsin,” PolitiFact Wisconsin, Aug. 25, 2014]

Even on matters of more substance, some issues are more significant than others. “True” and “False” don’t convey that — and trying to add that tier to a system like PolitiFact’s Truth-O-Meter would be difficult. As its creator Bill Adair put it, “Any time you introduce a measurement you introduce an element of complexity.” But fact-checkers might seek other ways to signal importance as well as factualness in their reporting, beyond the helpful lists of key fact checks many already compile. For fact-checkers with rating systems, that could mean finding other ways to review statements that are marginal but interesting, like Burke’s beer knowledge, without applying the same grading systems they usually reserve for more serious and important questions.

Avoid matters of opinion: There have been times when fact-checkers have been drawn into debates over opinion, not fact, despite principles, policies and practices aimed at avoiding that. One way fact-checking sites, particularly those that rely on rating systems, deal with opinion-based claims is by publishing stand-alone articles that review the key facts underlying an ongoing debate without taking a position on either side’s claims.

PolitiFact and its affiliates have published articles labeled “In Context.” These articles review and annotate a political statement at length, explaining factual references and linking to supporting material without issuing a Truth-O-Meter score. PolitiFact’s partners at the Austin American-Statesman used this template after a city council candidate with a chemistry background spoke about a study she said persuaded her that explosives planted in the Twin Towers contributed to the buildings’ collapse during the 2001 terrorist attacks. The nearly 2,000-word story included links to 30 sources, but no Truth-O-Meter rating.[ref Dylan Baddour, “In Context: Laura Pressley’s ‘something was planted’ in World Trade Center,” PolitiFact Texas, Nov. 4, 2014]

Likewise, the Post’s Kessler has on occasion dropped his Pinocchio-rating system, as he did when Republican Sen. Kelly Ayotte of New Hampshire asked questions about Secretary of State Hillary Clinton’s knowledge of security concerns before the 2012 attack on the U.S. diplomatic compound in Benghazi, Libya. “[W]e find ourselves checking an opinion,” Kessler concluded, noting the absence of concrete evidence to evaluate the senator’s claim. “[T]hat’s not enough for a Pinocchio. So, for now, we will not issue a rating.”[ref Glenn Kessler, “Hillary Clinton and the Aug. 16 cable on Benghazi security,” Washington Post, April 10, 2013]

Fact-checkers should talk even more regularly to their audience about the statements they are not reviewing and why, particularly matters of opinion. Not only would these stories add something to the public discussion of controversial political statements, they may well help audiences better understand the difference between fact and opinion.

Going for ratings: Rating systems are powerful and provocative tools for drawing an audience into stories that often dwell on complicated and esoteric policy matters. Research published by the American Press Institute in March shows that readers prefer fact checks that include ratings.

But few topics rile people in politics more than fact-checking rating systems. For Mo Elleithee, communications director for the Democratic National Committee, the different grading systems are “cheapening the discussion” and practically invite campaigns and other political organizations to pummel one another with ratings he thinks are meaningless. “Serious fact-checkers are actually making themselves less valuable than they should be,” he said. If fact-checkers abandoned their rating systems, Elleithee said, “I would be willing to lose them as a political tool…. It would become less of a political weapon.”

Perhaps. But even FactCheck.org, which does not use a rating system, is cited in attack ads. Meanwhile, every other U.S. fact-checking site tracked in the Duke Reporters’ Lab’s annual survey in early 2015 uses ratings of some sort. Those ratings come in a wide variety, especially among local fact-checking operations across the country. Cleveland.com and the Reno Gazette-Journal both use 0-10 numeric scales, while the Cedar Rapids Gazette in Iowa and news partner KCRG-TV grade their facts on an A-F basis. In North Carolina, WRAL-TV devised a traffic light system (green, yellow, red), while local fact-checkers elsewhere have found other ways to brand an untruthful message — from the Sacramento Bee’s “outright lie” to Voice of San Diego’s “Huckster Propaganda.”[ref Duke Reporters’ Lab database of global fact-checking sites]

Ratings are a way of signaling editorial authority. They also are an entry point — an easily understood invitation to what are usually long, in-depth articles that explore challenging topics.

“I embrace the idea that it’s a marketing gimmick,” said Kessler, the Post’s Fact Checker. “It’s easy to remember.”

FactCheck.org’s founder Jackson acknowledged that rating systems are “terrific reader engagement devices” that are also “easily quoted and repeated by candidates.” But he warned that all the meters and traffic lights and other systems can be “dangerous” metaphors when dealing with ambiguous questions. “It sometimes can get you into trouble, especially when you’re dealing with literally true statements that are nevertheless completely misleading.” Overall, ratings “have their advantages and disadvantages,” but Jackson said he is relieved not to have to engage with sources in what he called “Pinocchio bargaining.”

Should fact-checkers use ratings and, if so, which kind? The answer depends most on each news organization’s particular editorial goals and on a growing body of research that can help editors and producers determine which approach is most engaging and persuasive to the specific audience they want to reach. But even sites that use ratings will sometimes set them aside when they are not appropriate for the story at hand — a healthy practice that keeps the rating from wagging the dog.

Fishing where the fish are: News is fleeting, but fact checks have a shelf life that gives them more lasting value to readers, viewers and listeners — assuming those people can find them. The only thing that keeps news organizations from making the most of this value seems to be the industry’s historic ineptitude at marketing its work, as well as its journalists’ cultural swooning whenever anyone tries to talk about “marketing” in the newsroom.

Most fact-checkers are good about reusing their reporting when an issue reappears in a campaign or political debate. A new piece of legislation or a flurry of advocacy or campaign advertising can be an excuse to write and publish a story that gathers up previous reporting on the same subject, summarizing and typically linking back to the earlier work. As mentioned before, that can involve compiling annotated lists and collections of all past fact checks on topics of ongoing debate, as PolitiFact and the Post Fact Checker did with their work amid the confusing 2012 campaign back-and-forth over Mitt Romney’s time at Bain Capital.[ref Glenn Kessler, “Mitt Romney and Bain: a Fact Checker collection,” Washington Post, July 20, 2012; Angie Drobnic Holan, “Checking the facts about Romney and Bain Capital,” PolitiFact.com, July 17, 2012]

Those compilations and rehashes are especially useful for regular readers, viewers or listeners. But they also are a chance to connect with a new audience that may suddenly have an interest in a topic that’s new to them. For instance, detailed roundups on the 2012 Benghazi attack assembled by FactCheck.org, PolitiFact and the Post’s Fact Checker all turned up prominently in a Google search for “Benghazi facts” more than two years after they were written.

Marketers instinctively capitalize on that kind of interest, using low-cost search and social media advertising to promote their wares. Fact-checkers should do much the same, targeting people’s searches, interests and conversations rather than just passively depending on their “search optimized” headline-writing skills and favorable algorithmic winds. Compared to the expensive promotions and advertising blitzes that, say, a typical newspaper company will still pay for to attract or keep a few hundred print subscribers, the efforts proposed here would be an extremely cost-effective way for a news organization to get its reporting directly to an audience that literally self-identifies as wanting to know more about a subject its journalists are already covering. Plus, the newsroom’s topical expertise and sense of timing are critical ingredients in making this kind of outreach work.

Segmenting the audience: If fact-checkers want to expand their reach and impact, they need to experiment with different ways of presenting their reporting. That means journalists need to think clearly about whom their reporting is for.

FactCheck.org calls itself a “consumer advocate,” and it’s clear which consumers it doesn’t mean. Like most fact-checkers, its editors are quick to say that they aren’t reporting for the campaigns or elected officials they cover; they work for the voters.[ref “Our Mission,” FactCheck.org] But which voters? Partisan voters who tend to vote one way or the other, no matter what the media has to say about their candidates? Decided voters who want to make a case for their side or against the other? Undecided voters who are trying to sort out conflicting claims?

Most fact-checkers, if only for reasons of daily reality, rely on one-size-fits-all story forms to try to serve all of the above. But those daily story forms may not be the right ones to connect with the particular audiences that any given fact-checker aspires to reach.

That is not to say fact-checkers should bend their reporting or conclusions to serve particular partisan audiences. But with academic research showing that some people tend to reject “corrective” information that challenges their strongly held beliefs, there may be ways to use those same findings to create new forms of fact-checking stories. In some cases, that might simply mean changing the wording used to distribute the work via social media — for instance, asking questions in headlines rather than answering them, even in defiance of editorial arguments that question-mark headlines are overused and often oversold.[ref See “Betteridge’s law of headlines,” Wikipedia and the helpful Twitter feed it inspired: https://twitter.com/yourtitlesucks] In other cases, that might mean presenting the same reporting organized in different structures or formats designed to help the audience draw its own conclusions.

This is a tough assignment for newsrooms that have little bandwidth to devote to fact-checking, but it’s an idea worth testing to try to expand the reach and influence of this journalism. And it does not necessarily require new technology or templates to make happen.

One great example of such a story comes via Dartmouth College political scientist Brendan Nyhan — one of the scholars whose work has helped fact-checkers understand how readers absorb their reporting. Writing about the 2012 Bain controversy for the Columbia Journalism Review, Nyhan cited a “choose your own adventure”-style news essay posted by Boston news site BostInno. Walter Frick, who was BostInno’s business editor at the time, was clear in his introduction that he had already chosen his own adventure and did not pretend to be impartial on the questions about Romney’s time at the investment firm. Frick stated upfront that he did not think the issue mattered as much as the policy differences between the Republican nominee and the president. But “as long as we’re forced to keep hearing about this,” he wrote, “I figured I’d offer my own step-by-step guide to the facts, tied into whether or not you need to actually care, depending on your assumptions.”

The article used basic HTML to allow people to answer Frick’s questions about their opinions at the end of each factual summary. By letting readers navigate his well-chosen collection of facts and conflicting arguments, Frick not only conveyed key details but illustrated the reasons why partisans on both sides would see those details so differently.[ref Brendan Nyhan, “Unanswered questions in the Romney/Bain controversy,” Columbia Journalism Review, July 17, 2012; Walter Frick, “When Did Romney Leave Bain? Choose Your Own Adventure,” BostInno, July 16, 2012]
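The mechanics behind that kind of piece are simple enough for a newsroom to prototype without special tools. What follows is a minimal sketch in Python, offered purely as a hypothetical illustration (Frick’s original was built with basic HTML links, not a script): each step pairs a paraphrased factual summary with a question, and the reader’s answer determines which fact comes next.

```python
# Minimal sketch of a "choose your own adventure" fact guide (hypothetical;
# BostInno's version was built with basic HTML links, not a script).
# Facts below are paraphrased stand-ins for illustration only.

STEPS = {
    "start": {
        "fact": "SEC filings listed Romney as Bain's CEO and sole owner through 2002.",
        "question": "Does formal ownership matter if he wasn't managing day to day?",
        "answers": {"yes": "filings", "no": "management"},
    },
    "filings": {
        "fact": "Campaign disclosures described his role after February 1999 as inactive.",
        "question": "Do the disclosures settle the question for you?",
        "answers": {"yes": "end", "no": "management"},
    },
    "management": {
        "fact": "Former partners said he left active management when he took over the Olympics.",
        "question": "Is active management the standard you care about?",
        "answers": {"yes": "end", "no": "end"},
    },
    "end": {"fact": "That's the evidence; your own assumptions decide how much it matters."},
}


def walk(steps, start="start"):
    """Step through the guide, letting the reader's answers pick the path."""
    node = steps[start]
    while "question" in node:
        print(node["fact"])
        answer = ""
        while answer not in node["answers"]:
            answer = input(node["question"] + " [yes/no] ").strip().lower()
        node = steps[node["answers"][answer]]
    print(node["fact"])


if __name__ == "__main__":
    walk(STEPS)
```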

Invest in R&D: Another way to extend the reach and impact of fact-checking is to invest in the tools journalists need to get their work more directly and immediately in front of their audiences, as well as to engage those audiences in the reporting process.

Intercepting political messages for voters with real-time fact-checking — in ads, in their mailboxes and in person — turns out to be no easy task. But that hasn’t stopped fact-checkers and their allies from trying. The 2012 SuperPAC App is one recent example. Funded by the journalistically minded John S. and James L. Knight Foundation, the app let iPhone users identify political ads with Shazam-like audio recognition software. It then found matching information about the ad’s funder and, when available, related fact checks. That is, assuming you could fire up the app and keep it listening long enough to find a match — no easy feat with a 30-second TV ad.
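For a rough sense of what “Shazam-like” matching involves, here is a deliberately simplified, hypothetical sketch (not the SuperPAC App’s actual code). It reduces an audio clip to a set of hashed spectral landmarks and scores the overlap against fingerprints of known ads, which a real app could then key to funder information and related fact checks.

```python
# Greatly simplified, hypothetical sketch of Shazam-style audio matching.
# A production app would use far more robust fingerprinting; this only
# illustrates the idea: hash spectral landmarks, then compare overlap.
import numpy as np


def fingerprint(samples, frame=2048):
    """Collect pairs of dominant-frequency peaks from consecutive frames."""
    landmarks = set()
    prev_peak = None
    for start in range(0, len(samples) - frame, frame):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
        peak = int(np.argmax(spectrum))
        if prev_peak is not None:
            landmarks.add((prev_peak, peak))  # one "landmark" per frame pair
        prev_peak = peak
    return landmarks


def best_match(clip_samples, known_ads):
    """Return the known ad whose fingerprint overlaps the clip's the most."""
    clip_fp = fingerprint(clip_samples)
    scores = {name: len(clip_fp & fp) / max(len(fp), 1)
              for name, fp in known_ads.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]


# Hypothetical usage: precompute fingerprints for a library of captured ads,
# then look up the matched ad's funder and any related fact checks.
# known_ads = {"example_attack_ad": fingerprint(ad_samples), ...}
```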

Another Knight grant helped the Washington Post develop a prototype of its TruthTeller app (related to but different from the previously mentioned video series of the same name). The prototype uses speech-to-text technology to match the audio from political messages with related fact checks. But given the limits of speech recognition and natural language processing, the ultimate fact-checking app remains on the drawing board.
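The matching half of that pipeline, pairing a transcript with previously published fact checks, can at least be prototyped with ordinary text-similarity tools. The sketch below is a hypothetical illustration, not the Post’s TruthTeller code; it assumes the speech-to-text step has already produced a transcript and simply scores its sentences against a small, made-up library of checked claims.

```python
# Hypothetical sketch of the matching step only: score a speech-to-text
# transcript against previously published fact checks. Not TruthTeller's code.
from difflib import SequenceMatcher

# Made-up claim library for illustration; in practice this would be a
# database of checked claims keyed to published fact-check articles.
FACT_CHECKS = {
    "the health care law was an important first step": "Quoted out of context; the speaker opposed the law.",
    "spotted cow beer is available only in wisconsin": "Rated True by PolitiFact Wisconsin.",
}


def match_claims(transcript, library=FACT_CHECKS, threshold=0.6):
    """Return (claim, note, score) for library claims similar to any sentence."""
    hits = []
    for sentence in transcript.lower().split("."):
        for claim, note in library.items():
            score = SequenceMatcher(None, sentence.strip(), claim).ratio()
            if score >= threshold:
                hits.append((claim, note, round(score, 2)))
    return sorted(hits, key=lambda hit: -hit[2])


if __name__ == "__main__":
    transcript = ("He said the health care law was an important first step. "
                  "Then he moved on to taxes.")
    for claim, note, score in match_claims(transcript):
        print(f"{score}: {claim} -> {note}")
```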

Another line of interest aims to bring the audience directly into the fact-checking process. Sites such as the recently launched Fiskkit, for instance, enlist readers to annotate and explain passages of content. A key to any such venture will be developing models that prevent well-meaning “citizen fact-checkers” from being overwhelmed by partisans, especially if political organizers decide platforms such as Fiskkit can become productive grounds for persuasion and recruiting.

But the potential for professional fact-checkers is great too, especially as they develop ways to give engaged users directed, factual assignments they can carry out regardless of their personal beliefs and political leanings. To some extent that is already happening. Some TV and radio ads that fact-checkers have analyzed were never announced to the press or posted on campaigns’ official online channels. Instead they came to the journalists’ attention because attentive TV viewers captured the videos and uploaded them to YouTube. Broader efforts to gather and catalog increasingly targeted and personalized political communication, from junk mail and leaflets to narrowly aimed digital messages, are a great opportunity for newsrooms, but not one they can pursue without the help of well-organized “citizen militias.”

Conclusion

The appetite for more participatory media is something political pros can’t avoid noting — including a strong desire among voters to figure out what and whom to believe.

“Voters are extremely cynical about the messages they get,” Democratic strategist Anita Dunn said, pointing to the way the public now taps “their own personal networks” and goes online to “ferret out” facts. She recalled how during the first Obama-Romney debate in 2012, references to the bipartisan Bowles-Simpson budget commission and the Dodd–Frank act of 2010 quickly turned those two somewhat esoteric Washington phrases into trending terms on Google.[ref Chart, “Top 4 Rising Search Terms During Wednesday’s Debate,” Google Politics & Elections, Oct 4, 2012; Note: The terms were 1.) Simpson Bowles, 2.) Dodd Frank, 3.) Who is winning the debate, 4.) Big Bird.]

For Dunn, that was an example of how “voters actively look for what they see as independent, unbiased sources of what’s true and what’s not true.”

If true, that would have delighted my old boss, David Broder, the Washington Post columnist whose push for more press scrutiny of TV ads after the 1988 presidential campaign helped set the stage for the fact-checking efforts of today. He also would have been among the most eager to see journalists find ways to use technology to recruit their readers, viewers, listeners and users into the fact-finding process. In a 1979 speech at the National Press Club in Washington, Broder urged journalists to remember — and remind readers — that “the newspaper that drops on your doorstep is a partial, hasty, incomplete, inevitably somewhat flawed and inaccurate rendering of some of the things we have heard about in the past 24 hours.” Then he added:

“If we did that, I suspect, not only would we feel less inhibited about correcting and updating our own stories, we might even encourage the readers to contribute their own information and understanding to the process. We might even find ourselves acknowledging something most of us find hard to accept: that they have something to tell us, as well as to hear from us. And if those readers felt that they were part of a communications process in which they were participants and not just passive consumers, then they might more easily understand that their freedoms — and not just ours — are endangered when the search warrants and subpoenas are visited on the press.”[ref David Broder, Behind the Front Page: A Candid Look at How the News Is Made (New York: Simon & Schuster, 1987), pp. 14-15]

Fact-checkers do not like to evaluate predictions, but I would rate Broder’s forecast “True.” Just as Broder rightly saw the emergence of fact-checking as an important development in political journalism, the ongoing importance of this work will depend on how news people recognize and adapt to new political tactics, engage their audiences, and make their reporting relevant, interesting and accessible.

Those efforts will keep fact-checkers on the side of the voters — which means the politicians will have to pay attention, too.
