The public today must navigate a news ecosystem contaminated with false information. Some of it is spread without malicious intent, which is generally called misinformation. Some is designed to mislead or disrupt, which is known as disinformation. And some is aimed at swaying public opinion or beliefs by distorting the facts, which is a classic definition of propaganda. In today’s environment, it is often hard to distinguish between these terms: misinformation can be passed on without nefarious intent even though it was originally created with such a purpose. Intent, in fact, is rarely relevant. Even content spread with innocent or good intentions can hurt people in the process. A lot of disinformation is also propaganda, aimed not just at changing minds but at stoking division and chaos.
(In this paper, we use the term misinformation broadly, especially in referring to falsehoods of uncertain origin or intent. For more on how to characterize different kinds of false information more specifically, we recommend Data & Society’s Lexicon of Lies and First Draft’s Definitional Toolbox).
For news organizations, understanding this environment, and the way in which adversarial actors are seeking to influence the mainstream media, is critical to any effort to capture the attention, trust and loyalty of their communities.
The problem of misinformation poses a daunting challenge, but audiences expect journalists to solve it. A Pew Research Center survey conducted in early 2019 found that half of U.S. adults surveyed identified made-up news and information as a very big problem, and even more said it is having a major impact on political leaders’ ability to get work done.
In a key finding for the news industry, Pew reported that 53 percent – the largest share – believe it is primarily the responsibility of journalists to fix the problem.
One big question is whether journalists have the tools to do so. The job requires understanding the psychology of audiences at a level beyond that which exists in most newsrooms. It also requires understanding how false information moves, which may be the province of scholars more than reporters. It is also not clear that journalists today have the trust of a broad enough spectrum of Americans to persuade them when information is false. And doing so might require more cooperation among American journalists across the political spectrum than they typically operate with now.
It is also important to know the scale of the problem. Facebook in 2017 estimated that 126 million people may have been served content from a page associated with the Russian disinformation factory known as the Internet Research Agency. That was over a two-year period, and it involved a number of different stories, memes, ads and other divisive content, but it shows the size of the challenge for providers of legitimate news. On Twitter, according to a Massachusetts Institute of Technology study published in the journal Science last year, false news stories are 70 percent more likely to be retweeted than true stories are. The researchers also found that true stories take about six times as long as false stories to reach 1,500 people.
In other words, the information environment that has most challenged traditional media in the attention economy – social media – is where a lot of misinformation thrives and spreads, and it is huge. Moreover, the biggest of these platforms are the ones benefiting from and even contributing to the erosion of traditional news.
But while the mainstream media may no longer be gatekeepers determining what is and isn’t news, they still have enough influence and credibility that agents of misinformation perceive them as important carriers of their content.
As a result, manipulators try to trick the media into reporting these falsehoods as a way to accelerate and broaden their reach. They know that if a mainstream media outlet carries a falsehood, people feel more comfortable sharing it, said Ben T. Decker, a disinformation expert who runs the consultancy firm Memetica, in a phone conversation. “It creates more consumption and creates the perception that the information is more popular than it actually is. If it’s being reported on ABC News, for example, then it must be real, because ABC is a real media company,” he said.
For some hoaxers, tricking the mainstream media is a goal in itself, experts say.
In a comprehensive report on media manipulation for Data & Society, Alice Marwick and Rebecca Lewis discuss how trolls and other disruptors use the mainstream media to amplify their messages. Their case studies include a 2015 example in which the founder of a Neo-Nazi news outlet, the Daily Stormer, suggested that followers create “White Student Union” pages on Facebook, then contact local media about the posts. Some such pages were created, leading to outrage and controversy on those campuses – and coverage in the mainstream press, including USA Today. The objective, Marwick and Lewis write, was to either facilitate the creation of such groups or simply to “trick the media into moral outrage and simultaneously spread some racial tension throughout college campuses.” It worked.
“The media, hungry for stories about racial tension on college campuses, took the bait and amplified what was essentially a non-story,” they wrote.
The exploitation of big news events
The moments when journalists are most susceptible to misinformation are breaking news events, when legitimate and official information intermingles online with fakery and when journalists and the public alike are hunting for new information.
After the Feb. 14, 2018, mass shooting at Marjory Stoneman Douglas High School in Parkland, Fla., a number of internet rumors started circulating as to the shooter’s identity. One name that made the rounds, Sam Hyde, frequently pops up after such shootings. There were also suggestions that the shooter was affiliated with Antifa, the leftist anti-fascist protest movement. At one point, a “screenshot” of a faked BuzzFeed headline appeared, saying, “Why we need to take away white people’s guns now more than ever.” Some people retweeted faked screenshots of tweets that pretended to be from Miami Herald reporter Alex Harris asking whether the shooter was white and requesting pictures of dead bodies.
All of this happened within hours of the murders. And there were even more hoaxes in the days that followed. Some of the students who spoke out were cast as having made it all up, being, in the lexicon of extremist forums online, “crisis actors” hired to stage the event to further some kind of anti-gun agenda. The students were smeared in other ways, too. One now-famous hoax involved a fake photo of one of the students, Emma González, tearing up the Constitution. It was a doctored version of a real photo of her tearing up a gun target. Another false report that the shooter was part of a neo-Nazi group made its way into mainstream media after a coordinated effort by disinformation actors online.
Mass shootings bring out a particular breed of hoaxes, conspiracies and other kinds of disinformation. They usually try to blame a member of a specific group – religious, racial or political. Gun control is a common theme. Sometimes, as in the case of the faked Harris tweets, trolls take aim at the media. The purveyors of this misinformation, whom experts identify broadly as adversarial actors, use these moments to take advantage of the public’s instinctive hunger to know more – who was hurt, who did it and why.
These shootings exemplify the special challenges journalists face in breaking news environments. Events are fluid. Reporters are worried about competition. Speed, always the enemy of accuracy, is given added importance. In an era of misinformation, these are also the moments when journalists need to be most on guard against hoaxes. People spread false information during developing news events because that is when the public is paying close attention and journalists have less time to be on guard.
In 2017 the Sam Hyde hoax played out in real time on CNN after the shooting at a church in Sutherland Springs, Tex., where 26 people were killed. A congressman from Texas being interviewed about the killings said he had been told Sam Hyde was the shooter.
The Sam Hyde hoax is just that – a kind of not-funny trick that plays across social media, perhaps for the hoaxers’ own amusement. (Inside jokes are big with trolls and hate groups.) As BuzzFeed’s Craig Silverman explained in a video, the real Sam Hyde is a comedian who’s been made into a meme, but his isn’t the only name that gets circulated after shootings.
The methods of manipulators
danah boyd, a partner researcher with Microsoft Research and founder of the research institute Data & Society, said breaking news situations can also expose how certain terms or phrases exist in what she and Microsoft program manager Michael Golebiewski call “data voids.” Data voids occur when obscure search queries turn up few or no results because the terms haven’t yet been sufficiently established or defined on the internet. When such a void exists, media manipulators can create content designed to hijack those terms, especially in breaking news situations.
Those voids don’t last forever – eventually, once enough people are aware of a void, news stories and other legitimate content will be created, and search engines will surface them. Until then, though, the void is open to exploitation.
“Unfortunately, the time between the first report and the creation of massive news content is when manipulators have the largest opportunity to capture attention,” boyd and Golebiewski wrote in an October 2019 report, Data Voids: Where Missing Data Can Easily Be Exploited.
In discussing data voids, they point to the hours after the Sutherland Springs shootings. Rarely had anyone before that moment searched for Sutherland Springs, a town of about 600 people in south Texas, and if they did, they’d get back mostly weather and maps. But as soon as the shooting happened, far-right groups coordinated online to use social media to associate the Sutherland Springs murders with Antifa, the anti-fascist group that the alt-right seeks to blame for violence including murder.
In moments like these, boyd and Golebiewski wrote, manipulators will take to social media and use fake accounts to pose provocative questions or point to misleading posts, a tactic journalists need to be aware of.
“While their primary goal was to influence news coverage, this tactic also helps waste journalists’ time,” they wrote.
For journalists, this means taking care with the terms they use in these situations: repeating or adopting the language of extremists or manipulators can prompt people to search for terms that aren’t yet clearly explained online and, like “crisis actor” after a shooting, can lead users into conspiracy theories. This might come as counterintuitive advice, noted Kelly McBride, chair of the Craig Newmark Center for Ethics and Leadership at the Poynter Institute, because journalists like to “discover” and define emerging terms to help their readers understand them. But doing so could be counterproductive to the public’s understanding.
The motives behind manipulations and hoaxes vary. Hackers may just see it as a technological challenge. Extremists usually have a political agenda. Trolls may be just making mischief or trying to drive wedges in society, deepen mistrust in the media and government and further polarize groups who are already divided. The common element is that during breaking news events, these adversarial actors seek to disrupt the information ecosystem when the largest possible audience is paying attention – and when people most need reliable and accurate information.
Decker also noted that sometimes these actors are paid, like those profiled in a Washington Post piece on troll farms in the Philippines, and sometimes the growth of such content is generated by user interest, as in the case of a conspiracy theory connecting Bill Clinton to the convicted sex offender Jeffrey Epstein.
The spread of a conspiracy into the mainstream media also creates what Decker describes as a “weird beacon of recruitment,” wherein people vulnerable to buying into a conspiracy might see the bait of intrigue in mainstream coverage, then follow it deeper into the darkest corners of the internet, where they become converts to the cause of the false narrative.
Opportunities for collaboration
Breaking news situations are also moments, argue both boyd and Harvard researcher Joan Donovan, when news organizations, even competing ones, could band together and collaborate on separating reality from falsehoods. Journalists who strive for information exclusivity might find this anathema to their mission, but boyd believes some instances require it. “Industries are taught to compete with each other, but there are times and places when collaboration is essential to survival and the ability to thrive,” boyd said in an email exchange. “Journalism hasn’t yet realized it’s reaching that tipping point. I just really hope it does before it’s too late.”
boyd points to New Zealand, where, after the March 2019 Christchurch mosque shootings, journalists agreed on a plan for covering the accused gunman’s trial that included avoiding certain images and symbols and limiting coverage of his statements. The goal was to avoid funneling people toward white supremacist or terrorist ideology by limiting the phrases and content that would prompt self-investigation. Some media experts applauded the move, as Nieman Lab wrote in June 2019. But not everyone agrees. Politico’s Jack Shafer argued that the agreement amounted to the New Zealand media agreeing to censor themselves. “Drop the blinders, New Zealand,” Shafer wrote. “You can’t stop a threat you have blinded yourself from seeing.”
These disparate reactions showed how controversial any kind of collaboration might be among U.S. media outlets, which are competitive by nature and value their independence and ability to exercise news judgment case by case. The U.S. reluctance to band together is also demonstrated in fact-checking, as Cristina Tardáguila of the International Fact-Checking Network wrote in June 2019. In other parts of the world, she noted, fact-checkers have worked together, enabling them to operate faster and check a larger number of claims. “U.S. fact-checkers haven’t even begun discussing this possibility,” she wrote.
Some U.S. news organizations have recognized the cost- and time-saving advantages of sharing resources while also broadening the audience for the stories they produce. These collaborations are usually aimed at producing deep investigations and projects, and they often involve traditional newsrooms working with nonprofit news organizations such as ProPublica. But many experts say such collaborations aren’t used often enough in breaking news events – “Super Bowl” moments for bad actors seeking to get their conspiracy theories, hoaxes or false narratives picked up by the mainstream media, whether deliberately or by accident.
At the Center for Cooperative Media at Montclair State University in New Jersey, which works to facilitate and track collaborative journalism projects around the world, Director Stefanie Murray said that newsrooms are getting better at these efforts. As more of them collaborate on enterprise journalism, they will build relationships and a “muscle memory” of working together that will take them to the next step of collaborating on breaking news to debunk mis- and disinformation, she said in a phone interview.
First Draft, a nonprofit that fights misinformation around the world, has launched an initiative it calls “Together, Now,” a global network of collaborative research aimed at helping newsrooms identify disinformation. The U.S. training, which began in Denver in November 2019 and will continue through 2020, includes a “disinformation emergency training” aimed at teaching newsrooms how to grapple with the problem in real time. One hope is that newsrooms will inform one another about disinformation they see online in order to identify trends and patterns.
“I hate using war metaphors, but it just feels like a battleground, and journalists are not aware of the landmines they face,” said Claire Wardle, the U.S. director of First Draft, in a phone interview. She said she is concerned that journalists’ reluctance to take sides can be used by manipulators, especially hate groups, who seek to get journalists to position disinformation and incendiary views against legitimate points of view in a bid to get journalists to engage in false balance. Journalists, she said, often don’t realize that their desire to include all sides is being taken advantage of.
Fortunately, she said, some news organizations are learning more about how to defend against misinformation.
In addition, a number of journalists are aggressively covering this kind of manipulation to help their audiences recognize it, and news organizations continue to add to these teams. Among them: Ben Collins and Brandy Zadrozny at NBC News; Casey Newton of The Verge; Craig Silverman and Jane Lytvynenko of BuzzFeed News; CNN’s Daniel Dale and Donie O’Sullivan; PolitiFact’s Daniel Funke; The Washington Post’s technology team, including Drew Harwell, Abby Ohlheiser and Isaac Stanley-Becker; and the New York Times’ Malachy Browne and Kevin Roose.
As good as these reporters are, though, the need for more coverage will only grow as malicious actors ramp up their pace, volume and sophistication. The Times said in mid-2019 that it is “moving in multiple ways” to confront the disinformation problem; smaller publications that have fewer resources will need to think through strategies, some of which are suggested below, for dealing with the problem.
Strategies to avoid being manipulated
A variety of people are working on ideas to help journalists be prepared for the kinds of mis- and disinformation campaigns that target the mainstream media, especially during major breaking news events or at other times when journalists are most vulnerable.
One common theme from experts is that forensics and verification should be a newsroom-wide endeavor. The news media will always have leaders in this field, like those named above. But training for all reporters and editors in a news operation, even just a one-time primer course, will create overall heightened awareness and help reporters and editors know what to do or where to go for help when something is fishy. Such expertise should not reside in just one corner of the newsroom, and should be available at all times. “Disinformation actors are working double time to fool the night and weekend crews,” said Aimee Rinehart, bureau editor for First Draft’s New York office. “Newsrooms can no longer rely on a 9-to-5 news cycle.”
Successful manipulations of news outlets, along with rising concern about potential hoaxes in the 2020 campaign, have prompted groups that fight misinformation to significantly ramp up their outreach efforts. They have produced a bevy of new reports and guidebooks on the subject and are helping newsrooms through in-depth training and best practices for spotting hoaxes and avoiding manipulations.
Every newsroom will face a different version of this problem, depending on the size and makeup of their audiences. Big national news organizations will be the targets of manipulation schemes that are different from those aimed at local or regional news outlets, for example. But each needs to develop skills and strategies to handle it.
What follows is a distillation of some of those strategies from interviews with and written reports by several experts in the field.
1. Learn basic skills to identify manipulations. The least-sophisticated hoaxes are often the most toxic – and the most commonly spread. Forensic experts have whole arsenals of techniques journalists can use to avoid getting duped into thinking something is real and republishing it. Here are just three basic skills journalists should have:
- Spot manipulated video: “Deepfake” videos are getting a lot of attention, and it’s true that they will be difficult to discern if you’re not an expert. But currently the problematic content is less sophisticated – like the slowed-down video of House Speaker Nancy Pelosi (D-Calif.) that was posted on Facebook in 2019 (discussed in more detail in the first section). Journalists should learn basic skills in spotting manipulated video through groups like First Draft that do hands-on training and simulations. There are also many online resources, including those from fact-checkers and other organizations who want to help fellow journalists understand their methods. Some examples:
- The Washington Post Fact-Checker in 2019 published a guide to manipulated video that it said was partly aimed at helping develop a nomenclature for different types of manipulations. It looked at videos presented without context, those that had been edited deceptively and those that had been maliciously transformed.
- At the Poynter Institute, Daniel Funke in 2018 put together a guide to verifying social media video, which includes a list of tips and tricks for spotting online fakes.
- Full Fact, a fact-checking non-profit in the United Kingdom, did a similar guide.
- Verify images through reverse image searches: This is one of the most basic but also most crucial verification skills a journalist should have. Reverse image searches can establish the provenance of a photo that might have been manipulated or taken out of context.
- First Draft, for example, offers a wide range of online resources for journalists. It also has established simulations and master classes around major news events like the 2020 election, including a nationwide strategy to help newsrooms learn some basic digital forensics, like reverse image searches.
- Use screenshots from others with care (if at all): On Twitter or other social media, people will often take a screenshot of a controversial post because they fear it will later be deleted. The screenshot “preserves” it in time. But that is also what makes a screenshot problematic – it purports to represent something that no longer exists, and screenshots can be manipulated. In other words, the fact that something looks like a screenshot is no guarantee of authenticity; it could, in fact, be fakery. The faked Miami Herald tweets noted above, for instance, were simple photo edits, but they looked real.
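The image checks above rest on a common underlying idea: reducing an image to a compact fingerprint that survives resizing and recompression but changes when the content is edited. As a rough illustration only – this is not the algorithm of any particular tool named above – the sketch below computes a perceptual “difference hash,” one of the simplest such fingerprints. Plain grids of grayscale numbers stand in for decoded pixels; real tools would first decode and shrink the actual image file (e.g., with an image library).

```python
# Illustrative sketch of a perceptual "difference hash" (dHash), the kind
# of fingerprint reverse-image-search and verification tools use to match
# near-duplicate images. Pixel data appears here as plain lists of
# grayscale values; real tools decode and resize the image file first.

def dhash(pixels):
    """Compute a difference hash from a grid of grayscale rows.

    Each bit records whether a pixel is brighter than its right-hand
    neighbor, so the hash survives uniform brightness shifts and light
    recompression but changes when the image content itself is edited.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

original = [
    [10, 20, 30, 40],
    [40, 30, 20, 10],
    [10, 20, 30, 40],
    [40, 30, 20, 10],
]
# The same picture, slightly brightened: values shift, ordering survives.
recompressed = [[v + 2 for v in row] for row in original]
# A doctored version: one region's content is changed outright.
doctored = [row[:] for row in original]
doctored[0] = [40, 30, 20, 10]

print(hamming(dhash(original), dhash(recompressed)))  # 0 - fingerprint intact
print(hamming(dhash(original), dhash(doctored)))      # 3 - the edit flips bits
```

The design point is that a byte-for-byte comparison would flag the recompressed copy as “different,” while the perceptual hash correctly treats it as the same picture and flags only the doctored one – which is why verification workflows compare fingerprints rather than raw files.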
2. Know the viral pathway of misinformation. How does something move from a fringe platform to the mainstream media? If journalists understand how misinformation flows, how it finds a path of least resistance to the largest possible audience, then they can have a greater understanding of how to react.
In describing what she calls a “Trumpet of Amplification,” First Draft’s Wardle wrote that the most important takeaway from First Draft’s study of how misinformation travels “is understanding that many agents of disinformation see coverage by established news outlets as the end-goal.”
Journalists must know how to find where a piece of content started and what its originator’s motives were. Newsrooms may monitor social media for tips and insights, but social media may be only the most recent station on a journey that began on a fringe site.
3. Understand how manipulations live on social media. The social media strategies of online manipulators have become more sophisticated and varied. In a 2019 paper for Data & Society that breaks down these tactics, Joan Donovan and Brian Friedberg at Harvard’s Kennedy School identify what they call “source hacking,” a category of online manipulations aimed at “planting false information in places that journalists are likely to encounter it, or where it will be taken up by other intermediaries.”
One of the practices on Twitter, which the authors call “keyword squatting,” describes how hoaxers or misinformers hijack hashtags or handles to inject themselves into popular conversations, and get the mainstream media to pick up their views.
The social media monitoring and news agency Storyful partners with news organizations to identify misinformation (or verify that something is real) and to train newsrooms in the latest verification techniques. It collects as much information as possible about the content in question – including through original-source interviews and geolocation techniques – to determine whether it is valid.
In addition, Storyful investigates trends that might themselves produce stories about how misinformation is affecting society. An example was an August 2019 Wall Street Journal story exposing how people are getting around Facebook’s ban on selling guns in its marketplaces by pretending to just sell the gun cases. But the gun cases are posted at inflated prices, an indication that the postings have become “code” for the sale of actual guns.
4. Be aware that some catastrophic events, like mass shootings, are categories unto themselves. Mass shootings and other crises are seen as opportunities by nefarious actors to spread misinformation, and, sadly, are now common enough that every journalist should have a checklist for covering them. Resources that can help organizations think through their plans for shootings include one from Poynter and another from the Suicide Awareness Voices of Education. Fact-checkers like those at BuzzFeed and PolitiFact often monitor social media for hoaxes after such episodes.
As noted above, some experts like danah boyd argue that in these crisis situations journalists should collaborate and share information, since having all journalists vet the flood of information individually is inefficient and even dangerous.