A barrage of fake images in Kashmir
Jency Jacob had never seen anything like it.
“We have been fact checking since November 2016,” the Boom Live managing editor tweeted on Monday. “Never before has one incident taught us so many things about new forms of #fakeimages.”
The incident Jacob referred to was a Feb. 14 terrorist attack in Kashmir, a region in northern India and ground zero for the country’s ongoing conflict with Pakistan. The Washington Post reported 40 Indian paramilitary police were killed in the suicide bombing, which was carried out by a local teenager who had joined a Pakistan-based militant group.
After the attack, misinformation ballooned on social media, as it almost always does following big breaking news events. False posts, images and videos spread on platforms like Facebook and WhatsApp.
Indian fact-checking project Boom Live quickly sprang into action. Within 24 hours of the attack, it debunked a photoshopped image of politician Rahul Gandhi standing next to the suicide bomber. It also identified two Twitter handles spreading deliberate misinformation about the attack, and found that an old WhatsApp chain message asking people to donate to an army welfare fund had resurfaced.
“(What an) eye-opener this has been,” Jacob told Daniel in a WhatsApp message. “(We’ve) never seen this kind of a flood of images and videos.”
Hoaxes on social media about violent attacks are one thing. But after last week’s suicide bombing, mainstream media outlets in India started publishing false photos, too.
Several journalists tweeted a photo that purported to show the terrorist in a combat uniform. The Economic Times and India Today — which has its own fact-checking project — published the photo both in print and in a video. Boom reported that it wasn’t clear how those news organizations first obtained the photo.
Using a reverse image search, Boom debunked the image. The outlet found that it was strikingly similar to other images that were created using an app that lets users superimpose people’s heads onto bodies wearing police uniforms.
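The core idea behind the reverse image searches fact-checkers use is matching images that look alike even after cropping or re-editing. A minimal sketch of one common approach, perceptual "average hashing," is below; the tiny 4x4 pixel grids are made-up data, and real services use far more sophisticated matching.

```python
# Illustrative sketch of average hashing, one technique behind
# reverse image search. All pixel data here is invented for the demo.

def average_hash(pixels):
    """Hash a grayscale image (rows of 0-255 ints): each bit is 1
    if the pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 200,  40,  40],
            [200, 200,  40,  40],
            [ 40,  40, 200, 200],
            [ 40,  40, 200, 200]]

# A lightly doctored copy: brightness tweaked, structure unchanged.
edited = [[210, 190,  50,  30],
          [205, 195,  45,  35],
          [ 30,  50, 190, 210],
          [ 35,  45, 195, 205]]

unrelated = [[ 10, 240,  10, 240],
             [240,  10, 240,  10],
             [ 10, 240,  10, 240],
             [240,  10, 240,  10]]

h = average_hash(original)
print(hamming(h, average_hash(edited)))     # 0: flagged as a near-duplicate
print(hamming(h, average_hash(unrelated)))  # 8: clearly a different image
```

Because the hash only encodes coarse bright/dark structure, small edits like recoloring or recompression leave it nearly unchanged, which is why re-shared doctored photos can still be traced back to their source.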
The spread of false images following the Kashmir attack — Boom debunked 25 of them in a single Twitter thread — is in line with what other journalists around the world have found: Photo misinformation is often more viral than text.
Hannah Guy wrote for First Draft in 2017 that false or misleading images were among the most popular hoaxes following the terrorist attack in London that year. She also wrote that we don’t know much about how false images spread and what their effects on users are, since researchers have mostly focused on studying text misinformation.
One of the most popular hoaxes following the London attack was a fake photo of a tube sign that displayed a “very British response to the attack.” It was created with an image generator. And two years later, hoaxers are still using readily accessible web tools to trick thousands of people on social media.
So what should journalists do?
“This was pure breaking news madness,” Jacob said. “No image can be taken on face value — even the ones that come from government sources.”
- Google published a comprehensive paper explaining how the company — including YouTube, which it owns — tackles misinformation. Its actions include surfacing quality sources higher up in search results and giving users more context by partnering with nonprofits (including the IFCN). While the report didn’t have much news, it’s a good summary of how Google is thinking about misinformation.
- YouTube shares some blame for spreading flat-earth conspiracy theories, a new study from Texas Tech University concluded. The Guardian unpacked why. And in his column for The New York Times, Kevin Roose wrote about why it will be hard for YouTube — which has fostered the growth of personalities who dabble in “viral stunts and baseless rumor-mongering” — to eliminate conspiracies from its algorithm.
- That push in the U.K. for Facebook to rein in closed groups pushing anti-vaccination propaganda has moved to the U.S., leading the company to consider removing the content from its recommendations. Pressure included a letter from Rep. Adam Schiff (D-Calif.), The Washington Post reported. But anti-vaccine conspiracies are still getting a lot of engagement on the platform — even after they’re debunked by the company’s fact-checking partners. Meanwhile, Pinterest has banned vaccination searches.
- President Trump again this week sought to cast fact-checkers as partisans, saying the Washington Post’s Fact Checker is “only for the Democrats.” The Post’s Glenn Kessler responded with a reminder that Trump cites fact checks in which Democrats are found to be misleading.
- Facebook said it disrupted attempts to influence voters in Moldova ahead of its elections later this month, CNBC reported, including some pages designed to look like local fact-checking. It’s the second time a disinformation campaign has been linked to government officials this month; a Macedonian military official was behind a network of fake news sites exposed by Lead Stories and Nieuwscheckers.
- After 18 months, the U.K. House of Commons Digital, Culture, Media and Sport Committee has published the final version of its report on disinformation. The document is overwhelmingly anti-Facebook, calling the platform “digital gangsters,” and contains several recommendations calling for more algorithmic transparency. It also calls for the government to put pressure on the platforms to publicize any instances of disinformation.
…the future of news
- The text-generator created by Elon Musk-backed nonprofit OpenAI can write pretty well, it turns out. And that’s what makes it dangerous — enough so that OpenAI decided not to publish the full research. “It could be that someone who has malicious intent would be able to generate high-quality fake news,” David Luan, vice president of engineering, told Wired.
- Speaking of AI, an Uber software engineer has created a website that generates an endless stream of fake faces. His motive, explained here, was to raise public awareness of the power of the technology. Writing for The Verge, James Vincent lays out the potential creative applications — as well as the obvious nefarious ones.
- Writing for Wired, Zeynep Tufekci dug into how we can develop a verification system that ensures authenticity in an era where nearly every platform can be gamed. Verification practices like blue checkmarks on Twitter and photo evidence are easily spoofed. That’s where blockchain (*insert hesitant sigh here*) could come in handy.
Each week, we analyze five of the top-performing fact checks on Facebook to see how their reach compares to the hoaxes they debunked. Here are this week’s numbers.
- Liputan 6: “Jokowi Accused of Using Communication Tools during Debate. Fact?” (Fact: 13.6K engagements // Fake: 9.4K engagements)
- Factcheck.org: “O’Rourke Didn’t Trash Seniors and Veterans” (Fact: 2.4K engagements // Fake: 1.2K engagements)
- Full Fact: “You can’t be exempt from council tax if your home is used as a place of worship” (Fact: 2K engagements // Fake: 631 engagements)
- Agence France-Presse: “No, US courts have not ‘confirmed’ that the measles vaccine ‘causes autism’” (Fact: 645 engagements // Fake: 6.8K engagements)
- PolitiFact: “Did Kurt Cobain predict and express approval of a Donald Trump presidency? No.” (Fact: 362 engagements // Fake: 932 engagements)
It may not always be news when a politician tells the truth, but a fact check highlighting a true statement can be a service to readers if done well, especially when the claim seems like exaggeration in the first place.
During his State of the State address, California’s new governor, Gavin Newsom, said: “Just this morning, more than a million Californians woke up without clean water to bathe in or drink.”
That sounds like a lot, but PolitiFact California found it’s actually true. The number may even be understated, experts told Capital Public Radio reporter Chris Nichols.
What we liked: Californians might have dismissed Newsom’s big number as just more hyperbole from a politician. Nichols’ fact check told them why they shouldn’t. Such fact checks give politicians credit when they do their homework, while also making clear that fact-checkers are not just playing “gotcha” with politicians’ false claims.
- First Draft has left its home at Harvard University’s Shorenstein Center, citing problems with brand control.
- In Brazil, an imposter fact-checking website stole Aos Fatos’ brand to publish fake news stories — and it’s part of a larger network of misinformation that has been investigated by the government.
- Full Fact is hiring four people: A policy officer, product manager, web developer and designer.
- BuzzFeed News reported on why an old fake Pope Francis quote recently went viral online. Spoiler: QAnon is involved.
- The 2020 presidential primary “is going to be the next battleground to divide and confuse Americans,” Brett Horvath, a founder of Guardians.ai, which works on ways to disrupt cyberattacks, told Politico for a story about cyber propaganda. “As it relates to information warfare in the 2020 cycle, we’re not on the verge of it — we’re already in the third inning.”
- Good advice here from Nikki Usher, writing in Columbia Journalism Review, about what journalists should look for when reporting on academic studies.
- “It’s usually a bad sign when a fact-checker makes the news,” reads the lead of this story from The Week. Agreed!
- In Mexico, innocent civilians have been killed by lynch mobs after false rumors were spread about them on WhatsApp. The Pacific Standard profiled some of the fact-checkers working to fight those kinds of rumors.
- In November, Daniel wrote that Nigeria would be the next battleground for election misinformation. Prior to last weekend’s election there, CNN reported on how fake news was weaponized during the campaign.
- Max Read wrote a great story for New York magazine that asks the question: When it comes to disinformation, who, or what, should we all actually be afraid of?
Until next week,