Illustration: Kata Máthé / Remarker
For investigative journalists digging into campaigns of misinformation and disinformation, it’s important to identify who’s behind them. But that’s not enough, cautioned online sleuth Craig Silverman at the 12th Global Investigative Journalism Conference (#GIJC21). Silverman, a reporter at nonprofit newsroom ProPublica, says it is equally important to understand the motivations of those involved.
The most common element in these campaigns, Silverman explains, involves false context — where a factual statement or accurate quote has been stripped of its original meaning or setting and repurposed to mislead. “So when you’re thinking about digging into, or monitoring, misinformation and disinformation, you need to have that context in mind,” he said. “Because if you’re only looking for things that are 100% false, if you’re only looking for something that claims to be news, you’re going to miss most of what is out there.”
Silverman was joined at the GIJC21 workshop Exposing the Roots of Disinformation by journalist Jane Lytvynenko, a senior research fellow at Harvard’s Shorenstein Center on Media, Politics and Public Policy who specializes in investigations of online myths and fake news. Misinformation and disinformation, Lytvynenko noted, are often rolled out strategically. “Campaigns have stages where they’re being planned, and then where they spread on social media, which is where we might encounter responses by journalists,” she said. “And, of course, many campaigns don’t go away just because we debunk them. Instead, they adjust.”
The two journalists offered a host of tips and tools for getting at the roots of these campaigns.
When trying to fact-check videos and photographs, journalists should ask themselves: Is it an original image or video? Who captured the content, and when? And finally, where was the image or video taken? Answering that last question can help geolocate the content.
Lytvynenko and Silverman recommended the tool WeVerify, a verification plugin that works with Chrome. Once installed, the plugin lets users run a photograph through various search engines, including TinEye, Bing, and Yandex, and also lets them highlight a specific part of an image and search only for that portion. Yandex also offers facial recognition, though results from that technology still demand additional verification. “One of the key things with image search is to go beyond Google, which the WeVerify search extension allows you to do,” Lytvynenko said.
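For journalists who prefer to work outside a browser extension, the same “go beyond Google” idea can be scripted. The sketch below builds reverse-image-search links for several engines from a single hosted image URL; the query-string formats are assumptions about how these engines currently accept a remote image, so verify the links before relying on them.

```python
# Sketch: build reverse-image-search URLs for several engines from one image URL.
# The query-string formats below are assumptions about how these engines currently
# accept a remotely hosted image; double-check them before relying on the links.
from urllib.parse import quote

def reverse_search_links(image_url):
    """Return search-page URLs for an image hosted at image_url."""
    encoded = quote(image_url, safe="")
    return {
        "google_lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "tineye": f"https://tineye.com/search?url={encoded}",
        "yandex": f"https://yandex.com/images/search?rpt=imageview&url={encoded}",
        "bing": f"https://www.bing.com/images/search?q=imgurl:{encoded}&view=detailv2&iss=sbi",
    }

if __name__ == "__main__":
    # Hypothetical image URL used only for illustration.
    for engine, link in reverse_search_links("https://example.com/photo.jpg").items():
        print(f"{engine}: {link}")
```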
When debunking misinformation, a good place to start is investigating the social media accounts posting or sharing the information. First, check if it is a verified account: Is it really owned by the person claiming to own it? When was the account created? Where do they say they are? And do a detailed background check: What does their network say about them? Who are they friends with? Who do they regularly talk to, share with, like? What are the patterns? Do they post regularly, and on what subjects?
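Much of that background check can be scripted against a platform’s public API where one is available. Below is a minimal sketch using the Twitter/X API v2 via the tweepy library, assuming you have a bearer token with read access; the token and the handle being checked are placeholders.

```python
# Sketch: pull basic profile metadata for an account under scrutiny via the
# Twitter/X API v2 (tweepy library). Assumes a valid bearer token; the token
# and username below are hypothetical placeholders.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

resp = client.get_user(
    username="example_account",  # hypothetical handle
    user_fields=["created_at", "description", "location", "verified", "public_metrics"],
)
user = resp.data

print("Created:     ", user.created_at)    # how old is the account?
print("Claims to be:", user.description)   # does the bio match the content?
print("Location:    ", user.location)      # where do they say they are?
print("Verified:    ", user.verified)
print("Followers:   ", user.public_metrics["followers_count"])
print("Tweets:      ", user.public_metrics["tweet_count"])
```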
“Sometimes there is a bit of a disconnect or conflict between who they claim to be and the kind of content you’re seeing from the account,” noted Silverman. “And those are the kinds of things that you want to be watching for.”
Twitonomy is one tool that journalists can use to get detailed and visualized analytics of a Twitter account they are scrutinizing. “It tells you how often they tweet, when they tweet, which other accounts they reply to, who they retweet,” Silverman added. “And it can also give you a breakdown of when they tweet and that’s going to help you figure out what time zone they’re in.”
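Twitonomy is a web tool, but the posting-time breakdown Silverman describes is easy to reproduce once you have an account’s post timestamps from any source (an export, an API, or manual collection). A minimal sketch, using hypothetical sample data:

```python
# Sketch: the posting-time analysis Twitonomy automates, done by hand.
# Given UTC timestamps of an account's posts, count posts per hour of day;
# a quiet stretch often corresponds to the poster's overnight hours and
# hints at their time zone.
from collections import Counter
from datetime import datetime

# Hypothetical sample of post timestamps in UTC (ISO 8601 strings).
timestamps = [
    "2021-11-02T13:05:00", "2021-11-02T14:41:00", "2021-11-03T02:10:00",
    "2021-11-03T13:55:00", "2021-11-04T15:20:00", "2021-11-04T16:02:00",
]

hours = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)

for hour in range(24):
    bar = "#" * hours.get(hour, 0)
    print(f"{hour:02d}:00 UTC | {bar}")
```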
The “Verification Handbook: For Disinformation And Media Manipulation,” edited by Craig Silverman, aims to help journalists spot disinformation and media manipulation. Image: Screenshot
Although some chat apps are commonly used to spread misinformation, investigating them has its limits. “This is very different from a public Twitter profile or public Facebook profile where somebody is consciously putting information into the public,” explained Lytvynenko. “Chat apps have that expectation of privacy, and we need to be careful when you report on that.”
One way of getting around these limitations is to crowdsource that information from the audience, she said. This approach was used successfully in Brazil by the Comprova Project, in which 24 media organizations collaborated to identify rumors and fabricated content spread to influence Brazil’s 2018 elections.
“All the newsrooms sent the same WhatsApp number to their readers and viewers, and the audience sent back any fake information they encountered for the newsrooms to verify or debunk,” she said. “So don’t discount the power of soliciting tips.” The crowdsourcing method works well with chat apps like WhatsApp, which is challenging to study because it’s difficult to join groups that haven’t been made public.
One of the most useful tools for investigating content on the encrypted messaging app Telegram is Tgstat, a channel search and analytics tool. With it, a journalist can gain insight into those engaged on a particular channel. “So if you punch in a username … into Tgstat, you’re able to get analytics for up to 90 days and you can see if the channel got a big influx of subscribers or if there was a day when their content was the most popular,” Lytvynenko said.
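Tgstat is a web service, but a similar posting-volume view of a public Telegram channel can be assembled with the Telegram API itself, for example via the Telethon library. The sketch below assumes you have registered your own api_id and api_hash with Telegram; the credentials and channel name are placeholders, and it only works for channels that are public.

```python
# Sketch: approximate a Tgstat-style activity view for a *public* Telegram channel
# using the Telethon library and your own Telegram API credentials (api_id and
# api_hash from my.telegram.org). All values below are placeholders.
from collections import Counter
from telethon.sync import TelegramClient

api_id = 12345               # placeholder
api_hash = "your_api_hash"   # placeholder
channel = "example_channel"  # hypothetical public channel username

with TelegramClient("gijc-session", api_id, api_hash) as client:
    # Count posts per day over the channel's most recent messages.
    per_day = Counter(
        msg.date.date() for msg in client.iter_messages(channel, limit=500)
    )

for day in sorted(per_day):
    print(day, per_day[day])  # spikes can flag coordinated pushes or viral posts
```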
Even with these tools and careful reporting, both panelists urged journalists to be extra careful and to gather concrete evidence before making judgments. “One of the ways that people doing this work really go wrong is they make an attribution, and they make accusations, without having the basis to back it up,” warned Silverman. “So be cautious. Gather your evidence — stack it — until you have a high level of confidence.”