Investigating Digital Threats: Disinformation

Over the last 15 years, social media networks experienced unprecedented growth as more people came online around the globe. These users became ripe for exploitation, as platforms fell behind on investing in moderation, protection, and other safeguards. Online manipulation is an outgrowth of traditional propaganda tactics like black public relations and the spread of false information, taken to new speeds and scales. Without the checks and balances that reporting provides, problems of disinformation that can seem insurmountable will genuinely become so.

An important note: the term “disinformation” should not be confused with “misinformation.” While the two are often used interchangeably, there are distinct nuances that investigative journalists should be careful to understand. Misinformation is generally understood as the broader term, referring to any false or misleading information, which can be shared or spread unintentionally. Disinformation, by contrast, is deliberately false or malicious content, purposely designed to spread fear or suspicion within a community or population. “Online manipulation” is a good umbrella term, particularly when reporting on fake accounts or websites, because they sometimes spread accurate information in manipulative ways.

How we report on online manipulation has changed over the years with the passage of privacy laws, the rise of new social media networks, and an ever-evolving understanding of the problem. Websites are still used to spread false information, but so too are influencers and new forms of video and imagery. In many countries, Facebook remains one of the most important platforms where false information spreads. But TikTok, Telegram, and messaging apps have also become powerful vectors for spreading lies or deliberately sowing confusion.

So, how can we as reporters dig into this gargantuan ecosystem?

First, we have to see it as just that: an ecosystem. A deliberate, networked spread of disinformation or propaganda is different from a one-off unintentional slip. The initial question every reporter should be asking is whether they’re looking at a single incident or a wide-scale attempt at manipulation. An ecosystem can be many things, and we need to take care to define it well in our investigation. The most common way to describe it is a connection among accounts on different social media networks coordinating to spread the same message. Several indicators and questions can help here: When were the accounts created? When is the content shared? Who amplified the content on different platforms? And what are the commonalities in the content itself? This could take the form of the same website being promoted on both Facebook and Twitter, or influencers on TikTok using nearly identical language to speak about an issue. Timing can also be telling: has some of the content been shared within minutes or even seconds by accounts with similar characteristics?
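
As an illustration, here is a minimal sketch for surfacing suspicious timing. It assumes you have already exported post metadata into a hypothetical posts.csv with account, platform, url, and posted_at columns; near-simultaneous posting is an indicator worth examining, not proof of coordination.

```python
# Minimal sketch: flag posts published suspiciously close together.
# Assumes a hypothetical posts.csv with columns: account, platform, url, posted_at
import pandas as pd

WINDOW_SECONDS = 60  # posts this close together are worth a second look

posts = pd.read_csv("posts.csv", parse_dates=["posted_at"])
posts = posts.sort_values("posted_at").reset_index(drop=True)

# Gap between each post and the one published immediately before it
posts["gap_s"] = posts["posted_at"].diff().dt.total_seconds()

suspicious = posts[posts["gap_s"] <= WINDOW_SECONDS]
print(suspicious[["account", "platform", "posted_at", "gap_s"]])
```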

Investigators of online manipulation should use all available traditional and digital methods to get as close as possible to the sticky questions of origin and intent. Campaigns backed by state actors or private corporations stand out when contrasted with individual actors who may have become genuine believers in conspiracies over time. All of these types of manipulation have impacts, but with varied intent and success. It’s not always possible to trace a campaign to its origin. There has been a rise in public relations firms being used as laundromats for disinformation to shield their clients, adding another layer of difficulty to the already tricky business of figuring out who’s who on the web.

Digital disinformation is a powerful tool that has been used to facilitate ethnic cleansing, violence, and war. It has had impacts on healthcare systems around the globe, played a role in major elections over the last decade, and helped undermine press freedom. To identify it before it can take effect, journalists must understand the communities it’s likely to target. Just as with shoe-leather reporting, one cannot simply parachute in, look around, and understand the depth to which the issue reaches. That’s because online manipulation plays on existing societal divides, igniting them further, frequently dangerously so. We cannot report on manipulation and disinformation without also understanding those divides.

The tools and approaches outlined below are meant as aids for uncovering information and taking a closer look at data. They cannot replace the hard work of traditional journalism, nor are they intended to. Reporting on online manipulation is at its best when online investigative techniques are combined with old-fashioned source work and documentation. The good news is that you are never alone. A growing crowd of researchers, reporters, and academics is uncovering ever more damning facets of online manipulation. In reading this guide, you are becoming one of them. Never be afraid to seek and give help in this field.

Stay organized: Before beginning your investigation, decide how you will keep track of the social media accounts and other online entities you’re investigating. Browser tabs can pile up quickly, and it’s crucial to have a system for organizing and archiving. Hunchly, a paid tool, is an industry favorite for its auto-archiving capability. Another approach is an itemized spreadsheet with the accounts, websites, images, videos, and anything else of interest to you in one place. Consider including account creation dates and the publication dates and times of individual posts so you can easily see the timeline. Be sure to grab screenshots and take notes as you gather information, since posts and accounts can be removed at any time.
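
If you prefer to build that spreadsheet programmatically, a minimal sketch follows; the column names and file name are illustrative, not a standard schema.

```python
# Minimal sketch of an investigation log; column names are illustrative.
import csv
from datetime import datetime, timezone

FIELDS = ["item_type", "platform", "url", "account_handle",
          "account_created", "posted_at", "archived_url", "notes", "captured_at"]

def log_item(row: dict, path: str = "investigation_log.csv") -> None:
    """Append one account, post, or website to the log, stamping when we saw it."""
    row.setdefault("captured_at", datetime.now(timezone.utc).isoformat())
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header first
            writer.writeheader()
        writer.writerow(row)

log_item({"item_type": "post", "platform": "facebook",
          "url": "https://example.com/post/123", "notes": "shared by 3 accounts"})
```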

It’s also important to keep track of what you are seeing and to organize your thoughts. You can use one or more Google Docs as a central repository for your notes and screenshots of interest. It’s also important to publicly archive what you’re seeing. If you sign up for an account with the Internet Archive, you will have access to its free bulk archiving tool, which connects to Google Sheets and saves each link you’ve collected. Archives are a better way of keeping track of information than screenshots because they are much harder to manipulate, and you can link to the content in your reporting. However, some social networks, such as Facebook, Instagram, and LinkedIn, do not make it easy to archive. Consider keeping a separate screenshot folder for them. Also keep in mind that videos are not automatically archived, so you will need to keep them in a separate folder, too.
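
If you want to script this step instead, the Wayback Machine also exposes a public “Save Page Now” endpoint. A minimal sketch, with a placeholder URL list; heavy use calls for an authenticated account and respect for rate limits:

```python
# Minimal sketch: submit URLs to the Wayback Machine's public
# "Save Page Now" endpoint. The URL list is a placeholder.
import time
import requests

def archive(url: str):
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    return resp.url if resp.ok else None  # final URL is the snapshot address

for link in ["https://example.com/suspicious-article"]:
    snapshot = archive(link)
    print(link, "->", snapshot or "failed")
    time.sleep(5)  # be polite to the archive
```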

Understand the community: One of the most common tactics for disinformers is to identify an existing social wedge issue in a country or community, and to work to exacerbate tensions and divisions. Posting divisive or hyper-partisan content is one of the best ways to attract an audience on social media. It’s therefore crucial to understand the communities being targeted by the manipulation. Talk to people in the targeted communities and try to understand their reality. What caused these issues in the first place and what makes the attempted manipulation effective? Are there conversations that seem manipulative but are actually par for the course? By immersing yourself in this kind of digital ethnography, you will be able to understand disinformation more fully and, in many cases, see something particularly impactful coming before it has a chance to take off.

Consider the impact: Deciding whether to cover a piece of false information is an inexact science. On one hand, you could facilitate its spread. On the other, you can help thwart it. Ask yourself whether there are any potential or measurable impacts of the information. Has it gone outside the ecosystem or community where you first saw it? Is it likely to cause physical harm? Has it financially benefited those who posted it? Was it amplified by a particularly influential person? This is a decision that should be made as a team, with all potential harms and benefits weighed.

Minimize harm: Once you make the decision to cover the false information, you should apply industry best practices for responsible reporting. If you’re writing a fact check, for example, put the accurate information in the headline. In the body of the text, adopt the “truth sandwich” approach: accurate-inaccurate-accurate. This will help readers remember the true rather than the false information. When linking out, send your readers to an archived version of the false information to avoid bringing traffic to disinformers. Finally, if you include a screenshot, put a red line or the word “fake” across the image. The goal is to ensure that your investigation “does no harm” and avoids inadvertently spreading the inaccurate or harmful information further.

Set a high burden of proof: Imagine this: several anonymous Twitter accounts are all sharing content from the same website in unison. The website is filled with misleading information and, after looking up domain records, you see that it was registered in Russia. Have you just uncovered a Russian propaganda campaign? Not necessarily. In digital investigations, just as in offline ones, the more proof you’re able to gather, the stronger the case. Don’t point the finger unless you have the evidence to back it up.

Now let’s say this website is also being shared on Facebook. Once you open the Page Transparency box for the posts in question, you see that they all have administrators located in Russia and that a Russian public relations company is listed as the Page Owner. Now the evidence is starting to accumulate. But you also know that digital signals, such as domain records and Facebook page manager info, can be manipulated. Then you find former employees of the agency who disclose the details of the operation, tell you who owns the public relations firm, and confirm the other information. Now you have a much stronger case for the origins of the campaign. Always ask yourself: are there other possible explanations for who is behind this operation? Or do we have incontrovertible proof?

Dig for motivation: Disinformation is a strategy. It can be used for financial gain or political advantage, to build clout, and even to change laws. If you’ve found the name of a person running an online manipulation campaign, don’t stop there. Check for companies, donation records, and political affiliations. This may not always be possible, but the closer you get to the motivation, the closer you get to the truth.

Cross social media borders: Reporters tend to study the platforms that are the most accessible. Twitter is among the most researched social media companies in part because its data has been easier to acquire than that of other platforms. (Thanks to recent changes in Twitter’s API access, that is no longer the case, however.) By contrast, relatively little scrutiny falls on YouTube or podcast platforms because of the sheer volume of content a reporter must watch and the lack of data feeds. But by avoiding platforms that are less familiar to us or that require a heavier time investment, we may be missing information crucial to our investigations. No person on social media is loyal to only one platform, and reporters should not be either.

Advanced search: Tools for online investigations are notoriously unstable. They are subject to the whims of social media executives, who at any time can change the type of data they make publicly accessible. That’s why relying solely on tools for investigations is a bad idea. But there is a shining beacon that can be handy in almost any investigation: advanced search. Use Twitter advanced search to monitor live breaking news situations. Use Google advanced search to pull information out of websites that are otherwise difficult to navigate. Searching is at the core of digital investigative work, and you need to be comfortable crafting queries and using the operators offered by different platforms. GIJN has a fantastic tutorial to get you started.
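
A few illustrative queries follow; the handles, dates, and sites are placeholders, and exact operator support varies by platform and changes over time.

```
# Twitter/X: posts from one account, in a date range, containing a phrase
from:example_handle since:2023-01-01 until:2023-01-31 "exact phrase"

# Google: restrict results to one site and require a keyword in the title
site:example.com intitle:vaccine

# Google: surface PDFs that mention a specific phrase
filetype:pdf "public opinion manipulation"
```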

Junkipedia tool: Developed by the Algorithmic Transparency Institute, Junkipedia was originally designed to monitor disinformation and “junk news.” But its focus has since expanded — to reflect this, it will get a new name later this year — and it now allows users to track and build lists of social media accounts from a dozen different platforms, including fringe sites like GETTR and Gab, as well as major sites like TikTok, Facebook, and Telegram. Junkipedia can also automatically transcribe and search English-language podcasts. [GIJN has also covered Junkipedia’s search capabilities in more depth.]

WeVerify tool: Another reliable and irreplaceable tool is WeVerify, created by fact-checkers for fact-checkers. You can use it to reverse search images or videos, compare images for manipulation, and conduct Twitter analysis. It is a Swiss Army knife for disinformation reporters. It works best with advanced options, so if you have a work email address, be sure to sign up for a free account.

There are many more tools out there and the field of reporting on online manipulation is constantly in flux. As social media companies evolve, our journalism must evolve with them. It is crucial, in this field, to always seek out new approaches. What you’ve learned here is just a start.
Case Studies

Ukraine: A novel project from Ukrainian investigative outlet Texty showed how fake Telegram channels were created soon after Russia’s full-scale invasion of Ukraine. The channels posed as local news sources but “were in fact used to disseminate Russian narratives and bolster support for the occupiers.” The investigation found that the channels stopped functioning once territory was freed. Telegram data is among the most accessible of all social networks and you don’t need special tools to download an entire channel’s feed or gather data about its content and engagement. Texty mapped Telegram data to show the relationship between troops on the ground and propaganda online.

Texty mapped Telegram channels to assess Russia’s initial military ambitions. Image: Screenshot, Texty
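
As the Texty case suggests, Telegram is unusually open: a public channel’s feed can be pulled with an off-the-shelf library. A minimal sketch using Telethon follows; the credentials and channel name are placeholders, and you need your own API ID and hash from my.telegram.org.

```python
# Minimal sketch using the Telethon library (pip install telethon).
# API_ID, API_HASH, and the channel name are placeholders.
from telethon.sync import TelegramClient

API_ID = 12345          # placeholder
API_HASH = "your_hash"  # placeholder

with TelegramClient("session", API_ID, API_HASH) as client:
    for msg in client.iter_messages("some_public_channel", limit=500):
        # Each message carries its date, text, view count, and forward info
        print(msg.date, msg.views, (msg.text or "")[:80])
```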

Palestine: In 2021, after Israeli forces attacked Al-Aqsa Mosque and injured over 150 people, Israel and Palestine agreed to an uneasy ceasefire. But what temporarily stopped rockets flying through the air did little to thwart what one researcher called “online-inspired lynchings.” Disinformation monitoring group FakeReporter clocked more than 100 WhatsApp and Telegram Hebrew-language groups coordinating attacks on Arabs in Bat Yam, a seaside town south of Tel Aviv. The calls for violence precipitated actual violence, with one man, a father of four on his way to the beach, hospitalized after being beaten, ostensibly for being Arab. Israeli public figures continued fueling the hatred online, including against the media outlets covering the story. There was never enough information to determine who started the Telegram and WhatsApp groups, but the case showed clearly how online manipulation and hate contribute to real-world harm.

United States: Much has been made of former US President Donald Trump’s baseless voter fraud claims about the 2020 election, and how they fueled the violence and lawlessness of January 6. An investigative report by Jim Rutenberg of the New York Times is particularly noteworthy. It traces Republican efforts to undermine democratic institutions from 2016 until just before the 2020 election. Rutenberg looks at false claims in several states, digs into where they came from, and unpacks the legislative changes they were used to justify. The story is a masterclass in how to understand disinformation as it relates to political power.

Democratic Republic of Congo: In this alarming example, a handful of Congolese Facebook page administrators launched a highly effective disinformation campaign during the COVID-19 pandemic. By attributing false quotes to high-profile public figures, including a French infectious diseases expert, the director-general of the WHO, and the presidents of Madagascar and France, this small group pushed baseless anti-vaccine propaganda and spread conspiracy theories about fake cures. A FRANCE 24 Observers team finally tracked one of them down, a 20-year-old student from Kinshasa, who confided that his motivation for spreading lies was to grow his pages’ social media presence and generate “buzz.”

Philippines: Although this is an academic report, not a press story, it’s nonetheless important for those digging into disinformation. Authors Jonathan Corpus Ong and Samuel Cabbuag revealed the role of pseudonymous trolls during the 2019 election in the Philippines. They found that this much-overlooked segment of the internet is crucial for driving online discourse and promoting political messaging. The paper draws some parallels to Michael Bloomberg’s campaign for US president in 2020 and is well worth the read for both its investigative techniques and its conclusions.

Costa Rica: This white paper by two academics documents the meddling of so-called cyber troops in Costa Rican elections and politics since 2018. These cyber troops, defined as “government or political party actors tasked with manipulating public opinion online,” not only played a role in the 2018 presidential elections but were also used by far-right political parties to foster fierce opposition to the incoming president’s plans for fiscal and civil service reform. By pushing phony political polls to misrepresent public sentiment, along with other fake news aimed at damaging their opponents’ reputations, these homegrown disinformation actors sowed chaos and discord in one of Latin America’s most stable countries.
