Fact-checking of political information about the Russo-Ukrainian conflict

Reijo Savolainen (Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland)

Journal of Documentation

ISSN: 0022-0418

Article publication date: 19 March 2024

Abstract

Purpose

To elaborate the nature of fact-checking in the domain of political information by examining how fact-checkers assess the validity of claims concerning the Russo-Ukrainian conflict and how they support their assessments by drawing on evidence acquired from diverse sources of information.

Design/methodology/approach

Descriptive quantitative and qualitative content analysis of 128 reports written by the fact-checkers of Snopes, an established fact-checking organisation, during the period 24 February 2022 to 28 June 2023. For the analysis, nine evaluation grounds were identified, most of them inductively from the empirical material. It was examined how the fact-checkers employed such grounds while assessing the validity of claims and how the assessments were bolstered by evidence acquired from information sources such as newspapers.

Findings

Of the 128 reports, the share of assessments indicative of the invalidity of the claims was 54.7%, while the share of positive ratings was 29.7%. The share of mixed assessments was 15.6%. In the fact-checking, two evaluation grounds, that is, the correctness of information and the verifiability of an event presented in a claim, formed the basis for the assessment. Depending on the topic of the claim, grounds such as temporal and spatial compatibility, as well as comparison by similarity and difference, occupied a central role. The most popular sources of information offering evidence for the assessments include statements of government representatives, videos and photographs shared on social media, newspapers and television programmes.

Research limitations/implications

As the study concentrated on fact-checking dealing with political information about a specific issue, the findings cannot be generalised to fact-checking practices in other contexts.

Originality/value

The study is among the first to characterise how fact-checkers employ evaluation grounds of diverse kinds while assessing the validity of political information.

Citation

Savolainen, R. (2024), "Fact-checking of political information about the Russo-Ukrainian conflict", Journal of Documentation, Vol. 80 No. 7, pp. 78-97. https://doi.org/10.1108/JD-10-2023-0203

Publisher

Emerald Publishing Limited

Copyright © 2024, Reijo Savolainen

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

The spreading of false information has become a serious concern for the informed citizenry in the so-called “post-truth” media landscape. One of the ways to combat misleading information is fact-checking. In general, it aims at curating information from reliable sources to verify or debunk claims that are controversial or of unclear accuracy (Vu et al., 2022). More specifically, fact-checking can be defined as “the practice of systematically publishing assessments of the validity of claims made by public officials and institutions with an explicit attempt to identify whether a claim is factual” (Walter et al., 2020, p. 351). From a broader perspective, fact-checkers may also assess the validity of claims presented by actors of other types, for example, politicians and social media users.

Fact-checking emerged in the 1980s as a part of a broader turn toward critical reporting on politics (Amazeen et al., 2018, p. 30). Since that time, the fact-checking landscape has expanded considerably. Currently, the most widely known fact-checking organisations include Snopes.com (launched in 1994), FactCheck.org (founded in 2003), PolitiFact.com and the Washington Post’s Fact Checker (both launched in 2007). As of June 2023, there were 417 active fact-checking organisations all over the world (Duke Reporter’s Lab, 2023). In news media, there is a tradition of fact-checking news reports before they go to print. In social media forums, however, fact-checking commonly occurs after a piece of information has been disseminated. Rather than eradicating inaccuracies from information before publication, fact-checkers expose incorrect claims publicly. Therefore, fact-checkers' main purpose is to make people aware of inaccurate claims, rather than to report an accurate story (Amazeen, 2020, p. 97). Ideally, fact-checkers investigate verifiable facts, and their work is free of partisanship, advocacy and rhetoric (Graves, 2018, p. 615). In recent years, fact-checkers have pursued such goals in diverse contexts such as the debate about climate change (Vu et al., 2022), the COVID-19 pandemic (Moon et al., 2022), political campaigns (Soo et al., 2023) and the ongoing Russo-Ukrainian conflict (Garcia-Marin and Salvat-Martinrey, 2023).

The present investigation elaborates the picture of fact-checking in the domain of political information by examining how fact-checkers assess the validity of claims dealing with a politically sensitive topic, that is, the Russo-Ukrainian conflict. The conflict offers fertile ground for fact-checking because it tends to intensify information warfare and gives rise to the massive spreading of political disinformation. This domain was chosen for the study because here the drawing of the boundary line between correct and misleading information is particularly demanding for fact-checkers, due to the complex and often ambiguous nature of political claims (Uscinski and Butler, 2013). The present study deepens the understanding of how fact-checkers assess the validity of political claims by making use of diverse evaluation grounds such as the correctness of information and the plausibility of an event. To this end, the evaluative strategies used by the fact-checkers will be scrutinised. The analysis produces new knowledge about how fact-checkers attempt to systematically employ a set of criteria in order to find out whether a piece of political information can be considered a description of the objective state of affairs. The systematic use of such criteria brings continuity and consistency to fact-checking and thus strengthens its credibility in the eyes of readers and social media users. Moreover, it will be examined how the fact-checkers support their validity assessments by drawing on evidence acquired from information sources such as articles published by legacy media. To this end, an empirical study was conducted by analysing a sample of 128 reports created by the fact-checkers of Snopes.com. The empirical findings contribute to research on the validity assessment of political information in times of intensifying information warfare. The findings also reflect the potential and limitations of fact-checking as a systematic attempt to combat disinformation.

The rest of the article is organised as follows. First, to provide background, the main features and problems of fact-checking are reviewed, followed by the specification of the research framework, the research questions and the empirical research setting. Thereafter, the empirical findings are reported. The final sections discuss the findings and reflect on their significance.

1.1 Literature review

1.1.1 Main features of fact-checking

Fact-checking may have important political effects (Morgan et al., 2015, p. 578). Fact-checking can moderate whether citizens believe claims made in negative political advertising. Fact-checking can also increase political knowledge and encourage politicians to refrain from making unsubstantiated claims. Moreover, fact-checking is one way to correct false information and improve the public’s understanding of disputed realities. All in all, fact-checking serves as an intervention to either inoculate individuals against future influence or reduce the likelihood that false information influences civic discourse to begin with (Amazeen, 2020, p. 97).

Tsang et al. (2022, p. 1360) have identified three ideal-typical approaches to the fact-checking of political information. First, there are partisan fact-checkers, whose participation in political discourse is marked by an attempt to defend their own side and criticise the opposite side. In a media system with a high degree of political polarisation, media outlets often take sides during political debates, and fact-checking can be a component of partisan news practices. Second, there are advocacy fact-checkers. Their agendas are defined in terms of policy issues or thematic concerns related to the health of the public culture of a society. Often, fact-checkers of this type follow the NGO model identified by Graves and Cherubini (2016). When non-governmental organisations (NGOs) engage in fact-checking, they typically treat it as a means to monitor government performance and engage in civic movements on specific issues. Third, there are professional fact-checkers taking up the basic characteristics of journalistic professionalism: an emphasis on autonomy from political and economic power, a public service orientation, and a set of common norms, such as objectivity and neutrality. The fact-checking practices of Snopes.com exemplify this approach.

Based on interviews with U.S. fact-checkers, Graves (2017, pp. 524–528) identified five elements that a typical fact-check contains. First, fact-checkers choose which claims to check, considering, for instance, the newsworthiness and political significance of the claim. Second, they often give the author of the claim a chance to present their account. Third, fact-checkers try to trace the original source of the claim. Fourth, they consult experts for interpretation of the data, which is obtained from nonpartisan sources. Finally, fact-checkers make the verification process public for the sake of transparency. To achieve this, they justify their choices to the reader, explaining why a particular claim deserves to be checked, how certain sources or experts were chosen, and how the final verdict was reached. For the present study, the fourth and fifth elements are particularly relevant because they deal with the ways in which the validity of claims is assessed and how such assessments are supported by external information sources. Graves (2017) also found that fact-checkers attempt to triangulate the truth by consulting multiple information sources. Typically, fact-checking reports rely on 5 to 10 distinct sources.

In an empirical study focussing on fact-checkers in Australia, Germany, the United Kingdom and the USA, Vu et al. (2022) examined fact-checking dealing with climate change. Of the 490 items analysed by the fact-checkers, 53% were determined to be either false or mostly false, while 32% were either true or mostly true. Of the fact-checks, 26% cited scientific studies, 41% referred to scientists, 76% drew on authorities such as the United Nations Framework Convention on Climate Change, 59% referred to news media, 64% to websites and 38% to social media. In sum, authorities and scientific studies were the most frequently used types of corrective sources.

More recently, Steensen et al. (2023) examined political fact-checking performed by the Norwegian fact-checking organisation Faktisk.no during the Norwegian parliamentary election campaign in 2021. To this end, they examined 55 fact-checks which referenced altogether 59 information sources. Almost one-third of these sources were statistical bureaus, predominantly Statistics Norway. Governmental sources and data from government directorates were also used frequently, making government bodies the most dominant source type. If economic data (mostly from the national budget) are included, no less than 83% of the source material was numeric data. Sources with predefined credibility, like Statistics Norway and governmental bodies, and other source material seemingly deprived of interpretive potential were prioritised. Steensen et al. (2023) concluded that fact-checking largely relied on pre-epistemic certainty: certainty that is immune to questioning, doubting and testing, and that applies to the things we take for granted. Pre-epistemic certainty is particularly characteristic of statistical and expert sources belonging to a hegemonic view of what constitutes important and reliable information, thus suggesting that there is little room for alternative sources offering different perspectives.

The above observations can be complemented by the findings of the study where Soo et al. (2023) examined fact-checking journalism during the 2019 UK general election campaign. The findings indicate that the process of selecting sources for fact-checking reports often entailed looking at data perceived to be authoritative, such as official data published by the Office of National Statistics and the Office for Budget Responsibility. This approach to identifying sources in fact-checking reports can be viewed as reinforcing the relatively narrow range of selecting elite and authoritative actors (Soo et al., 2023, pp. 471–472).

1.1.2 Problems and limitations of fact-checking

Although professional fact-checking organisations such as PolitiFact and Snopes emphasise their impartial role, it is evident that ultimately, the criteria of fact-checking are always political (Juneström, 2021, p. 513). In order to select claims and evaluate the factuality of information, fact-checkers have adopted a particular approach towards society. Importantly, one of the reasons behind the biased selection of claims to be checked is that negativity gets attention (Mattes and Redlawsk, 2020, p. 916). Negativity piques our interest because we are drawn toward events that might have undesirable consequences. Further support for this assumption was obtained from the empirical study conducted by Amazeen (2016). She showed that fact-checkers themselves are more likely to examine negative political ads than positive ones.

Uscinski and Butler (2013) have taken a critical stance on the epistemology of fact-checking. The practices of fact-checking share the tacit presupposition that there cannot be genuine political debate about facts because facts are unambiguous and not subject to interpretation. This suggests that facts function as the gold standard in the market of knowledge, thus treating facts as “truthmakers” (Böhnert and Reszke, 2022, pp. 16–18). This implies a preference for epistemological (positivist) naïveté, resulting in simplistic true/false judgements (Uscinski, 2015, p. 247). The traditional approach to fact-checking is limited because facts are seen as objective features of “the world out there” that can be established and interpreted by independent actors (Yarrow, 2021, p. 622). Therefore, when the black-and-white facts, as they appear to the fact-checkers, conflict with the claims produced by politicians, the fact-checkers are able to see only (to one degree or another) “lies”. However, as Uscinski and Butler (2013, p. 163) contend, the subject matter of politics is often complex and people may genuinely disagree about the truth. If a politician disagrees with a fact-checker about the facts, this does not make the politician a liar any more than it makes the fact-checker a liar.

Andersen and Søe (2020) take an even more radical view of the fact-checking of fake news in particular. They suggest that instead of understanding fake news as something that must be combated and deleted, the existence and appearance of fake news should be understood as a form of communicative action. Fake news is here to stay; primarily, such news items are not epistemological statements about reality but communicative actions we must learn to live with. Similarly, Vinhas and Bastos (2022, p. 463) concluded that as fact-checkers become increasingly involved in the fight against false information, they find themselves facing the “Sisyphean task” of continuously purging misleading content, only to watch it reappear faster and stronger.

2. Research framework

The literature review suggests that despite its limitations, fact-checking is a potentially effective way to combat false information. Since the 1990s, fact-checking organisations have developed specific practices to assess whether claims presented by diverse actors such as politicians and social media users can be considered valid. Prior studies also indicate that fact-checkers prefer authoritative information sources such as statistics. So far, however, we lack a more detailed picture of how fact-checkers employ various ways to evaluate the validity of claims and how they bolster their assessments by making use of evidence acquired from information sources. To fill this gap in research, the present investigation first identified a set of factors on which the validity assessment in fact-checking is based; henceforth, the set of such factors will be referred to as evaluation grounds. The identification of evaluation grounds was partly based on the list of cognitive mechanisms identified by Zhang et al. (2009). Such mechanisms include, for example, Comparison and Explanation, which people employ while interpreting the relevance of information available in diverse sources. To enhance the distinguishing power of the construct of Comparison, it was further divided into two subcategories, following the distinction made by Savolainen (2009): Comparison by differentiation and Comparison by similarity. Due to their generic nature, these categories appeared to be relevant as evaluation grounds that fact-checkers employ while assessing the validity of claims. Other evaluation grounds, for example, Correctness of information, Temporal compatibility and Plausibility of an event, were identified inductively from the empirical material of the present study. This was necessary because there were no prior investigations of the evaluation grounds used by fact-checkers.

As a result, the following categories are used to specify the evaluation grounds employed by fact-checkers assessing the nature of political claims about the Russo-Ukrainian conflict. In Table 1 below, the illustrative examples are taken from the empirical material of the present study.

While checking the factuality of the claims, fact-checkers justify their assessments by drawing on evidence obtained from information sources of diverse kinds. In the present study, the types of such sources were identified inductively from the empirical material. As specified in more detail below in the section Empirical data and analysis, the source types include, for example, videos, photographs, and newspaper articles. Finally, fact-checkers rate the assessment by indicating whether the claim is valid (true), invalid (false) or a mixture of valid and invalid elements. Based on the above specifications, the research framework of the present study is depicted in Figure 1 below.

As Figure 1 illustrates, fact-checking starts with the identification of a claim chosen for assessment. For example, it can focus on the claim that in his speech on February 24, 2022, President Putin threatened nuclear action against any country that tries to prevent Russia’s invasion of Ukraine. The validity of this claim can be examined by making use of evaluation grounds of diverse kinds. The fact-checker can, for example, make a comparison by similarity to find out whether the wording of the claim matches Putin’s original statement. In addition, the fact-checker can employ other grounds such as Correctness of information presented in a claim or Honesty in argumentation. To justify the assessment, the fact-checker makes use of diverse sources of information offering evidence for or against the validity of the claim. For example, videos of Putin’s speech or newspaper articles commenting on its content can be used as evidence. The result of the fact-checking is presented in the rating indicative of the validity of a claim. A claim may be rated as valid, that is, true. Alternatively, a claim can be rated as invalid because it is proven false. In some cases, no definite rating can be made because there is insufficient evidence for or against the validity of a claim.
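The relations between these elements can also be summarised as a simple data structure. The following Python sketch is illustrative only; the class and field names are hypothetical and are not part of any Snopes tooling:

```python
from dataclasses import dataclass

# Minimal sketch of the research framework in Figure 1.
# Class and field names are hypothetical, not part of any Snopes tooling.
@dataclass
class FactCheck:
    claim: str                     # the claim chosen for assessment
    evaluation_grounds: list[str]  # e.g. "Comparison by similarity"
    evidence_sources: list[str]    # e.g. videos, newspaper articles
    rating: str                    # e.g. "True", "False", "Unproven"

# The Putin speech example discussed above, encoded in this structure
# (the rating given here is hypothetical, for illustration only):
example = FactCheck(
    claim=("In his speech on February 24, 2022, President Putin threatened "
           "nuclear action against any country that tries to prevent "
           "Russia's invasion of Ukraine."),
    evaluation_grounds=["Comparison by similarity",
                        "Correctness of information"],
    evidence_sources=["video of Putin's speech",
                      "newspaper articles commenting on its content"],
    rating="True",
)
print(example.rating)
```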

3. Research questions

Drawing on the research framework presented in Figure 1, the present investigation seeks answers to the following questions.

  1. In which ways do fact-checkers employ evaluation grounds of diverse kinds to assess the validity of claims dealing with the Russo-Ukrainian conflict?

  2. In which ways do fact-checkers make use of sources of information offering evidence for the validity assessments?

4. Empirical data and analysis

The empirical data were downloaded from the website of Snopes.com, an established fact-checking organisation (https://www.snopes.com/). The fact-checking reports published by Snopes are freely and publicly available to all interested users. Snopes got its start in 1994, investigating urban legends, hoaxes, and folklore. Currently, Snopes is the oldest fact-checking site online, widely respected by journalists, folklorists, and readers for its unbiased and non-political background (Amazeen, 2020, p. 543). With millions of visits per month, Snopes is one of the most popular fact-checkers. Snopes fact-checkers avoid personal views in their reports, and there are no opinion pieces. In the fact-checking reports, Snopes uses colourful symbols, visually similar to road signs, to denote the rating outcome (Juneström, 2021, p. 506). Snopes employs altogether 20 rating categories (https://www.snopes.com/fact-check-ratings/). For brevity, only the nine categories found in the sample are listed below.

  1. True. This rating indicates that the primary elements of a claim are demonstrably true.

  2. Correct Attribution. The quoted material has been correctly attributed to the person who spoke or wrote it.

  3. False. The primary elements of a claim are demonstrably false.

  4. Outdated. This rating applies to items for which subsequent events have rendered their original truth rating irrelevant (e.g. a condition that was the subject of protest has been rectified).

  5. Miscaptioned. This rating is used with photographs and videos that are “real” (i.e. not the product, partially or wholly, of digital manipulation) but are nonetheless misleading because they are accompanied by explanatory material that falsely describes their origin, context, and/or meaning.

  6. Mixture. A claim has significant elements of both truth and falsity to it such that it could not fairly be described by any other rating.

  7. Unproven. This rating applies to a claim for which we have examined the available evidence but could not arrive at a true or false determination, meaning the evidence is inconclusive or self-contradictory.

  8. Labelled Satire. A claim is derived from content described by its creator and/or the wider audience as satire.

  9. Research in Progress. This rating is used with pages that present items which we are currently investigating but have not yet reached any publishable conclusion about.

The empirical data include altogether 128 fact-checking reports (for short, fact-checks) dealing with the Russo-Ukrainian conflict. The reports were published within the period 24 February 2022 to 28 June 2023. The reports were identified from the website of Snopes.com using the search terms Russia AND Ukraine, and Ukraine AND war. All of the 128 fact-checks dealing with the above topic were downloaded for closer analysis. The preliminary analysis of the material indicated that it is sufficient for the needs of qualitative content analysis because the data became saturated. It became evident that gathering additional reports would not have essentially changed the qualitative nature of the material, particularly as the Russo-Ukrainian conflict seems set to continue for the foreseeable future. The length of the fact-checking reports, including the list of references, ranged from 330 to 1840 words. The reports are structured similarly. First, a claim to be assessed is presented, followed by a description of the context in which the claim was voiced. Thereafter, the fact-checker reports how he or she assessed the validity of the claim and what kinds of sources were used as evidence. Finally, the claim is rated using one of the categories listed above.
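As a rough illustration of this selection procedure, the following Python sketch filters hypothetical report records by the study period and the search-term pairs. The record format is an assumption made for the example, not Snopes' actual data model:

```python
from datetime import date

# Study period and search-term pairs as described above.
START, END = date(2022, 2, 24), date(2023, 6, 28)
TERM_PAIRS = [("russia", "ukraine"), ("ukraine", "war")]

def in_sample(report: dict) -> bool:
    """Keep a report published within the study period whose text
    contains both terms of at least one search-term pair."""
    text = report["text"].lower()
    return (START <= report["published"] <= END
            and any(a in text and b in text for a, b in TERM_PAIRS))

# Hypothetical report records, for illustration only.
reports = [
    {"published": date(2022, 3, 1),
     "text": "Did a video show Russia striking Ukraine?"},
    {"published": date(2021, 12, 1),
     "text": "Old rumour about the Ukraine war"},
]
print([r["text"] for r in reports if in_sample(r)])
# Only the first record falls within the study period.
```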

In order to get an overview of the reports, they were read twice. Thereafter, except for the categories Comparison by similarity, Comparison by differentiation, and Explanation, the evaluation grounds used by the fact-checkers were identified inductively from the data. The coding of the data proceeded iteratively, allowing new categories to emerge until no anomalies remained. Finally, the types of information sources used in the fact-checks were identified inductively from the material.

The analysis of the coded material started with calculating the percentage distributions of the evaluation grounds, the types of information sources, and the ratings given by the fact-checkers. The ratings were classified into three subgroups. The subgroup of valid includes two ratings, True and Correct Attribution, while claims rated as False, Miscaptioned, and Outdated formed the subgroup labelled invalid. Finally, the subgroup of mixed comprises four categories: Mixture, Labelled Satire, Unproven, and Research in Progress. The adjective “mixed” indicates that a rating category incorporates both valid and invalid elements.
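The classification and the percentage calculation can be illustrated with a short Python sketch. The subgroup mapping follows the text above, while the sample list of ratings passed to the function is hypothetical:

```python
from collections import Counter

# Mapping of the nine Snopes rating categories found in the sample
# to the three subgroups described above.
SUBGROUP = {
    "True": "valid", "Correct Attribution": "valid",
    "False": "invalid", "Miscaptioned": "invalid", "Outdated": "invalid",
    "Mixture": "mixed", "Labelled Satire": "mixed",
    "Unproven": "mixed", "Research in Progress": "mixed",
}

def subgroup_shares(ratings: list[str]) -> dict[str, float]:
    """Percentage share of each subgroup among the given ratings."""
    counts = Counter(SUBGROUP[r] for r in ratings)
    return {g: 100 * n / len(ratings) for g, n in counts.items()}

# Illustrative input, not the actual distribution of the 128 reports:
print(subgroup_shares(["True", "False", "False", "Mixture"]))
# {'valid': 25.0, 'invalid': 50.0, 'mixed': 25.0}
```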

The quantitative overview was elaborated further by means of qualitative content analysis. More specifically, following the ideas of Glaser and Strauss (2017, pp. 101–117), the coded data were scrutinised by means of constant comparison, identifying similarities and differences in the ways in which the fact-checkers employed the evaluation grounds while assessing the validity of the claims. To achieve this, the evaluation grounds were examined within three rating subgroups: valid, invalid, and mixed. Within each subgroup, the analysis focussed on the similarities and differences in the ways in which individual evaluation grounds, for example, Correctness of information and Plausibility of an event, were employed in the fact-checks. The identification of similarities and differences made it possible to capture the variation in the ways in which the grounds were employed in fact-checking. The constant comparison also enabled the identification of typical (common) ways in which diverse grounds were used per rating category. In the presentation of the empirical findings, the focus will be placed on the latter aspect rather than on characterising the wide variety of ways in which individual grounds were employed. This is because typical cases exhibit the essential features of the use of grounds and thus capture the most important characteristics of validity assessment. Similarly, by focussing on typical cases, the constant comparison was used to identify the ways in which the fact-checkers commonly made use of information sources to obtain evidence for their assessments.

In the findings section, references are made to individual fact-checking reports by technical codes such as Fact check 112; the numbers originate from the chronological order of the reports so that number 1 is assigned to the oldest fact-check. As the fact-checks collectively represent the assessments published by Snopes.com, no attempts will be made to compare the reports written by individual fact-checkers.

5. Findings

5.1 Quantitative overview

Overall, the assessments made by the fact-checkers inclined towards the negative end of the scale, as indicated by Table 2 below.

The share of negatively coloured assessments (False, Miscaptioned, and Outdated) indicative of the invalidity of the claims was 54.7%, while the share of positive ratings (True, Correct Attribution) was 29.7%. The share of mixed assessments was 15.6%. While assessing the validity of the claims, the fact-checkers employed altogether 11 individual evaluation grounds, specified in Table 3.

Correctness of information was clearly the most frequently used evaluation ground, followed by Temporal compatibility and Verifiability of an event. The fact-checkers also drew attention to other contextual factors dealing with the object of assessment, for example, the degree to which the description of an event is congruent with the real place of occurrence of an incident. While assessing the validity of a claim, Comparison by similarity appeared to be more popular than Comparison by differentiation. In some cases, the fact-checkers assessed the honesty of the presenter of a claim, particularly when it was suspected that the claim served the ends of propaganda.

To justify their assessments, the fact-checkers made a number of references to diverse sources of information used as evidence. The fact-checks, including the lists of sources placed at the end of the reports, contained altogether 627 references to information sources. However, only 315 of them were actually used in the fact-checking reports by explicitly drawing on the content of a source. This suggests that the lists of sources contain a number of items that were included only as additional evidence. Thus, on average, a fact-check employed five information sources, of which two to three were significant for the content of the report. The distribution of the 315 sources actually used in the assessments is specified in Table 4. It should be noted that in Table 4, the source categories are not completely mutually exclusive with regard to social media forums. For example, social media posts could be published by government officials, while local eyewitnesses can make videos and post them on social media. In the categorisation of the information sources, the main criterion is the creator of the information content. For example, if a local government official publishes a tweet, the information source is “local government official”, not a tweet. By comparison, if an anonymous user shares a video on Twitter and the original creator of the information content cannot be identified, the information source is classified as “video presented in social media”.
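This categorisation rule can be expressed as a small decision function. The sketch below is illustrative only; the parameter names are hypothetical:

```python
from typing import Optional

# Sketch of the categorisation rule described above: the main criterion
# is the creator of the information content. If the original creator can
# be identified, the source is coded by its author; otherwise it falls
# back to the form and platform of the content.
def categorise_source(creator: Optional[str], form: str) -> str:
    if creator:                  # original creator can be identified
        return creator           # e.g. "local government official"
    return f"{form} presented in social media"

print(categorise_source("local government official", "tweet"))
# -> local government official
print(categorise_source(None, "video"))
# -> video presented in social media
```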

As Table 4 indicates, governmental sources such as statements by presidents and ministers were used most frequently as evidence. Videos and photographs presented in social media were also popular for this purpose, as were newspaper articles, news agencies and television programmes. Moreover, posts submitted to social media platforms such as Twitter and Facebook were often used as evidence. The rest of the sources employed by the fact-checkers were distributed across a wide range, from experts such as professors to non-governmental organisations. Finally, there was a variety of miscellaneous sources mentioned only once, for example, a social activist and the Constitution of the United Nations.

The quantitative overview presented above will be elaborated by the findings of the qualitative content analysis. As noted above, it demonstrates how the fact-checkers typically employed evaluation grounds while rating the claims as valid, invalid or mixed, and how they used information sources as evidence. To achieve this, the presentation of the qualitative findings focusses on the review of one or two fact-checks per rating category, illustrating how evaluation grounds and information sources were used in typical cases when a claim was considered as True, Miscaptioned, or Mixture, for example. In this context, the qualifier “typical” means that a fact-check chosen for review exhibits the employment of essential evaluation grounds and information sources characteristic of a rating. The qualitative findings demonstrate that the employment of evaluation grounds and evidence is context-sensitive, depending on the topic of the claim, as well as its information content and the ways in which it is presented.

5.2 Confirming valid claims

5.2.1 Rated as true

Table 2 indicated that altogether 21.9% of the claims were rated as True. In the simplest cases, this rating was given by drawing on the combination of two evaluation grounds, that is, Comparison by similarity and Correctness of information. A typical example of this approach is offered by Fact check 32. A social media user shared a video, claiming that “A former railway equipment factory in Pennsylvania shifted its activities to produce ammo for Ukrainian forces”. To examine the validity of this claim, its content was compared to a few sentences of an article published in Time magazine in February 2023. The article offered evidence for the validity of the claim because it stated that “Located in Scranton, Pennsylvania, the plant did repair steam locomotives a century ago”. The article also said the plant, “which is owned by the U.S. Army, is contracted to make over 11,000 shells per month”. The article estimated that “Ukraine had been firing 6,000 to 7,000 artillery shells per day”. In addition, the article stated that the “Pennsylvania facility is not the only U.S. plant making ammunition for Ukraine”. On this basis, drawing on numerical data obtained from an established magazine, the fact-checker concluded that the claim offers correct information and can be rated as True.

Fact-checking was rendered more difficult in cases in which the assessment was based on evidence offered by photographs and videos. Fact check 42, about the Nova Kakhovka Dam failure, exemplifies such assessments well. The dam collapsed on June 6, 2023, after the destruction of the Russian Kakhovskaya Hydro Power Plant. A social media user shared a video and pictures, claiming that they “authentically show a flooded Nova Kakhovka Administrative District following the collapse of the Novo Kakhovka Dam”. The fact-checking concentrated on the examination of the pictures by comparing them to photographs and videos taken earlier of the dam. In the assessment, the employment of the grounds of Verifiability of an event and Spatial compatibility occupied a central role because it enabled the conclusion that the flood referred to in the claim really occurred in the Nova Kakhovka Administrative District. As the fact-checker put it, “among the first photographs and videos purporting to show post-failure flooding came from the administrative section of Nova Kakhovka, where swans were viewed swimming in front of ornate public buildings: these images are authentic”. Second, Comparison by similarity was employed as an evaluation ground when the pictures shared by the presenter of the claim were compared with Russian state media’s photographs of the flooded administrative centre. These pictures matched those reported on by other outlets, including drone footage aired by The Guardian, an established newspaper. The authenticity of the pictures shared by the presenter of the claim was confirmed further by drawing on an additional information source, that is, Jevgenia, a local eyewitness. She reported her observations about the flood to Reuters, a news agency, indicating that “the water was up to the knees of Russian soldiers walking the main street in high rubber boots.” Using the above evaluation grounds, confirmed by the evidence obtained from complementary sources of information, the claim was rated as True.

5.2.2 Correct Attribution

Among the rating categories employed by Snopes, Correct Attribution indicates that the quoted material, for example, a speech, has been correctly attributed to the person who delivered it. This rating was mostly used when assessing the validity of claims presented by presidents and politicians. Fact check 112, examining a statement by former U.S. President Donald Trump, exemplifies this approach. In an interview recorded one day before Russia invaded Ukraine, Trump spoke with two hosts of the Clay Travis and Buck Sexton Show. It was claimed that during the interview, Trump characterised Russian President Putin’s Ukraine strategy as “savvy” and “genius.”

The fact-checking was based on a detailed analysis of the transcript of the video comprising the 34-min interview. As the evaluation task was relatively simple, that is, whether or not Trump used the words “savvy” and “genius” in the interview, the fact-check was based on the employment of Comparison by similarity as a ground. The comparison revealed that, beginning at the 2:20 mark in the transcript, Trump said: “I went in yesterday and there was a television screen, and I said, this is genius. Putin declares a big portion of Ukraine. Putin declares it as independent. So, Putin is now saying, it is independent, a large section of Ukraine. I said, how smart is that? He is gonna go in and be a peacekeeper …. here is a guy who is very savvy”. The comparison of the wording of the claim and Trump’s statement indicated that they are similar and that the claim is valid. In the fact-checking, the transcript functioned in a double role because it was both the object of assessment and a source of information offering evidence. However, to put Trump’s statement in a broader context, two additional information sources were used. In a later interview with Fox News, Trump said that the invasion was “something that should never have happened,” as reported by Politico, a politics-focused news outlet. Nevertheless, despite these reservations, the fact-checker concluded that within the particular context of the Clay Travis and Buck Sexton Show, it is true that Trump described Putin’s strategy as “savvy” and “genius.” As such, this claim was rated as Correct Attribution, and thus offers an example of political information that validly depicts a statement presented by the former president.

5.3 Debunking invalid claims

Table 2 indicated that the claims assessed by the fact-checkers were more frequently rated as invalid (54.7%) than valid (29.7%). This suggests that the fact-checks tend to focus on claims whose validity is doubted in some respect. In the present study, such assessments are analysed under the rating labels of False, Miscaptioned, and Outdated.

5.3.1 False

Out of the 128 fact-checks, the rating of False was the most frequent; no less than 32% of the claims were given this assessment. Common to such ratings was the conclusion that the claims intentionally spread misleading information. Typically, the information content of such claims is ambiguous, requiring the employment of multiple grounds in order to obtain a reliable evaluation result. One such claim, distributed in social media, asserted that in early March 2023 in Lviv, Russian forces used a Kinzhal hypersonic missile to strike an underground NATO command centre built 400 feet underground, killing up to 300 people, including at least 40 high-level foreign advisors (Fact check 15). The claim was circulated in diverse sources, for example, Cairns Daily, an Australian website which publishes a wide array of disinformation. Moreover, The Intel Drop, a website that pushes conspiracy theories, published the claim on March 9, 2023, leading to some of the earliest viral tweets about the purported strike.

To assess the validity of the claim, the fact-checker employed five evaluation grounds. First, drawing on Explanation and Plausibility of an event, the fact-checker reasoned that the event depicted in the claim is implausible because it is unlikely that a Kinzhal missile could penetrate a military base located 400 feet underground. Moreover, a NATO command centre placed in Lviv seemed wildly unnecessary for the alliance, given that Lviv is located less than 20 miles from NATO member state Poland, with its significant build-up of NATO forces. Second, drawing on the ground of Verifiability of an event, it was pointed out that Lviv is far away from the front lines and that in March 2023, there were no Russian soldiers in Lviv who could have pulled 40 dead from the wreckage of the underground headquarters. Thus, given this state of affairs, it could not be verified whether the event depicted in the claim had occurred in reality. The date of the purported strike seemed to be wrong, too. Kinzhal missiles were indeed used in an attack on Lviv on 1 March 2023, but the purported demise of a military headquarters occurred on March 8, 2023. Evidence for the temporal incompatibility of the event was obtained from a viral tweet from March 30, 2023, including a photo of the purported attack. However, it appeared that the photo was taken in 2021 and shows the aftermath of a Russian airstrike in Syria, not Ukraine, thus indicating spatial incompatibility. Drawing on the above grounds, the fact-checker concluded that “Because the premise underlying viral claims about a NATO command centre strike borders on impossible and is militarily improbable, because the only evidence supporting the event comes from websites that deceptively mix unrelated news stories to create legitimate sounding viral claims, and because there is no evidence this event occurred, Snopes ranks the claim False.”

Deepfake images have become tools of contemporary information warfare. This is exemplified by Fact check 78, focussing on a video distributed by a social media user. The video suggests that President Putin announced surrender in March 2022, thus meaning the end of Russia’s war with Ukraine. The scrutiny of the video revealed, however, that it is not a genuine video of Putin. Reuters, an established news agency, provided a partial translation of this deepfake video, writing: in the video, Putin appears to say, “We have managed to reach peace with Ukraine” and goes on to announce the restoration of independence of Crimea as a republic inside Ukraine. A tweet sharing the video with a caption in Ukrainian reads in translation, “The President of the Russian Federation announced the surrender of Russia. Russian soldier, drop your weapons and go home while you are alive!”

The scrutiny of the video revealed that the claim offered incorrect information because in reality, Putin has made no such announcement. The video was created by manipulating a real video of Putin that was originally posted by the Kremlin on February 21, 2022. While Putin’s movements in the manipulated video and the genuine video generally matched up, the former version manipulated Putin’s mouth to make it appear as if the fabricated audio was truly coming from the Russian president. This was most noticeable during portions of the video where the real Putin is silent and the fake Putin is speaking. The debunking of the claim was thus based on the analysis of the audio-visual elements of the videos, drawing on three evaluation grounds: Correctness of information in the claim, Comparison by similarity and Comparison by differentiation. Similar to Fact check 112 dealing with President Trump’s statement reviewed above, the deepfake video about Putin’s speech functioned in a double role because it was both the object of evaluation and a source of information offering evidence for the assessment.

5.3.2 Miscaptioned

Claims rated as invalid were often presented in miscaptioned videos and photographs serving the ends of information warfare. For example, on June 7, 2023, social media users shared an old video to spread a narrative about the Nova Kakhovka Dam explosion in Russian-occupied Ukraine (Fact check 128). It was claimed that the Ukrainian government “decided to purposefully drown their own citizens”, keeping the gates at the river open, and that this was a “clear indication that Ukraine was the original culprit.” The fact-checking revealed that the video is authentic, but it had already been shared by a local media outlet in April 2023 when massive floods occurred in Ukraine. Thus, using the ground of Verifiability of an event, it was concluded that the incident (flood) depicted in the video had really occurred. However, given that the recording was published several weeks before the destruction of the dam, the claim was temporally incompatible with the real event. In April 2023, the Ukrainian State Emergency Service reported on Telegram that several rivers had broken their banks, causing flooding in various regions of the country. Ukrhydroenergo, the largest hydropower generating company in Ukraine, announced on April 18, 2023, exactly when the original video was posted on Instagram, that early snowmelt and rainfall were causing rising water levels. Moreover, a similar video was shared on TikTok on April 18, 2023. The rating of Miscaptioned was further supported by the statement of Shayan Sardarizadeh, a journalist at BBC Verify. He underlined the fact that the post with the miscaptioned video was shared by many pro-Kremlin accounts, thus suggesting that the claim exhibits dishonesty in argumentation. Employing Verifiability of an event, Temporal compatibility and Honesty in argumentation as evaluation grounds, bolstered by information sources offering mutually supportive evidence, the claim was considered to communicate misleading information.

5.3.3 Outdated

Claims rated as Outdated are no longer valid because subsequent events have rendered their original truth rating irrelevant. In February 2022, shortly after Russia’s attack on Ukraine, a social media user shared a photo of a Russian missile dubbed by NATO as “Satan”, claiming that a Russian priest was blessing it. The fact-checking was rendered more difficult because it was not possible to locate the original source of the photograph. A reverse image search showed that the picture had been on the Internet for a number of years preceding the Russian invasion of Ukraine. Thus, Temporal compatibility was a major ground by which the validity of the claim was evaluated. The Swiss news outlet 20 Minutes tracked down the original image, indicating that it is located in the database of the Moscow City News Agency. The description states that the photograph was taken in 2015, and it was captioned, “Ritual blessing of the participants in the Victory Parade and consecration of the launchers on the Khodynka field”.

More evidence for the validity assessment was obtained from the British newspaper The Times of London. It reported in 2019 that blessing various objects was a regular practice for Russian priests. However, the story reported that a church commission voted in 2019 to remove missiles from the category of items priests should bless. The Russian state-run media outlet Ruptly posted a video on its platform in 2014 titled, “Russia: Priests bless Topol-M ICBMs ahead of Victory Day,” although the video is no longer available. The image in question is included in a 2017 story published by the Ukrainian news site Euromaidan Press, with a caption stating, “A Russian Orthodox priest of the Moscow Patriarchate ‘blesses’ a Topol-M nuclear intercontinental ballistic missile.” Drawing on the additional grounds of Comparison by similarity and Correctness of information, and the evidence offered by the above sources of information, the fact-checker concluded that the photograph depicts events that occurred before the Russian Orthodox Church voted to stop blessing missiles. On the other hand, it was reasoned that if the caption in the Euromaidan Press story is accurate, the missile depicted is not the one designated by NATO as Satan. Furthermore, the events in question took place years before Russia invaded Ukraine. Thus, there were good reasons to believe that the claim presented in February 2022 was no longer valid.

5.4 Mixed assessments

Altogether 15.6% of the fact-checks resulted in ratings indicative of “in-between” assessments of the validity of claims. In the present study, such ratings are referred to by the adjective mixed. This category contains four ratings: Mixture, Unproven, Labelled Satire, and Research in Progress.

5.4.1 Mixture

“Mixture” was a particularly common rating in cases in which the fact-checking focussed on ambiguous statements presented by politicians. In a typical case, Fact check 27, Snopes examined what exactly President Zelensky meant when he held a news conference on the first anniversary of Russia’s invasion of Ukraine. On March 1, 2023, a Twitter user shared a clip of one of Zelensky’s answers during the news conference, writing, “Is this real? Zelensky saying Americans will send their sons and daughters to war for Ukraine and potentially die.”

The transcript of the video taken at the press conference functioned as a source of information offering evidence for the fact-check. First, drawing on the ground of Correctness of information, it appeared that the above claim does not accurately depict Zelensky’s statement because his words were taken out of context and misrepresented. To be exact, Zelensky said in the video: “The U.S. will have to send their sons and daughters exactly the same way as we are sending [our] sons and daughters to war, and they will have to fight, because it’s NATO that we are talking about, and they will be dying.” However, in this particular context, he was talking about a hypothetical situation: if Ukraine loses the war against Russia, that would lead to a broader conflict resulting in Americans being pulled into the fighting. Even though Zelensky did not declare that the U.S. will definitively send soldiers to fight in or die for Ukraine’s war, his full statement included truthful elements because he referred to Americans sending their sons and daughters to war. The employment of the combination of three evaluation grounds, that is, Correctness of information presented in the claim and the comparison of the two versions of the statement by similarity and differentiation, resulted in the conclusion that the claim incorporates both truthful and misleading elements, and is thus indicative of political information rated as Mixture.

5.4.2 Unproven

Snopes rates a claim as Unproven if the available evidence for its validity is inconclusive or self-contradictory. Fact check 59 examined a video shared in social media forums, obviously serving the ends of information warfare. The video was accompanied by the claim that President Putin’s legs buckled during a ceremony in June 2022 either because he has terminal cancer or Parkinson’s disease. Another video that appeared to have been captured after the ceremony ended showed Putin standing with other attendees while having a drink. Moreover, a longer video from the same ceremony showed Putin entering the room and standing for quite a long time.

The fact-checker scrutinised the videos in order to confirm or refute the above claim. To this end, comparisons were made by looking at a number of past speeches made by Putin to see if he often shifts his legs while standing behind or near a podium. Beginning at the 2:00 mark in a video from June 2021, Putin can be seen moving back and forth while speaking into a microphone. In another video, from a speech in September 2020, during the first minute alone, Putin is seen shifting his legs back and forth altogether eight times. Drawing on the detailed comparison by similarity, however, the fact-checker found no evidence that Putin’s legs were shaking at the June 2022 ceremony due to terminal cancer, Parkinson’s disease, or any other illness. Thus, due to inconclusive evidence, it was not possible to determine whether the claim is valid or invalid.

5.4.3 Labelled satire

Claims incorporating satirical elements are difficult to assess because satire draws on humour, irony, sarcasm and ridicule. Fact check 80 examined the claim contending that Ukraine rejected Dee Snider’s offer to use the song “We’re Not Gonna Take It” as a battle cry. On February 28, 2022, the Madhouse Magazine website published a story about Russia’s invasion of Ukraine using the above claim as a headline. The article also said that President Zelensky made negative comments about the song.

The fact-check drew on the evaluation ground of Correctness of information. A closer examination of the above story revealed that Zelensky purportedly said “I never heard of this song before, so I took a listen. Oh my God, it was the worst piece of 1980’s hair metal trash I ever heard. Tell Snider he can offer that song to Putin, maybe he will like it”. The fact-checking also disclosed that Snider had tweeted about Ukraine, posting, “I absolutely approve of Ukrainians using ‘We are not gonna take it’ as their battle cry!” However, the rest of the story (as told by Madhouse Magazine) never happened. The claim incorporated incorrect information because Zelensky did not make negative comments about Snider and his song. The nature of the claim was further clarified by examining the footer at the bottom of the Madhouse Magazine website. The publication characterises itself as the “Greatest Rock n Roll Comedy Magazine in the world. We bring you a twisted satirical view of the music and entertainment world”. Madhouse Magazine is also marketed as “the only place you can find hysterical satire, phony interviews, and imaginary concert reviews”. Given the particular genre of the magazine, the fact-checker concluded that the article with the above headline came from a satirical website. Therefore, the claim does not communicate information intended to be taken as true because the aim of the article is to offer a satirical contribution.

5.4.4 Research in Progress

Finally, Snopes examined claims dealing with events about whose nature there was, for the time being, insufficient evidence. Fact check 119 is particularly illustrative in this regard because it examined a recent incident whose nature was still ambiguous. Two viral videos distributed in social media forums were accompanied by the claim that on May 3, 2023, Ukraine attacked the Kremlin in Moscow with two drone strikes. One of the videos showed smoke rising from the Kremlin, while in the other video, an aerial object appeared to crash and explode on one of the building’s famous domes.

The examination of the evidence about the purported attacks was rendered more difficult because at the time of the fact-checking, there was no independent reporting confirming who was responsible for the attack. Thus, the verifiability of the event formed an important evaluation ground. In this regard, The New York Times (NYT) offered an authoritative source because it had verified two videos, one of which showed “what appears to be a drone flying toward and exploding over the Kremlin Senate, which houses the president’s executive office; another video shows the dome of the Senate building on fire.” According to NYT, it is likely that these are the same videos circulating on social media. To support this conclusion, NYT described how it verified the footage: by synchronising the footage, the newspaper’s reporters were able to confirm that two videos filmed from different angles captured the same explosion over the building, which houses the president’s executive office. Furthermore, the fact-checker drew on the video published by Associated Press (AP); the video showing smoke rising from the site was published overnight on a local Moscow News Telegram channel. Accompanying text on the video said residents in a nearby apartment reported hearing bangs and seeing the smoke around 2:30 a.m. local time. However, AP was unable to verify the authenticity of the video. Similarly, the BBC also described the footage as unverified. According to Al Jazeera English, the video showing smoke rising behind the Kremlin Palace appeared to be shot from across the river; it was originally posted on a group for residents of the neighbourhood facing the Kremlin from across the Moskva River and picked up by Russian media. Meanwhile, Ukrainian President Zelensky told TV2, a Nordic broadcaster, “We don’t attack Putin or Moscow. We fight on our territory.” Similarly, Mykhailo Podolyak, an adviser to President Zelensky, told The New York Times, “Ukraine definitely has nothing to do with the drone attacks on the Kremlin.” Finally, Anton Gerashchenko, adviser to the Ukrainian minister of internal affairs, claimed that Russian partisans were likely the perpetrators of the attack.

The above fact-check is indicative of systematic attempts to confirm or refute a claim. The assessment was based on the employment of two evaluation grounds: Verifiability of an event and Comparison by similarity. The assessment was supported by triangulating five information sources offering partially conflicting evidence. However, given that the authenticity of the videos could not be confirmed to a sufficient degree, the claim was rated as Research in Progress. As in similar cases, the fact-checker indicated that Snopes will “update the story when we learn more”.

6. Discussion

The present study elaborated the picture of fact-checking by demonstrating how the validity of claims dealing with the Russo-Ukrainian conflict is assessed by employing evaluation grounds of diverse kinds and how such assessments are bolstered by evidence acquired from a variety of information sources. The quantitative findings suggest that Snopes fact-checkers make use of a relatively broad repertoire of evaluation grounds. However, only a few of them appeared to be particularly significant for the assessment. Correctness of information presented in a claim was the most frequently used ground. This is understandable because the factuality of a claim depends on the degree to which it is able to accurately depict an event or issue. Similarly, to put the fact-check on a firm basis, Verifiability of an event is an important and frequently used evaluation ground because it helps to find out whether the videos and photographs enclosed by the presenter of the claim depict an incident that has occurred in reality.

Overall, the qualitative findings suggest that the applicability of an evaluation ground (or a combination of grounds) is context sensitive. The topic of a claim, as well as the way in which information is presented in it, affect the selection of the grounds and of the information sources offering evidence. In the simplest cases, it can be sufficient to compare the content of a claim with evidence obtained from an authoritative information source to arrive at the conclusion that the claim offers true political information or that a statement is correctly attributed to a person. At the other end of the continuum are the complex assessments focussing on invalid claims accompanied by manipulated videos or photographs. In these cases, reliable fact-checking requires a combination of multiple evaluation grounds. They are used to reveal the correctness or incorrectness of the information content of a claim; in addition, the evaluation grounds are employed to examine the contextual features of an event depicted by the presenter of a claim. In this regard, the grounds of Verifiability of an event, Temporal compatibility, and Spatial compatibility occupy a central role. In assessments drawing on the reasoning of the fact-checkers, Comparison by similarity, Comparison by differentiation, and Explanation are particularly important evaluation grounds because they represent central cognitive mechanisms in decision making (Zhang et al., 2009). Similarly, the employment of the ground of Plausibility of an event is based on reasoning because it requires the capability to assess the realistic likelihood that an event occurred. Finally, Honesty in argumentation is an evaluation ground that reflects the moral dimension of fact-checking, that is, judgements about what is acceptable or unacceptable behaviour while sharing politically sensitive information.

The empirical analysis also revealed that although the fact-checkers employed a wide variety of information sources offering evidence for the validity assessments, only a few source types were used frequently. In this regard, statements presented by government representatives, as well as videos and photographs distributed in social media, were particularly important. In many cases, the transcripts of videos functioned in a dual role in that they were both objects of fact-checking and information sources offering evidence for or against the validity of a claim. Moreover, newspaper articles published in legacy media, news agencies, and broadcasting companies were actively employed as sources offering evidence for the assessments. Independent of the source of information, its use as evidence occurred in a relatively straightforward way: the fact-checker first referred to a source, then briefly depicted its main information content, and finally indicated how it supported or contradicted an assessment drawing on an evaluation ground, for example, Verifiability of an event.

The novelty value of the empirical findings can be assessed by making a few comparisons with prior studies on fact-checking. The findings partially differ from the empirical observations of Graves (2017, pp. 524–528), who identified the main elements that a typical fact-check contains. Graves found, for example, that fact-checkers prefer experts for the interpretation of the data. In the present study, the role of experts as information sources appeared to be marginal because the most frequently used sources were government representatives, videos, photographs, newspaper articles and television programmes. Graves (2017) also found that fact-checking reports typically rely on 5 to 10 distinct sources. In the present study, the number of sources included in the list of references was somewhat lower, that is, 5 on average. On the other hand, similar to the observations made by Graves, Snopes fact-checkers made systematic attempts to make the verification process public for the sake of transparency. To achieve this, the rating criteria were explicated clearly, and the verdicts were justified in detail. In many cases, the fact-checkers also explained how they chose the information sources bolstering the assessments.

The findings of the present study also support the observations made by Vu et al. (2022) about fact-checking dealing with climate change. They found that 53% of the fact-checks were determined as either false (39%) or mostly false (14%), while about one-third (32%) were either true or mostly true. In the present study, the share of ratings indicative of invalid claims was quite similar, that is, 54.7%, while the share of valid ratings was roughly the same as in Vu et al.’s (2022) study, that is, 29.7%. These findings support the conclusion that negativity piques the interest of fact-checkers because they are drawn toward events that might have undesirable consequences, for example, due to the spreading of false rumours. Further support for this assumption can be obtained from the empirical study conducted by Amazeen (2016), who showed that fact-checkers are more likely to examine negative political ads than positive ones. In comparison, fact-checkers assessing claims related to climate change were more active in using scientific sources: of those fact-checks, 26% cited scientific studies and 41% referred to scientists. In the present investigation, the role of scientific sources remained very marginal. This may be due to the difference in topics: compared with climate change, there is so far not much scientific information about the ongoing Russo-Ukrainian conflict.

The findings of the present study also partially support the conclusions drawn by Steensen et al. (2023) and Soo et al. (2023). They found that fact-checkers favour statistical and governmental sources offering numerical data in particular, indicative of the attempt to attain “pre-epistemic certainty” associated with authoritative sources whose factuality can be taken for granted. In the present investigation, too, government representatives were used quite frequently. However, evidence acquired from statistics remained marginal, perhaps simply because it is difficult to find statistical information about the Russo-Ukrainian conflict. Instead, while assessing the claims dealing with the conflict, legacy media, for example, The New York Times and the Washington Post, as well as established news agencies like AP and Reuters, were actively used while seeking support for the validity assessments. This suggests that the checking of political information about controversial issues tends to lean on authoritative sources because it is likely that they disseminate information whose reliability has been checked beforehand.

Finally, the findings have practical implications concerning the limitations of fact-checking. Andersen and Søe (2020) and Vinhas and Bastos (2022) contend that the fact-checking of fake news in particular represents a Sisyphean task because false information is here to stay. The fact-checking made by Snopes exemplifies this “Sisyphean” work well. It is evident that the ongoing Russo-Ukrainian conflict will ferment information warfare manifesting itself in a growing number of misleading claims, accompanied by manipulated videos and photographs in particular. Despite this, fact-checkers have no choice but to continue their efforts to enhance the credibility of political information by verifying or debunking claims that are controversial or of unclear accuracy. This requirement applies to fact-checkers working in different media such as newspapers and broadcasting companies, as well as to those who work in fact-checking services like PolitiFact and Snopes.com. To this end, the consistent use of evaluation grounds of diverse types and clearly expressed justifications of why a statement was considered true, partially true or false are particularly important because they enhance the credibility of fact-checking in the eyes of the readers. Similarly, for the credibility of fact-checking, it is important that it is based on a deliberate choice of information sources supporting the evaluations.

7. Conclusion

The main contribution of the present study is the elaboration of the ways in which fact-checkers make use of diverse evaluation grounds to demonstrate that a claim offering political information is valid, invalid or mixed in nature. The findings also provide comparative observations about the ways in which fact-checkers make use of diverse information sources to obtain evidence for their assessments.

The findings highlight the importance of fact-checking that is based on the complementary use of diverse evaluation grounds. The findings also emphasise the significance of using a sufficient repertoire of information sources offering credible evidence for the validity assessments. As the present study concentrated on a sample of fact-checks reported by Snopes – a single fact-checking organisation – and the empirical analysis focussed on claims about a particular topic, that is, the Russo-Ukrainian conflict, more research is required to examine fact-checking dealing with other issues. It is evident that comparative studies would enrich the picture of fact-checking and further elucidate its potential as a way to combat disinformation in the “post-truth era”.

Figures

Figure 1. The research framework

The evaluation grounds used by fact-checkers

Evaluation ground | Definition | Example
Comparison by similarity | The object of assessment is characterised by an individual attribute by pointing out that the object is similar to another object | “Buildings shown in the more recent photograph match their location in the April 2022 photo.” (Fact check 6)
Comparison by differentiation | The object of assessment is characterised by an individual attribute by pointing out that the object is different compared to another object | “It compared pictures of Putin purportedly taken in early 2023, and then in March 2023 in Ukrainian territories, in which his chin in the centre image looked noticeably different from the other two.” (Fact check 17)
Explanation | The object of assessment is characterised by an individual attribute that is perceived as causal because it is seen as a factor capable of producing positive or negative consequences for actors in the future | “If Russia’s statement has any truth to it, then none of the residential structures hit during the May 30 attack were their intended target.” (Fact check 85)
Correctness of information | The degree to which information presented in a claim accurately describes the object of assessment | “If such a clinic exists, the phone number displayed is incorrect. In reality, the Russian number 495–728-5000 belongs to the U.S. Embassy in Moscow.” (Fact check 22)
Temporal compatibility | The degree to which information presented in a claim is in congruence with the real occurrence time of an event | “Accompanying text on the video said residents in a nearby apartment reported hearing bangs and seeing the smoke around 2:30 a.m. local time.” (Fact check 119)
Spatial compatibility | The degree to which information presented in a claim is in congruence with the real occurrence place of an event | “Several Internet sleuths geolocated the video’s location, placing it in a town just northeast of Moscow named Kolchugino.” (Fact check 123)
Plausibility of an event | The degree to which information presented in a claim offers a believable description of the likelihood of occurrence for an event | “It is unlikely that an entire Ukrainian military convoy could travel with such impunity this deep into enemy territory.” (Fact check 16)
Verifiability of an event | The degree to which information presented in a claim describes an event that occurred in reality | “It is impossible to know what the intended target would be for drones that were disabled or destroyed prior to reaching that target.” (Fact check 2)
Honesty in argumentation | The degree to which information presented in a claim fairly and sincerely describes the object of assessment | “Recently, fake news about the alleged activities of American military biological laboratories in Ukraine has been spread in the media and social networks”. (Fact check 116)

Source(s): Created by the author

The percentage distribution of the ratings (n = 128)

Rating | %
False | 32.0
True | 21.9
Miscaptioned | 21.1
Correct attribution | 7.8
Mixture | 7.8
Research in progress | 3.2
Labeled satire | 2.3
Unproven | 2.3
Outdated | 1.6
Total | 100.0

Source(s): Created by the author
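For clarity, the aggregate validity shares cited in the Discussion can be reconstructed from this rating distribution. The grouping below is a plausible reading consistent with the reported figures, assuming that False, Miscaptioned and Outdated indicate invalid claims, that True and Correct attribution indicate valid claims, and that the remaining categories are mixed or unresolved:

\begin{aligned}
\text{Invalid} &= 32.0 + 21.1 + 1.6 = 54.7\% \\
\text{Valid} &= 21.9 + 7.8 = 29.7\% \\
\text{Mixed/unresolved} &= 7.8 + 3.2 + 2.3 + 2.3 = 15.6\%
\end{aligned}

The three groups sum to 100.0%, matching the distribution above.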

The percentage distribution of evaluation grounds used in the validity assessments (n = 410)

Evaluation ground | %
Correctness of information | 39.0
Temporal compatibility | 16.3
Verifiability of an event | 12.3
Spatial compatibility | 10.0
Comparison by similarity | 8.8
Honesty in argumentation | 4.6
Explanation | 3.4
Comparison by differentiation | 2.9
Plausibility of an event | 2.7
Total | 100.0

Source(s): Created by the author

The percentage distribution of information sources actually used in fact-checking reports (n = 315)

Source of information | %
Government (e.g. president or minister) | 15.6
Video presented in social media | 12.7
Photograph presented in social media | 12.7
Newspaper article | 9.2
Post (e.g. tweet) submitted to social media | 8.3
News agency | 7.9
Television programme | 7.6
Fact-checking organisation (e.g. PolitiFact) | 2.5
Expert (e.g. professor) | 2.2
Intergovernmental organisation (e.g. the United Nations) | 2.2
Journalist/Investigative journalism group | 2.2
Private company | 1.9
Local resident or eyewitness | 1.6
News website | 1.6
Politician | 1.3
Encyclopedia | 1.0
Local government official | 1.0
Military unit representative | 1.0
Authentication service | 0.6
Governmental agency (e.g. NASA) | 0.6
Magazine article | 0.6
Memorandum | 0.6
Non-governmental organisation | 0.6
Sources of other kind (e.g. newsletter and social activist) | 4.5
Total | 100.0

Source(s): Created by the author

References

Amazeen, M.A. (2016), “Checking the fact-checkers in 2008: predicting political ad scrutiny and assessing consistency”, Journal of Political Marketing, Vol. 15 No. 4, pp. 433-464, doi: 10.1080/15377857.2014.959691.

Amazeen, M.A. (2020), “Journalistic interventions: the structural factors affecting the global emergence of fact-checking”, Journalism, Vol. 21 No. 1, pp. 95-111, doi: 10.1177/1464884917730217.

Amazeen, M.A., Thorson, E., Muddiman, A. and Graves, L. (2018), “Correcting political and consumer misperceptions: the effectiveness and effects of rating scale versus contextual correction formats”, Journalism and Mass Communication Quarterly, Vol. 95 No. 1, pp. 28-48, doi: 10.1177/1077699016678186.

Andersen, J. and Søe, S.O. (2020), “Communicative actions we live by: the problem with fact-checking, tagging or flagging fake news - the case of Facebook”, European Journal of Communication, Vol. 35 No. 1, pp. 126-139, doi: 10.1177/0267323119894489.

Böhnert, M. and Reszke, P. (2022), “Which facts to trust in the debate on climate change? On knowledge and plausibility in times of crisis”, in Hohaus, P. (Ed.), Science Communication in Times of Crises, John Benjamins, Amsterdam, pp. 15-40.

Duke Reporter’s Lab (2023), “Fact-checking news”, (June 21, 2023), available at: https://reporterslab.org/fact-checking/ (accessed 30 August 2023).

Garcia-Marin, D. and Salvat-Martinrey, G. (2023), “Disinformation and war. Verification of false images about the Russian-Ukrainian conflict”, Icono 14, Vol. 21 No. 1, (no pagination), available at: https://icono14.net/files/articles/1943-EN/index.html (accessed 6 October 2023).

Glaser, B.G. and Strauss, A.L. (2017), The Discovery of Grounded Theory. Strategies for Qualitative Research, Routledge, London, UK.

Graves, L. (2017), “Anatomy of a fact check: objective practice and the contested epistemology of fact checking”, Communication, Culture and Critique, Vol. 10 No. 3, pp. 518-537, doi: 10.1111/cccr.12163.

Graves, L. (2018), “Boundaries not drawn: mapping the institutional roots of the global fact-checking movement”, Journalism Studies, Vol. 19 No. 5, pp. 613-631, doi: 10.1080/1461670x.2016.1196602.

Graves, L. and Cherubini, F. (2016), The Rise of Fact-Checking Sites in Europe, University of Oxford, Reuters Institute for the Study of Journalism, Oxford, UK, available at: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/research/files/The%20Rise%20of%20Fact-Checking%20Sites%20in%20Europe.pdf (accessed 6 October 2023).

Juneström, A. (2021), “An emerging genre of contemporary fact-checking”, Journal of Documentation, Vol. 77 No. 2, pp. 501-517, doi: 10.1108/jd-05-2020-0083.

Mattes, K. and Redlawsk, D.P. (2020), “Voluntary exposure to political fact checks”, Journalism and Mass Communication Quarterly, Vol. 97 No. 4, pp. 913-935, doi: 10.1177/1077699020923603.

Moon, W.-K., Chung, M. and Jones-Jang, S.M. (2022), “How can we fight partisan biases in the COVID-19 pandemic? AI source labels on fact-checking messages reduce motivated reasoning”, Mass Communication and Society, Vol. 26 No. 4, pp. 646-670, doi: 10.1080/15205436.2022.2097926.

Morgan, M., Barker, D.C. and Bowser, T. (2015), “Fact-checking polarized politics: does the fact-check industry provide consistent guidance on disputed realities?”, The Forum, Vol. 13 No. 4, pp. 577-596, doi: 10.1515/for-2015-0040.

Savolainen, R. (2009), “Interpreting informational cues: an explorative study on information use among prospective homebuyers”, Journal of the American Society for Information Science and Technology, Vol. 60 No. 11, pp. 2244-2254, doi: 10.1002/asi.21167.

Soo, N., Morani, M., Kyriakidou, M. and Cushion, S. (2023), “Reflecting party agendas, challenging claims: an analysis of editorial judgements and fact-checking journalism during the 2019 UK general election campaign”, Journalism Studies, Vol. 24 No. 4, pp. 460-478, doi: 10.1080/1461670x.2023.2169190.

Steensen, S., Kalsnes, B. and Westlund, O. (2023), “The limits of live fact-checking: epistemological consequences of introducing a breaking news logic to political fact-checking”, New Media and Society, published online 12 February 2023 (unpaginated), doi: 10.1177/14614448231151436.

Tsang, N.L.T., Feng, M. and Lee, F.L.F. (2022), “How fact-checkers delimit their scope of practices and use sources: comparing professional and partisan practitioners”, Journalism Studies, Vol. 22 No. 10, pp. 1358-1375, doi: 10.1080/1461670x.2021.1952474.

Uscinski, J.E. (2015), “The epistemology of fact checking (is still naïve): rejoinder to Amazeen”, Critical Review, Vol. 27 No. 2, pp. 243-252, doi: 10.1080/08913811.2015.1055892.

Uscinski, J.E. and Butler, R.W. (2013), “The epistemology of fact checking”, Critical Review, Vol. 25 No. 2, pp. 162-180, doi: 10.1080/08913811.2013.843872.

Vinhas, O. and Bastos, M. (2022), “Fact-checking misinformation: eight notes on consensus reality”, Journalism Studies, Vol. 23 No. 4, pp. 448-468, doi: 10.1080/1461670x.2022.2031259.

Vu, H.T., Baines, A. and Nguyen, N. (2022), “Fact-checking climate change: an analysis of claims and verification practices by fact-checkers in four countries”, Journalism and Mass Communication Quarterly, Vol. 100 No. 2, pp. 286-307, doi: 10.1177/10776990221138058.

Walter, N., Cohen, J., Holbert, R.L. and Morag, Y. (2020), “Fact-checking: a meta-analysis of what works and for whom”, Political Communication, Vol. 37 No. 3, pp. 350-375, doi: 10.1080/10584609.2019.1668894.

Yarrow, D. (2021), “From fact-checking to value-checking: normative reasoning in the new public sphere”, Political Quarterly, Vol. 92 No. 4, pp. 621-628, doi: 10.1111/1467-923x.12999.

Zhang, P., Soergel, D., Klavans, J.L. and Oard, D.W. (2009), “Extending sense-making models with ideas from cognition and learning theories”, Proceedings of the Annual Meeting of the American Society for Information Science and Technology, Vol. 45, (unpaginated), doi: 10.1002/meet.2008.1450450219.

Corresponding author

Reijo Savolainen can be contacted at: reijo.savolainen@tuni.fi
