The spread of misinformation can be driven by a variety of factors, and some purveyors of misinformation have been shown to employ a number of different strategies and tools to enhance spread. This chapter discusses the common factors, strategies, tactics, and motivations that facilitate the spread of misinformation about science. The first section of the chapter describes key factors that contribute to its spread: digital technologies and online platforms, influence and monetization, industry public relations strategies, and information access and voids. The chapter then highlights common rhetorical tactics that are often used by actors who seek to spread misinformation about science. The last section of the chapter explores possible motivations that drive individual people to spread misinformation. Throughout the chapter, the committee discusses how the spread of misinformation about science is fundamentally shaped by the broader context of the contemporary information ecosystem and systemic factors described in Chapter 3.
In addition to the various sources of misinformation, each driven by specific reasons and motivations (see Chapter 4), the committee also identified key factors related to the contemporary information ecosystem that create conditions that facilitate the spread of misinformation about science. These factors—digital technologies and online platforms, influence and monetization, industry public relations strategies, and information access and information voids—contribute in different ways to the spread of misinformation and to who is exposed to it. Moreover, as will be discussed in more detail in Chapter 7, there are currently no specific laws in the United States that directly govern or limit the spread of misinformation, which can also contribute to its proliferation. By illuminating these factors, we can better understand the pervasiveness of misinformation about science, its differential reach, and what can be done to address it.
Digital communication technologies in general, and social media specifically, contribute, in part, to the spread of misinformation, including in the area of science. Unlike legacy journalism with its corporate gatekeepers and institutionalized fact-checking, social media platforms offer misinformation purveyors an environment that is far more conducive to engaging amenable audiences. Multiple factors contribute to the prevalence of misinformation on social media platforms; three of the most consequential are incentives related to popular content, content prioritization algorithms that privilege emotional and controversial content, and lax content moderation policies.
Social media entered Americans’ information environments in the early 21st century, starting with Myspace as the first widely used platform and continuing with Facebook, X (formerly Twitter), Instagram, TikTok, YouTube, Reddit, and a host of smaller players. This new medium operated based on a fundamentally different logic from mainstream media, which afforded access to only a privileged few. Social media allows age-verified users to create an account and log on, which means anyone can potentially attract large audiences if their content interests enough people. This characteristic of social media has allowed some individuals to gain grassroots fame, although it has not changed the fact that only a small proportion of users can do so (Hindman, 2010; SignalFire, 2020). From the perspective of a business that thrives on attention, viral is viral, whether the content is cute pets doing tricks or falsehoods about the causes and effects of COVID-19. In other words, there can be an economic incentive to allow popular content to flourish on social media platforms, even if it includes misinformation about science, because it attracts attention and boosts advertising revenue (Maréchal et al., 2020). However, this must also be weighed against some of the disadvantages of tolerating misinformation, which include negative press, angry users, advertisers’ vested interest in avoiding associations with misinformation (Interactive Advertising Bureau, 2020), and arguably even
deaths (Gisondi et al., 2022). Whether social media companies are doing enough to prevent misinformation continues to be a topic of discussion, including with respect to opportunities for potential government regulation (Helm & Nasu, 2021).
As part of profit maximization strategies, social media companies can implement two types of algorithms: content shaping and content moderation (Maréchal & Biddle, 2020b). Content shaping algorithms determine what posts users will see and are usually based on digital traces of their prior activity. Such algorithms tend to show people more of what they have already seen and expressed interest in (Kim, 2017). Content moderation algorithms automatically identify content that violates a platform’s terms of service, including, in some cases, misinformation about science (also see Chapter 7 for more on moderation). Using machine learning and other adaptive and automated techniques, substantial amounts of harmful content can be flagged and taken down before achieving viral popularity. However, these methods are not sufficient on their own, and human content moderators are needed to screen out content that eludes automated systems (Gillespie, 2018).
In accordance with the profit motives described above, content shaping algorithms have the potential to boost the visibility of misinformation about science. This can happen in multiple ways: for example, an algorithm might inadvertently show posts containing misinformation about effective treatments for cancer to a user who views and “likes” other cancer-related content. Content shaping algorithms can also surface misinformation about science through trending topics, which achieve broad visibility through short-term bursts of attention (see Basch et al., 2021; Bonnevie et al., 2023). Because they are not based on users’ past activities, trending topics can expose people to content they have not previously expressed any interest in. Whereas content shaping algorithms can amplify misinformation about science by presenting it to users, content moderation algorithms can do so by failing to flag it for removal or by incorrectly removing credible information. But misinformation is a contested category, and even humans disagree as to what qualifies (Newman & Reynolds, 2021; see also Chapter 2), so it is inevitable that algorithms designed to filter it out will fall short in some instances.
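To make the general logic of engagement-based content shaping concrete, the minimal sketch below illustrates a toy feed ranker in which a post’s visibility depends on predicted engagement and a user’s prior topic activity. This is an illustration only; the field names, weights, and scoring rule are hypothetical and do not represent any specific platform’s system.

```python
# Illustrative sketch of an engagement-based content-shaping algorithm.
# All fields and weights are hypothetical, not any platform's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    likes: int
    shares: int
    comments: int

def rank_feed(posts, user_topic_affinity, top_k=10):
    """Score posts by engagement and the user's prior interest in each topic,
    then return the highest-scoring posts first."""
    def score(post):
        engagement = post.likes + 2 * post.shares + 3 * post.comments
        affinity = user_topic_affinity.get(post.topic, 0.0)  # from past activity
        return engagement * (1.0 + affinity)
    return sorted(posts, key=score, reverse=True)[:top_k]

# A post containing misinformation is treated no differently: if it draws
# engagement and matches a user's prior interests, it ranks highly.
feed = rank_feed(
    [Post("a1", "cancer", likes=900, shares=400, comments=250),
     Post("a2", "cancer", likes=40, shares=5, comments=3)],
    user_topic_affinity={"cancer": 0.8},
)
```

Because a scorer of this kind is indifferent to accuracy, a highly engaging post containing misinformation is surfaced on the same terms as any other popular content.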
Finally, some social media platforms may be slow or unwilling to implement or enforce robust policies against misinformation, in part because prohibitions against misinformation can be in conflict with the predominant, ad-based business model. Some social media platforms do forbid misinformation in certain scientific categories—including health and COVID-19—but permit it in others, such as those pertaining to social issues (Waddell & Bergmann, 2020). This means that misinformation in unmoderated scientific categories may be more visible on some social media platforms. Freedom of speech is sometimes marshaled in defense of such lax policies, without much acknowledgment of their downsides (Smith-Roberts, 2018). The efficacy with which platforms enforce policies to address misinformation is a separate matter: not all policy violations are punished equally, and violations committed by popular or powerful users may be exempted from enforcement (Morrison, 2021; Porterfield, 2021).
The committee found substantially less research on search engines than on social media platforms, even though 68% of website traffic comes from organic and paid search results (BrightEdge, 2019) and 88% of adults in the United States use traditional search engines (Iskiev, 2023). The literature on search engines (only some of which is specific to misinformation about science) focuses on improving the epistemic quality of search results (Granter & Papke, 2018; Mazzeo et al., 2021), understanding how information quality relates to health decisions (Abualsaud & Smucker, 2019), analyzing misinformation prevalence in search results across languages (Dabran-Zivan et al., 2023), and comparing misinformation consumption between social media and search (Motta et al., 2023). Recent work by Tripodi & Dave (2023) has employed qualitative interviewing to explore how search engines can lead users to inaccurate or misleading information, finding that optimization and advertising on search engines may create conditions that make it challenging for information seekers to find accurate information about health. But overall, the limited scope of research on search engines is likely related to the lack of availability of data from such sources.
As suggested in the discussion of information sources in Chapters 3 and 4, there is variation in how much different actors are viewed as credible or trustworthy. Additionally, a small number of accounts may be responsible for a large amount of misinformation about science online (Yang et al., 2021). We refer to the ability to gain attention and encourage sharing of information as influence.
On social media, software agents known as bots promulgate false scientific claims on a daily basis, although these claims are read by few if any actual people (Dunn et al., 2020). Humans generally have greater power to distribute misinformation than bots (Xu & Sasahara, 2022), probably due to the bots’ inability to convincingly mimic human communication patterns (Luo
et al., 2022). However, bots are cheap and relatively easy to create, which may render their marginal ability to promulgate false beliefs worth the cost. Among human-controlled social media accounts, attention has long been known to follow a long-tailed distribution, wherein a small number of accounts accrue disproportionately large shares of attention (Himelboim, 2017). Accordingly, a substantial proportion of the misinformation shared on social media platforms can be attributed to a small number of prominent and highly active users (Nogara et al., 2022; Pierri et al., 2023). Many of these users are not household names, but nevertheless can reach large audiences with misinformation about science.
Aside from research studies on a few news media outlets, little quantitative research has explored how powerful actors can spread misinformation in non-social media contexts. A few studies have described how popular podcasters (Burton & Koehorst, 2020; Dowling et al., 2022) and TV channels that Goss (2023) describes as “sham journalism” misinform their respective audiences, but these are not systematic analyses. Two other relevant studies have explored talk radio. One was a survey in which talk radio listeners were found to be relatively misinformed (Hofstetter et al., 1999). The other was a content analysis of talk radio programming that found, among other results, substantial quantities of “very dramatic negative exaggeration” that “significantly misrepresents or obscures the truth” (Sobieraj & Berry, 2011, p. 40).
One reason for the lack of research on non-social media contexts may be the methodological difficulties of studying visual and audio content. Natively textual media such as newspapers, magazines, and digital text have historically been much more popular as data sources across the social sciences than non-textual media due to the greater accessibility of the former. Moreover, the state of the art in computational analysis of text has been far more advanced than for images, audio, or video: the kinds of information that can be extracted from non-textual media are much more rudimentary than what can be obtained from text. While a few studies have begun to explore misinformation beyond the textual domain (e.g., Yang et al., 2023), it is clear that the field has a way to go before its capacity to analyze images, audio, and video reaches that of text. This is an especially important area for methodological development given the massive popularity of podcasts and video platforms.
Another factor that contributes to the spread of misinformation is the ability to monetize or leverage it for profit. As discussed in the previous section, technology companies, including social media platforms, infrastructure providers (e.g., hosting companies, ad tech firms, donation platforms), and advertisers may profit financially when (mis)information circulates widely (Han et al., 2022), as is the case for social media influencers and other online content creators. Specifically, online content creators can
profit from advertising revenue shared by the social media platforms and funded by advertisers, as well as from monetization practices that circumvent social media platforms, such as affiliate marketing, selling products, or soliciting donations or subscription payments from fans (Hua et al., 2022).
Financial incentives sometimes underlie the production of intentionally fabricated news on social media platforms. For example, Silverman & Alexander (2016) reported that some producers of fabricated news in Macedonia profited from click-based advertising revenue when articles they posted on social media went viral. Additionally, other studies have shown that some political campaigns and state-level propaganda operations have employed workers to post disinformation online for extra income (Han, 2015; Ong & Cabañes, 2019).
Specific to monetization of misinformation about science, there is some limited evidence that points to financial incentives behind spreading it in venues dedicated to alternative health. Alternative health websites (discussed in more detail in Chapter 4) that spread misinformation may have commercial interests in promoting alternative remedies and wellness products for various health conditions (Baker et al., 2024), often by linking to affiliate sites (Moran et al., 2024). Another example is “The Non-GMO Project,” a non-profit organization that, for a fee, provides verification and labeling for non-genetically modified organism (GMO) products, including for large retailers. Studies find that consumers are willing to pay more for food with a non-GMO label (McFadden & Lusk, 2018), and as of 2019, more than 3,000 brands, representing over 50,000 products and netting more than $26 billion in annual sales, had been verified with the non-GMO label (Ryan et al., 2020). Additionally, the Non-GMO Project’s websites and blogs state, for example, that “the science on GMOs isn’t settled” (Waddell, 2023), despite international scientific consensus about the safety of GM foods for human health, including from the National Academies of Sciences, Engineering, and Medicine (2016a), the World Health Organization (2014), and the European Commission (2015).
While these examples are suggestive, systematic analyses of the monetization of misinformation are rare. In one study, Herasimenka et al. (2023) analyzed the websites of 59 different groups demonstrated to be involved in communicating misinformation about vaccine programs and found that a large majority showed evidence of monetization. The authors noted that appeals for donations were the most common strategy used, followed by sales of information products and merchandise (including health supplements), and finally by third-party advertising and membership dues. Another study by Broniatowski et al. (2023a) compared the website links shared by anti-vaccine and pro-vaccine Facebook groups, finding that while monetization strategies—particularly embedded ads—were nearly universal, pro-vaccine pages were more likely to share links to monetized sources than anti-vaccine pages. This was largely due to the tendency of pro-vaccine pages to link to news websites, which are heavily monetized; when examining non-news sites separately, sites shared by anti-vaccine actors were more highly monetized. We identify the monetization motives and strategies of misinformation actors, as well as their effects, as an important area in need of additional research.
As briefly discussed in Chapter 4, public relations strategies are sometimes used to distort scientific evidence and spread misinformation about science in service of business and/or policy objectives. These strategies, discussed in more detail below, include questioning evidence, claiming more research is needed, conducting internal research that confirms pro-industry biases, funding academic research programs, recruiting individual scientists to speak against the weight of scientific evidence, and exploiting journalistic norms. Researchers have shown that these strategies are often part of disinformation campaigns adopted by a range of industries over the last 70 years (Michaels, 2008, 2020; Oreskes & Conway, 2010b). Importantly, public relations companies have been described as not merely carrying out strategies devised by their corporate clients, but also as the creators, developers, and enactors of these strategies (Aronczyk, 2022). Additionally, Aronczyk & Espinoza (2021) argue that science denial and obfuscation in the interest of corporate profits and power may have become institutionalized in part because of the work of public relations firms. It is also important to note that the existing evidence on the role of public relations strategies in the spread of misinformation about science largely reflects studies of the tobacco, fossil fuel, and pharmaceutical industries. In this section, the committee mainly draws upon this literature.
Science historians Oreskes & Conway (2010b), in their book Merchants of Doubt, have written most extensively about the “playbook” that was established by tobacco companies in the 1950s and has since been adopted by a range of industries to manufacture uncertainty surrounding available scientific evidence (see also Michaels, 2008, 2020; Michaels & Monforton, 2005). Critically, the authors describe this playbook as the act of creating debate about the science by questioning the evidence and claiming that more research is needed before acting (Oreskes & Conway, 2010b). For example, Oreskes & Conway (2010b) reported that even though the science was clear that smoking increased the probability or risk of getting cancer and other diseases, the tobacco industry was able to claim that factors other than smoking could be the culprit because not everyone who smoked got cancer. Moreover, as discussed in Chapter 2 of this report, science is
a dynamic, iterative process of discovery that is always evolving. To this end, Oreskes & Conway (2010b) suggest that some industries have taken advantage of the inherent tentativeness of science to create the impression that everything can be questioned and thus nothing about the existing science is certain or resolved.
Further, it has also been reported that, to cast doubt on the dangers of their products, some corporations have either conducted their own research or funded external research that is biased toward predetermined results that support the industry’s position (Oreskes & Conway, 2010b). For example, Oreskes & Conway (2010b) reported that the tobacco industry, under the advisement of a public relations (PR) firm, created the Tobacco Industry Research Committee in 1954 to sponsor independent research on the health effects of smoking, which in practice was weighted toward research identifying alternative explanations for lung cancer, such as stress, infection, and genetics. Decades later, in the 1990s, the NFL was reported to have used a similar strategy with the formation of the Mild Traumatic Brain Injuries (MTBI) Committee to conduct scientific research on the risks of concussions to football players and ways to reduce such injuries (Michaels, 2020). According to Michaels (2020), the committee was largely made up of football insiders, many with conflicts of interest due to financial ties to the NFL, rather than independent physicians or brain science researchers. Additionally, in the early 2000s, the MTBI Committee published a series of peer-reviewed journal articles that were reported to either minimize or deny the dangers of football-induced head injuries. Work by Reed et al. (2021) on industries that conduct their own research shows that such efforts can similarly skew the science in favor of a company’s agenda, reporting that some pharmaceutical companies may choose to omit particular research methods that might substantiate a link between their products and serious health risks.
Another industry strategy that has been documented in the literature is the leveraging of the trustworthiness of academia and/or professional science societies by building connections through funding and partnerships. For example, Oreskes & Conway (2010b) reported that in the 1950s, the tobacco industry established a fellowship program to support research by medical degree candidates, in which 77 of 79 medical schools agreed to participate, and representatives from reputable agencies and associations were invited to its board meetings. The authors also noted that such connections with doctors, medical school faculty, and public health officials can protect an industry’s reputation and, in the case of the tobacco industry, likely secured its role in national conversations related to smoking and health. Likewise, Thacker (2022) reported that since the 1990s, some fossil fuel companies have funded research programs related to energy and climate at elite American universities. Similarly, other scholars suggest that some
pharmaceutical companies have funded programs at institutions of higher education in service of establishing legitimacy (Reed et al., 2021; Union of Concerned Scientists, 2019).
Relatedly, research suggests that some industry-led disinformation campaigns often involve recruiting individual scientists who are willing to speak against the weight of scientific evidence, and as a result, such claims may be given a sense of credibility (Dunlap & McCright, 2011; Oreskes & Conway, 2010b). For example, tobacco industry funding of biomedical research at major universities is reported to have not only provided new data and results that challenged the link between tobacco and cancer, but also created an army of “friendly witnesses” who could provide expert testimony in lawsuits filed against tobacco companies that cast doubt on cigarettes as the primary cause of disease (Oreskes & Conway, 2010b, p. 30). Likewise, with respect to climate science, it has been reported that a cadre of credentialed scientists have been involved in challenging the scientific consensus on global warming, through appearances in the media, hearings, and press conferences, and in their writing (Dunlap & Jacques, 2013; Oreskes & Conway, 2010a). Moreover, scholars have noted that the experts who speak out against scientific consensus may appear to have field-relevant expertise but often do not (Hansson, 2017), and that many of the same experts who challenged the link between smoking and cancer also contested the science on climate change (Oreskes & Conway, 2010a). Legg et al. (2021) have shown that industry scientists may also participate in seemingly independent decision-making bodies and advisory groups to advocate for industry-favorable policies. Additionally, industry scientists frequently serve on science advisory boards of federal agencies, and some researchers have found that industry-majority scientific boards are perceived by the public as biased toward business interests over other priorities, such as human and environmental health (Ard & Natowicz, 2001; Conley, 2007; Drummond et al., 2020).
Another public relations strategy used in industry disinformation campaigns, as documented in the research literature, is the creation of Astroturf or front groups that can act on behalf of corporate interests but whose corporate ties are obscured from public view (Aronczyk, 2022). Astroturf groups are designed to look like popular, grassroots efforts (e.g., to support oil and gas) but are actually a product of corporate public relations (Aronczyk, 2022; Sassan et al., 2023). Additionally, scholars report that these seemingly independent front groups allow corporations to distance themselves from disinformation campaigns (Dunlap & Brulle, 2020; Givel & Glantz, 2001; Williams et al., 2022), and such groups have been leveraged to promote climate change denial (Aronczyk & Espinoza, 2021), to spread disinformation about the dangers of tobacco use (Givel, 2007), and to market opioids (Ornstein & Weber, 2011).
Finally, media coverage has also been shown to play a central role in the strategies that some industries may use to manufacture debate around science issues, including efforts that exploit journalistic norms and practices of covering both sides of a debate in the interest of balance and objectivity, which in some cases can promote false balance in news reporting, as previously discussed in Chapter 4 of this report. Additionally, work by Armstrong (2019) has shown that through the efforts of their public relations firms, some industries have also been effective at distorting the broader media narrative around science issues. Moreover, research has shown that in addition to earned media, some industry disinformation campaigns have also involved paid advertising in traditional and social media to target policymakers and the public with false and misleading information. Some examples of this strategy have been documented with respect to the fossil fuel industry, whereby scholars report that paid ads have been used to downplay the risk and seriousness of climate change, promote fossil fuels as a necessity, and shift responsibility for climate change to individual consumers (Holder et al., 2023; Supran & Oreskes, 2017, 2021). Additionally, although the focus of this section is on industry strategies, activist movements have also been shown to rely on similar media strategies to challenge scientific consensus. For example, Lynas et al. (2022) reported that anti-GMO activist networks have been able to seed misinformation about GMOs in online news stories, often by relying on scientists who make statements that question the scientific consensus around the safety of GMOs.
Misinformation can also spread when people are overwhelmed by information and unsure who or what to trust, or when they are searching for answers but cannot find credible information. The stakes are especially high during emergencies, when misinformation spread and uptake can have significant consequences for public health and safety. Understanding the dynamics of misinformation spread is especially important for managing infodemics that occur during fast-moving environmental and health crises. As previously mentioned, infodemics are characterized by an abundance of information (both accurate and inaccurate) as well as by information voids, which are created when public demand for high-quality information is high but supply is low (Chiou et al., 2022; Purnat et al., 2021), and both conditions can enable misinformation to spread more easily.
Relatedly, “data voids,” which occur when search engine queries on a topic return few or no results, such as in the case of breaking news, can also be exploited by bad actors who fill the void with disinformation (Golebiewski & boyd, 2019; also see Chapter 3). Tripodi (2022) has also
documented what is referred to as “ideological dialects,” whereby some groups may strategically use community-specific terms and phrases that, when entered as keywords into a search engine, will primarily return information, including misinformation, that confirms the ideological view of that community. For example, a search using the keywords “illegal aliens” will yield very different results from a search that uses “undocumented workers” as the keywords.
As discussed in Chapter 3, different social groups—especially non-White racial and ethnic groups—have access to and may experience different types and quantities of information, including misinformation, based on the differential positioning of such groups within the contemporary information ecosystem. Further, some efforts to spread misinformation to communities of color have been specifically adapted to exploit the concerns of these communities (Lee et al., 2023). For example, scholars report that vaccine-related misinformation directed at Black communities on social media platforms has included messages that elevate concerns about medical racism and exploitation, as well as ongoing structural inequalities, in order to discourage members of these communities from being vaccinated (Lee et al., 2023). For some Indigenous groups, the spread of misinformation within these communities can be largely driven by national media coverage that then filters down to local issues (Young, 2023b). Further, some misinformation about science often intersects with long histories of extractive science within Indigenous communities, which can exacerbate existing inequalities and social divisions (Young, 2023b; see also Chapter 6). Research on Latino communities has identified “information poverty” linked to the primacy of interpersonal and social media-based information networks as a key driver of the spread and reach of misinformation within this specific community (Soto-Vásquez et al., 2020).
Lack of in-language resources is another example of how social inequalities shape the flow of misinformation about science, given that this lack can create a vacuum that can be exploited and filled with unreliable information (Fang, 2021). Access to quality and reliable information often determines how non-English speakers interact with and rely upon information (Nguyễn & Kuo, 2023). Specifically, non-English-speaking communities in the United States lack access to critical information regarding public health protocols or vaccines due to a lack of available and sufficient language translation and interpretation for healthcare and other social services (Yip et al., 2019). Marginalization that is created by a supply of credible science information that is predominantly in the English language can also have profound and inequitable impacts. For example, in the context of medical and health inequities, Bebinger (2021) found that in March 2020, “patients who didn’t speak much, or any, English had a 35% greater chance of death” during the COVID-19 pandemic. Lack of adequate language translation
and interpretation has also been described as an issue of collective access (Nguyễn & Kuo, 2023; see also Chatman, 1996 on information poverty). When information is only made available in one dominant language, information voids are created for both in-language and culturally relevant translations (Nguyễn & Kuo, 2023). Ryan-Mosley (2021), in looking at Asian American communities, argues that the lack of accurate language translations on websites has created exclusionary, careless, and discriminatory online environments. To this end, many community-based groups and organizations, though often under-resourced, have stepped in to fill this gap by making their own in-language guides and materials (Nguyễn & Kuo, 2023; also see Chapter 7 for more discussion).
It is imperative to note that translation work is not a direct one-to-one process due to the cultural, contextual, dialectal, and technological characteristics of information, and as such, the process of translation may inadvertently change the context and meaning of the original information. For example, in the Spanish language, when discussing “healthcare” there are specific phrases in reference to the general system, coverage, insurance, and literal care; similarly, translating the word “advocacy” may create debates, given that existing words in the Spanish language do not adequately capture the concept (Equis Research, 2022). English dominance in the keywording processes of knowledge production also limits what is searchable, since bits of misinformation and disinformation translate differently or may be described differently across other languages (Nguyễn & Kuo, 2023). This in turn creates a bottleneck in the accessibility of translated misinformation and disinformation, let alone in the accessibility of empirical research about misinformation associated with mistranslation and out-of-context interpretation. Additionally, there are high costs associated with translation work, and major hurdles to ensuring that the translator (whether human or artificial intelligence) has the requisite expertise, including in-depth knowledge and analysis of regional and temporal dialects. Consequently, there is also a variety of unaddressed misinformation in non-English languages, given the lack of investment in robust content moderation on the part of social media companies, the labor-intensive work required of human translators, and the unreliability of machine language translations (Nasser, 2017; Nicholas & Bhatia, 2023). Misinformation in the Spanish language that targets Latinos in the United States has been noted as a particular problem that is, in part, due to limited fact-checking of non-English language content on social media platforms (Sanchez & Bennett, 2022).
While the nature of misinformation about science varies across issues, there are some common rhetorical themes that recur regardless of the issue
and source, and that are used strategically by purveyors of disinformation, including within some industries, governments, and activist campaigns. Diethelm & McKee (2009) identify five elements that are commonly used in arguments to challenge a scientific consensus; these include claiming of conspiracies, use of fake experts, selective use of evidence (cherry picking), imposing impossible standards for research, and using logical fallacies. These five characteristics are also known by the acronym FLICC: Fake experts, Logical fallacies, Impossible standards, Cherry-picking of evidence, and Conspiracy theories (Cook, 2020).
The first, claiming of conspiracies, occurs when any agreement among scientists is attributed to a conspiracy among elites to suppress the truth. The second characteristic is the use of fake experts (i.e., scientists who appear to have relevant qualifications but whose views are completely contradictory to established knowledge [as discussed above]), which can be accompanied by the denigration of scientists whose research findings support the established consensus. For example, such scientists can be subject to harassment and intimidation, through verbal attacks on their credibility, as well as through lawsuits and Freedom of Information Act requests (Levinson-Waldman, 2011; Quinn, 2023; also see Chapter 8 for more discussion). The third characteristic is selectivity, or cherry-picking evidence to support an anti-consensus position or reject well-conducted research that reaches undesirable conclusions. The fourth characteristic involves imposing impossible standards for what research can deliver. One example described by Diethelm & McKee (2009) is when arguments denying the reality of climate change point to the absence of accurate temperature records prior to the invention of the thermometer. Similarly, some activist campaigns commonly include calls for more research to establish the safety of vaccines, particularly randomized controlled trials. However, withholding lifesaving vaccines from a control group would be considered unethical; thus, such research may be impractical if not impossible (Kata, 2012). The fifth characteristic is the use of misrepresentation and logical fallacies, such as red herrings, straw men, and false analogies (Diethelm & McKee, 2009).
Research also reveals additional tactics and tropes (e.g., appeals to personal values, using de-contextualized scientific claims to support inaccurate beliefs) that are commonly used within specific communities or as part of a similar approach to spread misinformation. Importantly, these additional strategies have been documented most extensively for the topic of vaccination. Furthermore, these tactics reflect and often exploit key features of the contemporary information ecosystem (discussed in Chapter 3), such as audience fragmentation and context collapse, both of which facilitate exposure to competing narratives about science and can lead to differences in who people see as trustworthy sources of science information. One common trope involves arguments against vaccination that frequently center on values like individual freedom and choice and highlight concerns about government intervention (Broniatowski et al., 2020; Hoffman et al., 2019; Hughes et al., 2021; Kata, 2012; Moran et al., 2016). Such concerns are often reported to be associated with an expressed mistrust of the scientific community (Hoffman et al., 2019). Appeals to civil liberties with respect to the topic of vaccination have also developed alongside increasing social media activity promoting state-level mobilization against vaccine mandates (Broniatowski et al., 2020).
Another common trope associated with the spread of misinformation about science is to encourage “doing your own research” (DYOR), which urges people to seek out additional or alternative information to verify facts and evidence before making decisions (Carrion, 2018; Hughes et al., 2021; Kata, 2012; Tripodi et al., 2024). While it is reasonable and even desirable to seek out more information and verify facts and evidence, the DYOR tactic is not actually in support of a reasonable quest for more information. Rather, the call to DYOR can reflect and sow doubts about substantiated or more settled science. It is consistent with reduced trust in public institutions (i.e., the absence of trust necessitates independent verification; Luhmann, 1979) and with post-modernist thinking, whereby truth is seen as contestable and reflective of one’s own lived experiences, with the implication that doctors, scientists, and other officials may not have all the answers (Carrion, 2018; Kata, 2012). Those who embrace the DYOR perspective may adopt epistemologies that are not bound by expectations of internal consistency or burden of proof (Birchall & Knight, 2022; Carrion, 2018), and may also exhibit an overreliance on people who are not scientific experts as key sources for science- and health-related information (Baker et al., 2024; Hughes et al., 2021; Kata, 2012; Nichols, 2017). Additionally, survey research conducted by Chinn & Hasell (2023) suggests that people who endorse the idea of “doing your own research” are more likely to hold misbeliefs about COVID-19 and are less trusting of scientific institutions.
In some cases, purveyors of misinformation may promote “inaccurate narratives” by extracting accurate information from its original context and aggregating it in specific ways (e.g., clipping livestreams, selectively sharing scientific preprints; Wardle, 2023). Examples of this strategy may even be found in the scientific literature: a study that re-analyzed published research rejecting the consensus on anthropogenic climate change revealed that “[a] common denominator [in such research] seems to be missing contextual information or ignoring information that does not fit the conclusions” (Benestad et al., 2016, p. 699). Other scholars have also noted how particular talking points and “patterns of information” allow for subtle segmentation of populations whereby through the use of precise
keywords people can selectively search for material supporting particular (accurate and inaccurate) narratives (Tripodi, 2022, p. xiii). Moreover, this can also give the impression that a person is “doing their own research” (Tripodi, 2022).
Strategies to spread misinformation about science are also not limited to verbal rhetoric. Visuals, including memetic images that circulate widely online, are also used strategically to misrepresent science. For example, some anti-GMO campaigns have used images of needles inserted into fruit and surreal depictions of plant hybrids (i.e., “Frankenfood”) to convey the unnaturalness and questionable safety of GM crops (Clancy & Clancy, 2016). Specific examples of misleading imagery relating to the topic of climate change have also been reported (see Lewandowsky & Whitmarsh, 2018).
As discussed above, political, ideological, and/or economic motivations may drive some institutions and groups to spread misinformation about science. However, the motivations that might drive individuals to spread misinformation are less well understood. This is a relatively understudied area, and most of the research on individual motivations focuses on misinformation about politics or “fake news” in social media contexts. But within the extant research, multiple motivators that drive the spread of misinformation among individuals have been described, one being monetization, as discussed above. Others described in the sections that follow include confusion and inattention, social motivations, partisan motivations, persuasion and activism, emotion, and disruption (i.e., a desire to generate chaos).
Most people want to share accurate information (Pennycook et al., 2021), and reputational concerns typically discourage people from sharing false content (Altay et al., 2020; Waruwu et al., 2021). Thus, in some cases, people may share misinformation because they are unable to discern that it is false, either because of a lack of digital literacy skills (Guess et al., 2020) or because of motivated reasoning that leads some individuals to uncritically accept information that comports with their existing beliefs (Pereira et al., 2023; Peterson & Iyengar, 2021; Taber & Lodge, 2006; Vegetti & Mancosu, 2020). Some misinformation sharing has also been shown to be confusion-based (Pennycook et al., 2021). Yet, even when people can correctly discern the accuracy of information, they may still share misinformation, in part, because their attention is focused on factors other than accuracy. Priming people to attend more closely to the accuracy of social media content can reduce misinformation sharing, lending
support to this inattention explanation (Pennycook & Rand, 2022a; Pennycook et al., 2021).
While confusion and inattention account for some misinformation sharing, people also often knowingly and intentionally share misinformation. In the United States, in 2016, 14% of adults reported sharing a political news story online that they knew at the time was made up (Barthel et al., 2016). Similarly, a 2018 survey of British social media users found that 17.3% of those who share news on social media admitted to sharing news in the past month that they thought was made up when they shared it (Chadwick & Vaccari, 2019). Individuals who intentionally share misinformation may be motivated by a complex constellation of social and psychological factors. Sharing information is an inherently social process; for example, people share information to improve their social status and to build and maintain relationships (Bobkowski, 2015; Bright, 2016). Specific motivations for sharing information on social media that have been reported include self-expression and the desire to inform, influence, provoke, entertain, or connect with others (Chadwick & Vaccari, 2019). Thus, many of the same reasons that people share accurate information extend to misinformation—they want to pass along interesting and useful content, express themselves, spark conversation and affiliate with others, and show that they are “in the know” (Apuke & Omar, 2021; Chen et al., 2015, 2023; Yu et al., 2022).
People may also share false information, even if they suspect it may be false or are unsure of its veracity, if they think it could benefit or protect someone from harm (Duffy et al., 2020). This altruistic motive to help or warn others has been found to be a strong predictor of sharing misinformation about COVID-19 on social media in Nigeria, where altruism is a strong cultural trait (Apuke & Omar, 2021), as well as a strong predictor of the willingness to share food-safety rumors among Chinese WeChat users (Seah & Weimann, 2020). Focus groups conducted in Africa similarly revealed that sharing misinformation, including health misinformation, is motivated by a “civic duty” to create awareness and warn others about issues of public concern (Chakrabarti et al., 2018; Madrid-Morales et al., 2021). This is often coupled with a “just in case” attitude, whereby people feel that the utility of the information, if it ends up being true, makes it worth passing along despite its questionable credibility (Madrid-Morales et al., 2021). In addition, community norms can play a powerful role in the spread of misinformation (DiRusso & Stansberry, 2022; Kata, 2010). That is, if misinformation is widely accepted within a particular community, community members will be more likely to share it. Moreover, if information that is shared violates community norms or comes from sources
deemed untrustworthy by the group, that information and its source may be disparaged (DiRusso & Stansberry, 2022).
People also share misinformation to generate social engagement online, and this has been shown to be motivated by the positive social feedback that is built into the structure of social media platforms, such as likes and comments, which can overwhelm the motivation to share accurate information (Ren et al., 2023). Additionally, it has been reported that people expect that conspiracy theories will generate more engagement than factual content; this may be due to the strong emotional valence of conspiracy theories (Albertson & Guiler, 2020; van Prooijen et al., 2022a). Moreover, social media environments facilitate social feedback, which may habituate social media users to share dubious information in anticipation of social rewards (Ceylan et al., 2023; Ren et al., 2023). Habitual social media sharers are conditioned to share information that attracts others’ attention and as such may do so without concern for accuracy, even when they are primed to consider accuracy and even when the information contradicts their personal views (Ceylan et al., 2023). Other recent evidence suggests that when people simply think about whether to share content on social media, this can actually distract from their ability to discern the accuracy of that content due to a shift in their attention to non-accuracy-related motivations and factors that drive sharing choice (Epstein et al., 2023). In other words, users may develop a social media mindset as described in Epstein et al. (2023), that is characterized by prioritizing content sharing and personal motivations for sharing content over assessing the accuracy of content.
Some people may also share misinformation in order to expose it as false (Metzger et al., 2021). For example, findings from focus groups conducted in Spain revealed that people sometimes share misinformation with the intent to correct or critique it (Ardèvol-Abreu et al., 2020). In Denmark, tweets containing misinformation about the COVID-19 mask debate were often shared to reject the misinformation; yet many of these tweets were also reported to have used humor to stigmatize or mock the misinformation spreader rather than to engage using substantive arguments (Johansen et al., 2022). This type of sharing behavior can inadvertently contribute to confusion in the information environment, especially when no effort is made to correct the false or misleading claims (Johansen et al., 2022).
Individuals who intentionally share misinformation may also be driven by partisan motivations (i.e., individuals may share misinformation that supports their political in-group to express their partisan identity and associate with like-minded others [Marwick, 2018]). This is likely tied to the anticipated social rewards derived from sharing misinformation (as
described above), and in such cases, the identity that is signaled by information is more important than its accuracy. Misinformation that supports one’s in-group does not pose the same reputational costs as other types of misinformation (Waruwu et al., 2021). One of the few studies that has systematically analyzed various individual-level motivations for misinformation sharing found that partisan motivations are central (Osmundsen et al., 2021). The study found that on Twitter in 2018–2019, individuals who strongly identified with a political party were more likely to share content from politically congenial “fake news” sites, and this was reported to be potentially driven by their hostile feelings toward political opponents (Osmundsen et al., 2021). On the other hand, Osmundsen et al. (2021) did not find that poor reasoning skills (in contrast to Pennycook & Rand, 2019) or apolitical trolling drove “fake news” sharing; however, political cynicism was positively related to sharing “fake news” sources affiliated with both political parties. Following from these results, the sharing of misinformation about science topics that are subjects of political debate, like climate change or masks, might be motivated by political animus. Partisan motivations can also facilitate the spread of misinformation about science, in part because of the loss of trust in some scientific institutions (see Chapter 3).
Individuals also share misinformation with the intent to persuade or influence others. For example, a study based on representative surveys in six Western democracies, including the United States, found that a primary reason individuals are willing to share social media posts containing conspiracy theories about immigration and COVID-19 is that they are convinced by or agree with the misinformation and feel the message needs to be told to others (Morosoli, 2022b). In other cases, individuals who share misinformation about science may be motivated by activism or the desire to create social change (Perach et al., 2023). Scholars also argue that misinformation can serve as a catalyst of social movements (Earl et al., 2021), whereby some activists may use it to raise awareness, amass support for their cause, build community, and promote collective action (Moran & Prochaska, 2023), including around science issues (Kata, 2012; Lynas et al., 2022; Seymour et al., 2015). In the digital era, scholars also note a rise in “participatory propaganda,” whereby persuasive online messages that originate with political, corporate, or other strategic actors are then passed on by receptive target audiences to their broader social networks, thus increasing the reach and potential influence of the original message (Lewandowsky, 2022; Wanless & Berk, 2019). Target audiences can also play a more active role by finding evidence and creating content that fits existing misinformation narratives and frames, which can then be amplified by elites and those with large followings in a cycle of participatory disinformation (Starbird et al., 2023).
Emotions, and particularly negative emotions, are also associated with misinformation sharing. Passing along negatively charged misinformation—i.e., “bad news”—may be a way for some people to manage their own uncertainty and anxiety (Wang et al., 2020). For example, anxiety is a predictor of willingness to share misinformation (as well as accurate information) about COVID-19 (Freiling et al., 2023). In China, exposure to food-safety-related misinformation has been shown to trigger negative emotions that lead to more frequent sharing of that misinformation through both online and face-to-face communication channels (Wang et al., 2020). Fear and anger have also been reported as motivators for sharing misinformation about science (Ali et al., 2022). However, negative emotions are not the only emotions that are associated with online sharing. Paletz et al. (2023) found that several different discrete emotions are associated with online sharing, including both positive (happiness) and negative (anger, sadness, fear) emotions, as well as emotions that differ in their levels of arousal or emotional activation (amusement and pride).
Finally, some people may also share misinformation to disrupt the social order and inflict chaos. Individuals who engage in online trolling have been described as “agents of chaos on the Internet” (Buckels et al., 2014, p. 97), due to deceptive, destructive, and/or disruptive online behaviors—including sharing misinformation. Such individuals may seek to offend and engender negative emotional responses from their targets, often purely for the “lulz,” i.e., because they find it funny (Marwick & Lewis, 2017). Buckels et al. (2014) also reported that those who engage in online trolling may derive enjoyment from victimizing others as signaled by high levels of sadism.
In some cases, however, individuals want to create chaos for more instrumental purposes. “Need for chaos” is a dispositional mindset that reflects a desire to gain status by disrupting the established order (Petersen et al., 2023). It has been reported that people who have a high need for chaos may feel socially and economically marginalized and, in turn, may direct animosity toward elites and people of all political allegiances (Petersen et al., 2023). Such individuals may also be motivated to spread hostile rumors targeting political elites in order to destroy the existing social order
(Petersen et al., 2023). Thus, unlike those who are motivated to share misinformation due to a particular partisan identity (Osmundsen et al., 2021), those with a high need for chaos may share misinformation regardless of which party it helps or hurts, as they want to stoke social conflict and damage the entire system (Petersen et al., 2023). Furthermore, sometimes state-sponsored actors use similar techniques, presumably in an attempt to erode trust in their adversaries’ institutions, such as when messages both promoting and opposing vaccination were reported to be shared from troll accounts operated by the Russian Internet Research Agency (Broniatowski et al., 2018).
In sum, existing research reveals an array of sometimes competing and often overlapping individual-level motivations for misinformation sharing. Inconsistent findings across studies are likely attributable, at least in part, to differences in research methodologies. For example, Pennycook et al. (2021), who found inattention to be a leading explanation for misinformation sharing, studied intentions to share false headlines in an experimentally contrived social media context. On the other hand, Osmundsen et al. (2021), whose results pointed to partisan motivations as a key driver for misinformation sharing, combined actual behavioral sharing data with survey responses, but tracked the sharing of misinformation at the source level rather than at the story or headline level. To date, research highlighting altruistic motives largely reflects self-reported data. More research is needed to examine the robustness of these findings and to better understand how motivations may vary based on contextual factors and individual differences, as well as whether motivations for sharing misinformation specifically about science may vary from those driving the spread of political misinformation and unreliable news, for example.
Motivations are important to understand because they could inform potential interventions. Inattention to accuracy could be overcome with accuracy reminders or nudges (Pennycook et al., 2021). If people share misinformation due to altruistic motives, fact-checking may be helpful. However, if people share misinformation to signal their political affiliation, to hurt political opponents, to create chaos, or to earn money, accuracy- or fact-checking-based interventions will not be effective. To reduce misinformation sharing motivated by partisan or ideological bias, interventions may need to target polarization and/or mistrust in the political system (Van Bavel et al., 2021). Likewise, if people are motivated by the social reward structure on social media to post misinformation due to its engagement potential, a solution may be to change the incentive structure to reward the sharing of accurate information (Ceylan et al., 2023; Ren et al., 2023).
Finally, we note that motivations may be linked to the specific type of misinformation in question. Many of the instances of misinformation about science discussed result from a substantial profit motive. The target audiences of such misinformation may be more likely to believe it due to mistrust of the medical establishment and the imperative to find working treatments when conventional medicine has failed. The long-term efforts of industry, government, and other actors to obscure risks to public health and/or the environment also may lead audiences to be skeptical of consensus claims of safety (Goldenberg, 2016). Other types of misinformation about science with less obvious commercial origins (e.g., astrology) may connect with target audiences’ shared identities grounded in interest in spirituality (Smith, 2023).
The spread of misinformation about science is facilitated by key factors in the contemporary information ecosystem as well as by common strategies used to undermine credible science information. Digital communication technologies can facilitate the spread of misinformation; however, online platform companies may face mixed incentives to address the problem, since the sharing of any information on platforms, including misinformation, can be lucrative. Furthermore, there are widely adopted strategies to spread misinformation about science, including “manufacturing” doubt, promoting false balance in scientific debates (in part by exploiting journalistic norms requiring coverage of “both sides”), cultivating relationships with scientists who disagree with the prevailing consensus, and creating Astroturf campaigns to generate the illusion of public support and credibility. Additionally, some of the recurring themes in misinformation about science that have been identified include: claims of conspiracies among scientific, government, and corporate elites; the use of fake experts with questionable or nonexistent credentials; cherry-picking evidence; calling for impossible evidentiary standards to support scientific agreement; and denial of the weight of the scientific evidence using logical fallacies.
Purveyors of misinformation about science also commonly appeal to individual liberties and encourage followers to “do their own research” (i.e., to seek out sources that contradict the weight of the evidence on science issues). For individuals, major motivations for spreading misinformation can include financial gain, confusion or inattention, maintenance of social ties, signaling of partisan affiliation, persuasion of the unconvinced, management of negative emotions, and disruption of the social order. Finally, race, ethnicity, language, and social class (as well as other demographic characteristics of individuals and communities) constitute important determinants of the spread and reach of misinformation, with under-resourced communities and communities of color having disproportionately less access to reliable information and other resources that could fill information voids and more effectively build resilience against misinformation that is specifically tailored to these groups.
Conclusion 5-1: Individuals share information for a variety of reasons—for example, to improve their social status, to express a particular partisan identity, or to persuade others to adopt a certain viewpoint. Individuals may inadvertently share misinformation in the process of sharing information, and this may be due to their confusion about the credibility of the information, their inattention to accuracy, or altruistic efforts to help or warn loved ones, among other reasons.
Conclusion 5-2: In some cases, individuals and organizations may knowingly share misinformation to profit financially, to accrue social rewards (e.g., followers and likes), to accrue and maintain power, to erode trust, or to disrupt existing social order and create chaos (e.g., trolling). These motivations may be especially incentivized in social media environments.
Conclusion 5-3: The spread of misinformation about science through social networks on social media and through online search platforms is affected by design and algorithmic choices (e.g., those shaping individualized feeds based on prior platform activity), permissive and loosely enforced or hard-to-enforce terms of service, and limited content moderation. Moreover, platform companies may not voluntarily implement approaches to specifically address such issues when they are in conflict with other business priorities.
Conclusion 5-4: Science has traditionally been recognized as an authoritative civic institution that produces many benefits for individuals, communities, and societies. Yet, at times, scientific authority has been co-opted by individuals and organizations feigning scientific expertise, and by science and medical professionals acting unethically in ways that contribute to the spread of misinformation about science (e.g., speaking authoritatively on scientific topics outside of one’s area of expertise).