Department of Sociology and Work Science

Obstruction Through Conspiracy
A Computational Multimodal Content Analysis of TikTok Climate Misinformation Narratives

By: Victoria Vallström
Degree essay: 15 credits
Course: SC2502 Magister thesis in Sociology
Level: Graduate
Term: Spring 2025
Supervisor: Anton Törnberg
Examiner: Hans Ekbrand
Characters incl. spaces: 106,879
Word count: 14,962

Abstract

Keywords: climate misinformation, computational multimodal analysis, conspiracies, multimodal framing, TikTok

This thesis investigates how climate-related conspiracy narratives are constructed and emotionally mobilized on TikTok. Addressing research gaps concerning conspiracy-related climate claims as well as visual and multimodal misinformation, the study explores how TikTok’s distinctive multimodal features, which combine visual elements, sound, and text, shape the construction and emotional appeal of such claims and their potential influence on climate-related attitudes and actions. The research is guided by two questions: (1) What topics emerge in the TikTok video corpus on climate and weather modification? (2) How are conspiracy claims constructed through visuals, text, sound, and emotion, and what interpretive and emotional orientations do they invoke? Methodologically, the study employs a mixed-methods design that combines large-scale computational multimodal topic modeling with qualitative framing analysis. Using a dataset of 7,658 TikTok posts collected through keyword- and hashtag-based scraping, the study first applies unsupervised embedding models (CLIP, Sentence-BERT, and OpenL3) together with BERTopic to identify 27 thematically coherent topics across visual, textual, and audio modalities. A subset of 1,220 posts is then analyzed in depth through multimodal conspiracy framing, drawing on theories of framing, conspiracy, social semiotics, and the sociology of emotion.
The findings reveal a distinct form of climate conspiracy on TikTok: one that affirms the reality, urgency, and human causation of climate change, but attributes it to covert technological interventions by powerful actors, typically state institutions. Rather than denying climate change, these narratives frame it as deliberately engineered. Key findings include: (1) these claims fall outside established climate misinformation taxonomies; (2) they invert dominant contrarian arguments by flipping the script, potentially complicating detection efforts; and (3) they redirect attention through epistemic emotions like curiosity, prompting investigation and “dot-connecting”. This emotional redirection sustains conspiracy engagement and may divert attention from constructive climate responses. Enabled by a multimodal and computational approach, these insights expand our understanding of how climate misinformation evolves and adapts to audiovisual platforms like TikTok.

Acknowledgements

I would like to extend my deepest thanks to my supervisor, Anton Törnberg. Thank you for generously sharing your expertise in computational methods, online studies, and climate misinformation, and for continuously encouraging and pushing me toward creative (and challenging) approaches. Your ability to challenge while also supporting made all the difference, especially when navigating unfamiliar techniques. I’m also incredibly grateful for the opportunity to be part of the climate misinformation research project this past year; it has been both a privilege and a tremendous source of inspiration. Thanks to Noah Björelius for the excellent and challenging review, and to Hans Ekbrand for your insightful questions on method and interpretation. The feedback sharpened the analysis, deepened my thinking, and strengthened the articulation of the methodological approach. Special thanks to my fellow master’s students who, knowingly or not, shaped this thesis in meaningful ways.
To Juliana, the TikTok content marketing wizard: your deep insights into the platform were invaluable. To Tilda, for continuously sharing what was happening on TikTok with such enthusiasm that it felt like real-time ethnography. And to Ottavia, who enlightened me about the glorious messiness, and generational cringe, of hashtags. Each of your contributions helped me understand the platform more fully. I also owe a surprising amount to my AI sidekick, ChatGPT, who patiently helped me debug broken code and stopped me from falling into Python-induced despair. And finally, to Simon: thank you for your unwavering support, for allowing me to disappear into my work and for gently pulling me out when needed. Lukas and Noah, thank you for your humor, hugs, and wild questions about weather machines. Your curiosity reminded me of why this all matters. Stella, my four-legged companion, your warm head resting on my feet under the desk kept me grounded. And to the deep forests of Gothenburg: thank you for being my reset button, and sometimes the only place where everything made sense.

– Victoria, Gothenburg, June 2025

Table of Contents

INTRODUCTION
  AIM AND RESEARCH QUESTIONS
BACKGROUND TIKTOK
PRIOR RESEARCH
  CLIMATE OBSTRUCTION AND CLIMATE CLAIMS
  CONSPIRACY THEORIES
  TIKTOK & CLIMATE MISINFORMATION
  MULTIMODAL MISINFORMATION
THEORY
  FRAMING AND MEANING MAKING
  CONSPIRACIES
  SOCIAL SEMIOTICS
  SOCIOLOGY OF EMOTIONS
METHOD
  METHODOLOGICAL APPROACH AND RESEARCH DESIGN
  COMPUTATIONAL GROUNDED THEORY
  EMPIRICAL DATA SAMPLING AND COLLECTION
  DATA PROCESSING AND ANALYTICAL APPROACH
  RELIABILITY, VALIDITY
  ETHICS
FINDINGS & ANALYSIS
  PANORAMIC OVERVIEW OF TOPICS
  MULTIMODAL FRAMING ANALYSIS OF CONSPIRACY CLAIMS
CONCLUSION
DISCUSSION
REFERENCES
APPENDIX

Introduction

A TikTok clip featuring the floating power plant Osman Khan transitions to a map displaying spiky triangular radar signals off the coast of Valencia. Loud, ominous background music accompanies a text overlay that simply reads: “Look how interesting.” (Fig. 1). No explicit claim is made; yet the interplay of visuals, soundtrack, and caption insinuates that the ship manipulated the weather and caused the 2024 Valencia flood.

[Fig. 1 (a), (b)]

Circulating in various iterations following the disaster, sometimes tagged with #weathermanipulation and sometimes not, this conspiratorial misrepresentation of a severe, climate change-induced flood as the result of a covert, deliberate human operation gained significant traction (BBC, 2025). This not only exemplifies TikTok’s multimodal and memetic communication, where meaning is shaped through subtle combinations of visuals, text, and sound and amplified through imitation, but also illustrates the specific form that climate misinformation tends to take on the platform: conspiratorial climate claims (Baltasar et al., 2024), often peaking in response to extreme weather events (CAAD, 2024).
Climate disinformation and misinformation, whether intentional or not, continue to evolve, often downplaying human responsibility for the climate crisis and contributing to delays in climate action. These claims utilize a range of strategies, from outright denial of anthropogenic climate change to more subtle forms that acknowledge its existence but cast doubt on the severity of its impacts, the urgency of response, or the feasibility of proposed solutions (Ekberg et al., 2023; Coan et al., 2021). Such claims contribute to broader discourses of delay (Lamb et al., 2020), which foster inaction by promoting doubt, uncertainty, false balance, or technological optimism (Vowles, 2024) and are increasingly central to contemporary climate obstruction (Ekberg et al., 2023). While social media platforms have been shown to act as superspreaders of climate disinformation and misinformation (Treen et al., 2020), and there is growing interest in large-scale data analysis, most research to date remains concentrated on mainstream, text-centric platforms such as Twitter and Facebook (Böhm & Pfister, 2024; Bloomfield & Tillery, 2019; Mahl et al., 2023). Even studies engaging with more visually oriented platforms like YouTube or Instagram tend to adopt a text-first lens, overlooking the role of images, video, and sound in meaning-making (Pearce et al., 2020; Brennen et al., 2021). This reflects a broader tendency in misinformation research to neglect visual and multimodal communication (Geise & Baden, 2015; Yang, 2023), leaving critical gaps in our understanding of how misinformation is constructed on audiovisual platforms like TikTok. TikTok’s multimodal affordances—combining visuals, audio, text overlays, and soundtracks—not only enable the spread of climate misinformation but also pose unique challenges for detection and analysis.
Despite its rapid growth and influence, especially among Gen-Z users, the platform remains significantly underexplored in climate misinformation research. Existing studies are often limited by methodological constraints, focusing on single modalities like text or metadata, or relying on small, manually coded samples. While early findings suggest that overall levels of climate misinformation on TikTok may be relatively low (Basch et al., 2022; Duan et al., 2024), when it does appear, it clusters around conspiratorial claims of weather manipulation (Baltasar et al., 2024). More broadly, climate conspiracy claims represent a significant yet under-researched form of contrarian discourse in online environments (Coan et al., 2021). Consequently, we still know little about how TikTok’s distinctive multimodal features shape the construction and circulation of this form of climate misinformation.

Aim and Research Questions

The aim of this study is to investigate how climate-related conspiracy theories are constructed on TikTok, focusing on weather-related content in a large corpus of videos and guided by the following questions: (1) What topics emerge in the TikTok video corpus on climate and weather modification? (2) How are climate and weather conspiracy claims constructed through visuals, text, sound, and emotion, and what interpretive and emotional orientations do they invoke? Through a mixed-methods approach that combines computational and qualitative multimodal analysis, the study enables both large-scale pattern identification and close interpretive insight. It examines how visual elements, overlay text, narration, hashtags, music, and emotional cues interact to shape the meaning and persuasive appeal of this content. In doing so, it explores how misinformation about climate change is produced—and how it may foster interpretive frameworks and emotional responses that hinder climate action.
Empirically, the study identifies novel forms of climate misinformation, concerning both their multimodal construction and the nature of the claims themselves. Methodologically, it demonstrates the feasibility of computational multimodal content analysis. Theoretically, it broadens the understanding of climate misinformation to encompass multimodal and conspiratorial forms. Thus, it provides critical insights into the evolving nature of climate misinformation and emphasizes TikTok’s unique role in this ecosystem, including its potential to contribute to delays in climate action. The thesis begins by contextualizing TikTok’s unique features, followed by a review of prior research on climate obstruction and misinformation, including studies on TikTok, conspiracy theories, and multimodal misinformation. It then outlines the theoretical framework and methodological design, presents the analysis, and concludes with a synthesis of findings and a broader discussion.

Background TikTok

TikTok is a key platform for public discourse, especially among younger users. Since its launch in 2016, it has attracted over 1.5 billion users, capturing significant attention. Sixty percent of users are aged 16 to 24, with particular popularity in the U.S., Southeast Asia, and Europe (Pew Research Center, 2024). TikTok stands out from other social media platforms in several ways. Unlike Twitter, Facebook, and Instagram, which prioritize followers and social connections, TikTok’s algorithm mainly surfaces content based on user behavior rather than social networks (Gerbaudo, 2024). One of its most distinctive features is its mechanism for serendipitous discovery: the “For You Page” regularly introduces users to unexpected content through an exploratory logic that tests diverse videos, including those outside a user’s typical interests, to see what sparks engagement (Zulli & Zulli, 2022).
This increases exposure to novel or fringe ideas, making TikTok not just a site of personalization but also one of algorithmic curiosity. Additionally, TikTok operates on a memetic logic that fosters “imitation publics” (Zulli & Zulli, 2022), with built-in editing tools that encourage users to replicate trending formats by reusing sounds and mimicking visual styles, amplifying virality and prioritizing replication over originality. Moreover, multimodality is both inherent and central to TikTok’s communicative style. The platform is fundamentally visual: creating or reposting a video or image is the entry point for content. These visuals are often modified, most notably with overlay text, and typically combined with music or sound, which serves not only as an aesthetic backdrop but also as a communicative layer. Just as TikTok is “always-visual,” it is also fundamentally a “sound-on” platform: over 85% of TikTok videos contain music, and 88% of users say that sound is essential to their TikTok experience; by contrast, 75% of users watch mobile videos on mute on platforms like Facebook and Instagram (Storykit, 2023). Popular sounds or tracks often carry shared meanings, jokes, or emotional cues across otherwise unrelated content, making music a particularly important mode, a dynamic far less central on platforms like Facebook, Twitter, or Instagram. Additionally, the role of text and hashtags on TikTok differs from other platforms, as text primarily appears as on-screen overlays, while post descriptions and hashtags are visually de-emphasized and often hidden unless actively expanded.

Prior research

This section reviews relevant research on climate misinformation, conspiracy theories, TikTok and climate claims, and multimodal misinformation.

Climate Obstruction and Climate Claims

Climate misinformation is continuously evolving, marked by strategies that delay and obstruct climate action (Ekberg et al., 2023).
Social media platforms have become powerful superspreaders of such content (CAAD, 2024), rapidly amplifying and mainstreaming climate denialist narratives (Brulle, 2021; Treen et al., 2020) and giving those narratives unprecedented visibility, enabling them to flourish beyond traditional political and media spheres (McKie, 2021; Vowles, 2024). This evolving online landscape of climate misinformation intersects with the longer-standing field of climate denialism research, which broadly examines efforts to challenge climate science and obstruct mitigation. Also known as climate contrarianism or climate obstruction, this phenomenon is understood as a reactive force that resists climate action and maintains the status quo (Brulle, 2021; Oreskes & Conway, 2010). A loosely coordinated network of actors (Brulle, 2021), including fossil fuel interests, neoliberal think tanks, and conservative foundations, has systematically worked to spread doubt about climate science and delay policy action (Cook et al., 2019; Dunlap & McCright, 2015; McCright & Dunlap, 2011). Conservative media once played a central role, but social media now dominates in spreading climate misinformation and mainstreaming denialist claims (Almiron et al., 2020, 2022; Holder et al., 2023; McKie, 2021, 2023; Treen et al., 2020). There has been extensive scholarly debate over how to classify climate denial, skepticism, contrarianism, and obstruction, focusing on their nature, content, and strategies. Classifications range from skepticism about climate change itself (Rahmstorf, 2004) to notions of societal denial (Cohen, 2001), the use of neutralization techniques in response to climate change (McKie, 2019), and the lens of public understandings (Capstick & Pidgeon, 2014).
Broader classifications of climate obstruction include economic, political, and psychological barriers at individual, societal, and institutional levels, and may even encompass unintentional obstruction through everyday behaviors (Ekberg et al., 2023). This research further shows that, regardless of classification, the prominence of climate claims has shifted from explicit denial to more subtle forms of disbelief (Ekberg et al., 2023). This shift toward discourses of delay (Lamb et al., 2020)—rhetorical strategies that acknowledge climate change but justify inaction by redirecting responsibility, promoting weak solutions, exaggerating costs, or framing change as unviable—is increasingly central to contemporary climate obstruction, operating through narratives that reinforce the status quo (Ekberg et al., 2023). These include faith in progress and advocacy for future technological solutions, which, while seeming constructive, ultimately delay necessary climate action by fostering a depoliticized sense of optimism rooted in technological faith (Cassegård & Thörn, 2022). Building on this scholarly discussion, Coan et al. (2021) developed a comprehensive taxonomy of contrarian climate claims in online discourse, supported by extensive computational empirical validation (Böhm & Pfister, 2024; Coan et al., 2021; Rojas et al., 2024). Their analysis identifies five overarching “superclaims” commonly used on social media to challenge climate action: (1) it’s not happening, (2) it’s not us, (3) it’s not bad, (4) solutions won’t work, and (5) climate science or scientists are unreliable (Coan et al., 2021, p. 2). Each superclaim includes sub-arguments; for example, claims that downplay extreme weather fall under “it’s not happening,” while those that minimize ecological effects align with “it’s not bad.” Claims that depict climate change as a conspiracy belong to the fifth category, which asserts that climate change is a fabrication and part of a deliberate deception.
In their analysis of over five million climate-related tweets, Coan et al. (2021) also found that conspiracy theories make up about 20% of skeptical tweets, making them the second most common claim after personal attacks on climate figures, which comprised 40%. Notably, users sharing conspiracy claims were particularly active in generating content. The authors also note that conspiracy theories are the leading emotional response to climate change among skeptics.

Conspiracy Theories

With conspiracy claims driving climate misinformation on TikTok (Baltasar et al., 2024), it is essential to examine the literature on conspiracy theories to understand their structure and rhetorical logic. Research on conspiracy theories in digital environments is expansive, interdisciplinary, and rapidly growing, addressing belief formation, spread, impact, and the production, representation, and consumption of online conspiracy content (for a comprehensive overview, see Mahl et al., 2023; Douglas et al., 2019). Understanding online climate conspiracy theories as a form of misinformation requires engaging with the concept of conspiracy itself (Mahl et al., 2023). At its core, a conspiracy refers to “the conviction that a secret, omnipotent individual or group covertly controls the political and social order or some part thereof” (Fenster, 2008, p. 1). However, theorizing about conspiracy has long divided scholars in the debate between the “pathological school” and the cultural perspective. The earlier approach views conspiracy theories as false and paranoid accounts of political extremism and societal anomaly (Hofstadter, 1964), focusing on their role in distorting political reality and promoting hatred and intolerance (Nefes, 2013). Conversely, the more recent cultural perspective considers conspiracies social manifestations that are part of mass culture, rather than mere political disorders, and does not solely concentrate on revealing their falsehoods (Nefes, 2013; Fenster, 2008).
A widely cited definition in the literature on conspiracy theorizing in online environments states that conspiracy theories are “unique epistemological accounts that refute official accounts and instead propose alternative explanations of events or practices by referring to individuals or groups acting in secret” (Mahl et al., 2023, p. 1797). They offer alternative explanations for historical or current events (Douglas et al., 2019; Ylä-Anttila, 2018; Uscinski, 2018), suggesting that powerful actors control seemingly random occurrences, thereby imparting meaning and coherence to a complex world (Fenster, 2008; Muirhead & Rosenblum, 2018). Central to this is the idea of hidden knowledge: conspiracy theories depend on the belief that truth is concealed and can be revealed through heroic investigation and detective-like reasoning (Fenster, 2008; Muirhead & Rosenblum, 2018). This alternative knowledge constitutes counter-knowledge that may not directly contradict facts, is often difficult to disprove, and is not necessarily incorrect (Ylä-Anttila, 2018). Conceptualizing conspiracy theories in this way helps differentiate them from other forms of deceptive content: misinformation, defined as unintentionally false or misleading claims not supported by evidence or expertise (Treen et al., 2020); disinformation, the deliberate spread of falsehoods often driven by political intent (Hameleers et al., 2020); fake news, referring to fabricated reports or used as a label to delegitimize journalism; and rumors, simply unverified information. Although overlaps exist, conspiracies follow a distinct logic (Mahl et al., 2023). More recently, however, Muirhead and Rosenblum (2018) have proposed the notion of new conspiracism, describing a shift toward vague, evidence-free claims that are unconcerned with identifying hidden actors or uncovering covert patterns.
New conspiracism relies on innuendo and ambiguous assertions, such as “people are saying” and “rigged,” avoiding accountability through coy insinuations and inviting open-ended speculation. Unlike traditional conspiracy theories, which aim to make sense of disorder by asserting control behind events, this form erodes institutional trust without providing meaning or guiding collective action. While many conspiracy theories may be false or exaggerated, they are not entirely without foundation. Scholars caution against dismissing them wholesale (Douglas et al., 2019; Fenster, 2008), as there have been real conspiracies, such as the Watergate scandal and the Tuskegee Syphilis Study (Reverby, 2009). The United States has a unique relationship with conspiracy theories, stemming from a history in which the government has sometimes targeted its own citizens, operated in secrecy, or even promoted conspiratorial narratives (Olmsted, 2019; Nefes, 2013). Notably, research often portrays the spread of climate denialism in the United States as a coordinated disinformation campaign orchestrated by the fossil fuel industry (Oreskes & Conway, 2010), exhibiting all the hallmarks of a conspiracy: a covert plot by a powerful few to conceal the truth about climate change for political and financial gain (Uscinski et al., 2017). While the fossil fuel industry is often framed as the conspiratorial actor behind climate misinformation, denialist narratives reverse this logic. Studies of climate conspiracy claims show that they frequently portray climate change as a hoax driven by funding motives or ideological agendas, rejecting anthropogenic climate change and accusing climate scientists of fabricating data, distorting evidence, or corrupting peer review (Uscinski et al., 2017; Uscinski & Olivella, 2017; Muirhead & Rosenblum, 2018; Douglas et al., 2019; Lewandowsky, 2021).
Related work includes studies of the chemtrails conspiracy, which alleges that aircraft release toxic chemicals for purposes such as weather control or population manipulation, often linked to covert government agendas (e.g., Tingley & Wagner, 2017).

TikTok & Climate Misinformation

Despite growing research on climate misinformation and conspiracy theories, platform-specific studies remain uneven, with TikTok notably understudied, largely due to the analytical challenges posed by its multimodal format. As a result, most existing studies rely on small-n, manual content analysis and remain largely descriptive. Early computational work, such as Baltasar et al. (2024), focuses primarily on text-based features like video descriptions and hashtags, omitting other modalities. These studies suggest that TikTok hosts comparatively less climate misinformation than platforms like YouTube and Instagram (Duan et al., 2024), while some non-comparative analyses report lower-than-expected levels (Basch et al., 2022; Baltasar et al., 2024). However, when misinformation does appear, it frequently takes a conspiratorial form. Baltasar et al.’s (2024) analysis of over 8,000 video descriptions found that climate misinformation often clusters around narratives of weather manipulation and chemtrails, commonly tagged with hashtags like #weathermanipulation and #weathercontrol. Yet Hautea et al. (2021) found that hashtags are unreliable indicators of content, given their limited role in content surfacing and practices like hashtag bombing or hijacking, where unrelated or trending tags are used to boost visibility. These conspiratorial narratives are highly reactive, often spiking in response to extreme weather events (CAAD, 2024). Crucially, such events can act as turning points for public attitudes toward climate change.
For instance, Ruttenauer (2024) finds that floods and heatwaves increase belief in climate change, even among skeptics, though these shifts rarely lead to behavioral change. Such moments also trigger intensified climate discourse on social media, as users turn to TikTok to share experiences, opinions, and claims. Emotion plays a key role in this dissemination, as seen during the Australian bushfires, when users employed imitation, humor, and contrast to express complex feelings and cope with fear and uncertainty (Brown et al., 2024). Such emotionally charged responses are consistent with prior research showing that emotions like fear and powerlessness are closely tied to uncertainty and a diminished sense of control (Barbalet, 1998), factors that make fear a particularly salient reaction to extreme weather and the broader climate crisis (Wettergren, 2024).

Multimodal Misinformation

Given TikTok’s highly multimodal nature, where meaning arises from the interplay of visuals, audio, and text, it is crucial to engage with the expanding field of digital multimodal misinformation research. This body of work builds on traditions in communication, media studies, and semiotics, viewing meaning as shaped through the interplay of multiple modes, including image, text, sound, and gesture (Kress, 2010; Dicks, 2014; Marshall, 2022; Brennen, 2012). A mode refers to a socially and culturally shaped semiotic resource for conveying meaning, while multimodality involves the use and interaction of several modes in a communicative act (Kress, 2010; Kress & Van Leeuwen, 2020). These modes are not processed in isolation but operate in dynamic relation to one another, with their combination generating meaning beyond what each could convey alone (Van Leeuwen, 2012; Forchtner, 2023; Geise & Baden, 2015). Media, in particular, is consumed in a multimodal context (Geise & Baden, 2015).
Visuals are especially powerful semiotic resources: they are processed faster than text, evoke stronger emotional responses, and leave more lasting impressions, a phenomenon known as the picture superiority effect (Geise & Baden, 2015; Powell et al., 2015). Congruent text–image or audio combinations enhance understanding and salience, while visual content improves memory and attention (Powell et al., 2015) and exerts a greater influence on political attitudes and behavioral intentions than text alone (Yang, 2023; Hameleers et al., 2020). Furthermore, image-based misinformation tends to be highly repetitive, an insight relevant to understanding virality and content propagation (Yang, 2023). Relatedly, Rossi et al. (2025) show that the same climate-related visuals circulate among both activists and skeptics but evoke contrasting emotional reactions. Visually framed misinformation, the intentional use of images by agents of disinformation to present a misleading or fabricated representation of reality (Hameleers et al., 2020), gains credibility from the indexical quality of images, which seem to provide a direct link to reality (Hameleers et al., 2020). Common manipulative strategies include decontextualization, reframing, and visual or multimodal doctoring, all of which alter perception (Hameleers et al., 2020). Visual framing analysis explores how images are used to emphasize certain aspects of reality, shaping interpretations and public responses (Brennen et al., 2020). Emotions are crucial in this process, as visuals frame complex issues into emotionally resonant, interpretive packages (Powell et al., 2015; Geise & Baden, 2015). Some emerging research offers multimodal perspectives on climate misinformation, such as Forchtner’s (2023) analysis of far-right climate discourse, which examines how visual and textual elements combine to construct environmental narratives.
Drawing on case studies from Europe, the U.S., and Asia, it illustrates how far-right actors use imagery, symbols, and aesthetics to promote ideological positions, employing nostalgic landscapes to evoke national identity, scientific visuals to lend legitimacy, and emotionally charged imagery to heighten engagement. These strategies emphasize how multimodal communication embeds disinformation within persuasive visual frames, making it a crucial reference for understanding the visual and multimodal dimensions of climate disinformation. Törnberg (2024) presents one of the first large-scale multimodal computational framing analyses of climate misinformation, integrating textual and visual data. By analyzing over 17,000 image–text posts from Swedish online climate denialist sources, the study combines computational methods with qualitative framing analysis. It illustrates how denialist actors utilize scientific aesthetics, graphs, statistics, and technical visuals to project objectivity and legitimize skepticism, while representing climate activists through emotional and feminized imagery, framing them as irrational. This dual aesthetic constructs a narrative of depoliticized, rational skepticism versus emotional alarmism, positioning denialist views as neutral facts rather than political ideology. Despite the prominence of conspiracy climate claims online, they remain under-researched and are often discussed only in broad terms, with limited attention to their content and discursive structure (Coan et al., 2021; Douglas et al., 2024; Uscinski et al., 2017; Muirhead & Rosenblum, 2018; Lewandowsky, 2021). The research is predominantly quantitative, with few studies employing mixed or innovative methods (Mahl et al., 2023).
Research utilizing digital methods tends to prioritize mainstream platforms and text-based content (Mahl et al., 2023; Pearce et al., 2020), while image-based and multimodal misinformation, particularly visual and auditory elements, remains significantly understudied, especially by researchers using computational approaches (Brennen et al., 2020; Yang, 2023; Baele et al., 2025). While computational content studies have begun to explore multimodality through text–image combinations, they have yet to account for music, an essential modality for meaning-making on platforms like TikTok. This situation arises from persistent methodological, technical, and theoretical challenges, including difficulties in collecting, storing, and analyzing visual content (Brennen et al., 2021), insufficient technical expertise, limited computational resources, and barriers to accessing platforms (Baele et al., 2025). Furthermore, existing analytical frameworks often prioritize text over images and inadequately theorize multimodal framing (Pearce et al., 2020; Geise & Baden, 2015). These limitations are especially apparent on TikTok. Baltasar et al. (2024), for example, had to exclude narration because music lyrics interfered with automatic transcription, relying only on post descriptions and hashtags and thus omitting key meaning-making modes such as sound, visuals, narration, and on-screen text. As calls grow for research into how modalities shape salience, meaning, and emotional appeal (Geise & Baden, 2015; Kress & Van Leeuwen, 2020; Huber et al., 2022), the emotional power of climate conspiracies, now a dominant emotional response among skeptics, warrants greater attention (Coan et al., 2021).

Theory

This study employs a theoretical framework, which I term multimodal conspiracy framing, to investigate how climate conspiracies are constructed on TikTok. Framing theory serves as the foundation, offering a lens through which meaning is constructed and made salient.
It is complemented by three perspectives: the theory of conspiracy, which informs what is being constructed; social semiotics, which explains how the multimodal construction occurs; and the sociology of emotions, which deepens understanding of how these constructions move and engage.

Framing and Meaning Making

Drawing on a Goffmanian view, framing is understood as the process by which individuals organize experiences and render events meaningful through interpretive schemas or frames, which serve as guidelines for interpreting and categorizing social reality (Goffman, 1974). Framing is conceptualized here as a cognitive, communicative, and social practice through which aspects of reality are strategically selected, emphasized, or downplayed to construct specific interpretations. This perspective is especially useful for analyzing how complex or contested issues, such as climate change, are simplified and made salient through communicative practices (Geise & Baden, 2015). Framing theory’s strength lies in its focus on how ideas, culture, and ideology are interpreted and combined within specific situations to create ideative patterns through which audiences understand the world (Lindekilde, 2014). While framing and discourse analysis share some foundations, they differ in scope and emphasis. Framing analysis is particularly concerned with the cognitive and strategic dimensions of communication, namely how social actors deliberately construct meaning to influence interpretation, whereas discourse analysis offers a broader lens on language, power, and social practice (Lindekilde, 2014). Framing can thus be seen as a sub-variant of discourse analysis, offering a focused analytical lens for examining climate conspiracies on TikTok. The theoretical conceptualization of framing is not restricted to language; frames can be constructed through any mode, such as text, visuals, or sound, that conveys interpretable meaning (Geise & Baden, 2015).
A multimodal framing approach acknowledges that meaning unfolds through the interaction of multiple modes, each contributing uniquely to how frames are constructed and received (Geise & Baden, 2015; Brennen et al., 2020; Powell et al., 2015; Pearce et al., 2020). Ultimately, information from visual, textual, or other modalities contributes to the same interpretive process of making sense of reality (Geise & Baden, 2015).

Conspiracies

The analysis adopts a cultural perspective on conspiracy theories, viewing them as social manifestations and alternative epistemological frameworks that refute official accounts and propose hidden explanations of events or practices by attributing covert intent to powerful actors (Fenster, 2008; Nefes, 2013). The theoretical conceptualization of a conspiracy includes three dimensions: the plot, the conspirators, and the epistemic dimension (inspired by Marczyński et al., 2025). The secret plot denotes a hidden scheme orchestrated by powerful actors with sinister intentions (Douglas et al., 2019; Uscinski, 2018), grounded in the belief that a secret group covertly controls the political and social order or portions thereof through secret machinations (Fenster, 2008; Mahl et al., 2023). This plot is thought to explain the ultimate causes of significant historical or ongoing events and practices (Douglas et al., 2019; Uscinski, 2018; Mahl et al., 2023). The conspirators, individuals or groups acting in secret (Mahl et al., 2023), are cast as responsible for creating disorder, using their influence to shape events behind the scenes. These conspirators are believed to control events, thereby providing structure and meaning to seemingly random or chaotic occurrences (Muirhead & Rosenblum, 2018).
The epistemic dimension consists of a hidden truth, believed to be concealed from the public and discoverable only through investigation (Fenster, 2008; Muirhead & Rosenblum, 2018), and it includes an alternative explanation of events: unique epistemological accounts that reject official narratives and offer counter-knowledge (Uscinski, 2018; Mahl et al., 2023; Ylä-Anttila, 2018).

Social semiotics

To theorize how modes interact in meaning-making, this study draws on Kress and van Leeuwen’s (2020) social semiotics, a theory that explains how meaning is created through signs, or units of meaning, embedded in various modes as tools of communication in social and cultural contexts. Modality acts as a central resource for meaning-making (Kress, 2010), and multimodality is the interplay of different communicative modes, such as sound, image, text, or design elements (Kress & van Leeuwen, 2020). Each mode possesses distinct communicative affordances and constraints, fulfilling different functions depending on its cultural, social, and situational context (Kress & van Leeuwen, 2020). A key aspect of multimodal theory (Kress & van Leeuwen, 2020; Kress, 2010; Dicks, 2014; Rose, 2013) and multimodal framing analysis (Geise & Baden, 2015; Brennen et al., 2020; Powell et al., 2015; Pearce et al., 2020) is its focus on how various modes work together to guide attention, shape salience, and construct meaning around a central organizing idea. Analyzing a single modality is insufficient, as it overlooks the interplay between modes and the meaning that emerges from their alignment, contradiction, or integration (Geise & Baden, 2015; Brennen et al., 2020). A foundational theoretical assertion in social semiotics is that visual communication has its own grammar for generating meaning, challenging the assumption that visual images simply reflect reality “as it is” (Rose, 2013; Kress & van Leeuwen, 2020).
In Kress and van Leeuwen’s (2020) theoretical framework, images are not neutral windows onto the world but structured compositions that actively participate in meaning-making. Composition refers to the arrangement of elements, spatial organization, and framing devices that guide emphasis and viewer positioning. Gaze is one such structuring device: when a represented participant looks directly at the viewer, it creates a demand, a sense of interpersonal engagement that asks something of them. An averted gaze, by contrast, positions the viewer as a detached observer, constituting an offer. Social distance is similarly encoded through the image’s scale: a close-up suggests intimacy or urgency, a near distance implies familiarity, and a long shot conveys detachment or impersonal observation. Angle and point-of-view further shape how meaning is construed. The choice of angle (frontal, oblique, high, low, or eye-level) conveys a subjective attitude toward what is shown and can imply power relations: a high angle signals dominance, a low angle suggests subordination, and an eye-level view implies equality. Images also vary in how they encode perspective. Subjective images reveal the scene from a particular viewpoint selected for the viewer. Objective images, such as diagrams, maps, or technical charts, use a central perspective and assert neutrality, essentially stating, I am this way, regardless of who you are. Visuals also make truth claims about what is real through validity markers, features such as color saturation, background detail, brightness, and abstraction. These markers suggest varying levels of realism and credibility. Importantly, different reality types (naturalistic, scientific/technical, or abstract) carry different truth claims, influencing how the image is interpreted and how persuasive its framing becomes.
Naturalistic visuals without color and light modifications may seem more truthful and support claims of reality, but are not always seen as more credible. In scientific contexts, decontextualized, abstract images can hold greater epistemic authority, where ideas of the ‘real’ differ. A technical line drawing might be seen as more valid than a photograph, depending on the communicative purpose and type of knowledge conveyed. Based on van Leeuwen’s (2006, 2012) musical discourse theory, sound also acts as a means of creating meaning. From this perspective, music conveys mood, social distance, and power dynamics. Tonal modality primarily expresses emotional mood: in Western music, major scales are generally associated with a cheerful, lighthearted feeling, while minor scales evoke sadness, seriousness, or tension. Social distance is signaled by volume, which ranges from intimate to public; loud or high-pitched sounds signify assertiveness, power, or urgency, while soft, low, or background sounds suggest intimacy, subtlety, or detachment. These auditory characteristics affect how messages are perceived and interpreted, collaborating with visual and textual modes to shape understanding. Finally, the textual mode is understood as a key framing device that guides and constrains the interpretation of visuals and sound. This is particularly important given the openness of visual modes to multiple, context-dependent meanings (Geise & Baden, 2015; Powell et al., 2015).

Sociology of Emotions

The study takes a sociological perspective on emotion, drawing specifically on Barbalet’s (1998) radical approach, which rejects the separation of emotion and rationality. Instead, emotions are viewed as structurally embedded and essential to action, providing motivation, orientation, and evaluative guidance (Barbalet, 1998; Wettergren, 2019).
Emotions are understood to consist of situational appraisals, bodily sensations, expressive gestures, and culturally embedded labels that make them intelligible (Sieben & Wettergren, 2010). Emotions are not merely internal states but relational and socially produced processes shaped through interaction (Barbalet, 1998). In this approach, emotions are closely tied to social action: they shape how individuals respond to circumstances and anticipate future outcomes. This includes their distancing or approaching functions, which guide individuals either to withdraw from or engage more closely with objects or situations (Barbalet, 1998). Rather than being inherently positive or negative, emotions are understood through their relationship to action and action outcomes (Wettergren, 2019) and how they influence behavioral trajectories over time (Wettergren, 2024), a perspective valuable for understanding how emotions drive behavior and social action. Emotions may also work in combination as companion emotions, where one emotion is used to manage or regulate another (Wettergren, 2024). This study specifically highlights epistemic emotions: emotions with a clear knowledge-driven quality, such as curiosity, interest, fascination, surprise, suspense, certainty, and ‘the feeling of knowing’ (Arango-Muñoz, 2014). These emotions are particularly relevant to conspiracy theories, as they shape how individuals seek, engage with, and evaluate alternative explanations, as well as how they influence behavior. Epistemic emotions are object-focused (Muis et al., 2018) and arise from appraisals of how new information aligns or misaligns with preexisting beliefs or knowledge structures (Törnqvist & Wettergren, 2023; Muis et al., 2018). Among these, curiosity and interest are central to understanding the social phenomenon of conspiracy theories.
Curiosity is an approach-oriented emotion that motivates learning and exploration, drives engagement with novelty, and is particularly activated when individuals encounter unexpected or complex events and attempt to make sense of them (Silvia, 2008). It is intimately linked to its object and the desire to know more, with an action orientation that compels approach (Törnqvist & Wettergren, 2023). It arises from incongruity or an information gap between what one knows and what one seeks to understand (Muis et al., 2018). Curiosity also functions as a social and epistemic drive, motivating and organizing knowledge production in society (Bineth, 2021). Certainty and “the feeling of knowing” are linked to confidence and information saturation (Törnqvist & Wettergren, 2023). Surprise, caused by unexpected events, can draw attention to its source. When a situation is perceived as comprehensible, surprise may inspire curiosity; if not, it can lead to confusion (Muis et al., 2018). Confusion, in turn, arises from appraisals of uncertainty related to novelty, complexity, conflict, or unfamiliarity (Muis et al., 2018). Ultimately, epistemic emotions play a crucial role in motivating cognitive engagement. They stimulate inquiry when individuals feel frustrated by an outcome or encourage them to reassess epistemic standards in the face of doubt or anxiety. This theoretical lens provides a compelling perspective by shifting focus away from questions of belief or factual accuracy of a conspiracy, which can obscure the emotional dynamics that make conspiratorial narratives appealing. Instead, it foregrounds how emotions may shape individuals’ engagement with conspiracy content. It offers insight into the emotional functions these narratives serve and may enhance our understanding of why people engage, while also offering clues to the puzzle of persistent climate inaction. 
Together, these four theoretical perspectives—framing, conspiracy theory, social semiotics, and the sociology of emotions—form the conceptual basis for analyzing meaning-making across visual, textual, and auditory modes, in the context of conspiratorial climate claims on TikTok.

Method

This section outlines the study’s mixed-method research design, data sampling and collection, data processing and analytical approach, as well as considerations of validity, reliability, and ethics.

Methodological Approach and Research Design

The study utilizes a mixed-method case study design, combining inductive large-scale computational multimodal content analysis of TikTok posts with in-depth deductive qualitative analysis of a focused subset of representative posts. This approach facilitates broad pattern identification alongside a deeper understanding of how climate misinformation is constructed and communicated on TikTok. The study’s ontological and epistemological foundations guide its methodology. Grounded in critical realism, the ontological stance posits an independent reality, while acknowledging its socially constructed meaning and effects (Alvesson & Sköldberg, 2017). This perspective views TikTok as a site of cultural production, where platform-specific mechanisms—visuals, text, and music—shape how climate misinformation is expressed and experienced. A constructivist epistemology also acknowledges that knowledge is influenced by cultural, contextual, and interpretive frameworks. By balancing systematic observation with interpretive analysis, this study integrates computational pattern detection and qualitative inquiry, aligning with critical realism’s aim to uncover underlying mechanisms (Alvesson & Sköldberg, 2018) that shape multimodal climate misinformation on TikTok. It adopts a descriptive case study approach suited to complex, real-world phenomena (Yin, 2018), with the limitation of single-case designs offset by a mixed-method design.
Given the scale and rapid turnover of online information, along with the complexity of multimodal communication on platforms like TikTok, using a large dataset with inductive computational methods is essential for detecting recurring and unexpected patterns. In this “era of data abundance,” small-n qualitative approaches risk missing these patterns; these human methods are also too slow to keep pace with the speed of digital media and face challenges in providing a panoramic view of vast, image-rich environments (Baele et al., 2025). At the same time, computational methods have limitations. While their inductive logic allows for scalable, bias-reducing exploration, they are often theory-blind, struggle to interpret latent meaning, and fall short in delivering the nuanced, contextualized insights offered by human interpretation (Baele et al., 2025).

Computational Grounded Theory

This study draws on computational grounded theory (Nelson, 2020), which integrates large-scale, inductive pattern detection with human-led qualitative interpretation. This approach enables themes to emerge algorithmically from extensive datasets, which are then examined through theory-informed analysis (Fig. 2).

Fig. 2: Multimodal Computational Grounded Theory Approach

Combining distant computational analysis with close qualitative reading offers a robust framework for understanding climate misinformation on TikTok, addressing macro-level patterns and micro-level details (Lindgren, 2020). By merging the breadth of computational scale with the depth of human interpretation, this approach offsets the limitations of each method and enhances conceptual and interpretive richness (Baele et al., 2025).

Computational Multimodal Topic Modelling

For the computational multimodal analysis, I utilize recent advancements in artificial intelligence, specifically unsupervised machine learning models that detect patterns inductively without predefined guidance (Baele et al., 2025).
These models generate embeddings, which are numerical representations of patterns expressed as high-dimensional vectors that often consist of hundreds of values. These vectors are positioned in a semantic space, where the relative distance between them signifies similarity. Since these embeddings are numerical and spatially comparable, they enable computational comparisons across modalities such as text, visuals, and audio, facilitating integrated multimodal analysis. This study employs three models, each optimized for a specific modality: CLIP (Radford et al., 2021), a vision-language model, generates visual embeddings that capture visual similarity, reflecting how images look and feel, based on features such as objects, composition, and color (e.g., a radar map of a hurricane ≈ a shoreline hit by storm surge). Sentence-BERT (Reimers & Gurevych, 2019) generates text embeddings that capture semantic similarity, how sentences mean similar things, based on contextual meaning at the sentence level (e.g., “I’m happy” ≈ “I feel great”), rather than relying on individual word tokens or patterns of word co-occurrence. OpenL3 (Cramer et al., 2019) generates audio embeddings that capture acoustic similarity, how the sound or music feels based on patterns in rhythm, tone, and mood (e.g., dramatic orchestral music ≈ tense cinematic soundtrack). CLIP and BERT were selected for their established use in computational multimodal analysis (Baele et al., 2025; Törnberg, 2024). Given the novelty of sound integration, OpenL3 was chosen for its ability to produce interpretable variation in music after testing multiple models. BERTopic (Grootendorst, 2022), a topic modeling framework, then analyzes multimodal content by combining the separate embeddings generated for text, visuals, and audio for each TikTok post. Each modality’s embedding exists in its own semantic space, reflecting different types of similarity: linguistic, visual, and acoustic. 
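To illustrate the idea of similarity as distance in a semantic space, consider a minimal Python sketch. The vectors below are invented four-dimensional stand-ins; real embeddings from models such as Sentence-BERT, CLIP, or OpenL3 have hundreds of dimensions, but the comparison logic is the same.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors; values near 1.0
    mean the vectors point in nearly the same direction (high similarity)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Invented toy "embeddings" for three texts (not real model outputs).
happy = [0.9, 0.1, 0.0, 0.2]   # "I'm happy"
great = [0.8, 0.2, 0.1, 0.3]   # "I feel great"
storm = [0.0, 0.9, 0.8, 0.1]   # "A storm hit the coast"

# Semantically related items sit closer together in the space.
assert cosine_similarity(happy, great) > cosine_similarity(happy, storm)
```

The same distance logic applies within each modality's space, which is what makes the visual, textual, and audio patterns computationally comparable.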
To integrate these diverse modalities, a feature-level early fusion approach (Barnum et al., 2020) is used, where the three embeddings are concatenated into a single joint vector for each post. BERTopic then clusters these based on combined patterns of similarity, resulting in a multimodal grouping that captures co-occurring patterns across language, imagery, and sound, each of which constitutes a topic. The number of topics produced by the model is determined inductively, based on the distribution and distinctiveness of patterns in the data. Each topic includes a list of associated textual keywords identified by the model as representative of its content, a list of posts assigned to that topic, and a ranking of the most representative posts: those whose combined textual, visual, and audio features are closest to the topic’s semantic center. This enables the creation of visual grids that display representative images alongside prominent audio segments, supporting a multimodal understanding of each topic.

Qualitative Distant & Deep Reading

Computational methods reveal large-scale patterns but require human interpretation. The output allows for two levels of qualitative engagement: a panoramic overview of dataset patterns via distant reading and a computationally guided selection of posts for in-depth analysis, ensuring unbiased selection and systematic exploration of each topic. By anchoring qualitative analysis in computational outputs, the method avoids cherry-picking (Törnberg & Törnberg, 2016) while enabling nuanced, context-sensitive interpretation. Distant reading enables a broader understanding of the topics; the advantage of not ‘close reading’ lies in its capacity to focus on units both smaller and larger than each post, enabling exploration of patterns across keywords, visuals, and audio without requiring close examination of every post (Lindgren, 2020).
This is complemented by what Nelson (2020) calls computationally guided deep reading, a close deductive analysis of representative posts. This supports the development of thick descriptions: context-rich, interpretive accounts that illuminate situated meaning (Geertz, 1973) and contribute to establishing qualitative rigor and analytical credibility (Tracy, 2010). While technically complex, this approach builds on my background in computer science, with over 20 years of software development experience before re-entering academia. Further, I co-authored a published study using topic modelling (LDA) to analyze YouTube climate misinformation (Vallström & Törnberg, 2025), as well as a network analysis of the Swedish climate countermovement (Törnberg & Vallström, 2025). This foundation supports both the methodological design and a sustained research focus on climate misinformation.

Empirical Data Sampling and Collection

This study utilizes a dataset of 7,658 TikTok posts, with videos usually ranging from under a minute to three minutes in length, identified and collected through purposive sampling and automated scripting. The purposive sampling strategy built on prior research that highlighted where climate misinformation clusters on TikTok, particularly content about weather manipulation and hashtags like #weathercontrol and #weathermanipulation (Baltasar et al., 2024). Recognizing that hashtags alone do not reliably indicate message content (Hautea et al., 2021), the sampling was expanded through targeted keyword searches, combining misinformation-related terms (e.g., “weathercontrol”) with event-specific keywords (e.g., “flood,” “wildfire”) to capture the connection between climate misinformation and extreme weather (CAAD, 2024) (see Appendix 2 for all queries). This approach ensured broad yet thematically focused coverage. Although it introduced false positives, these were filtered out in later stages.
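The combinatorial query strategy can be sketched as follows. The term lists here are illustrative fragments taken from the examples above; the full query set is documented in Appendix 2.

```python
from itertools import product

# Illustrative subsets of the actual query terms (see Appendix 2).
misinfo_terms = ["weathercontrol", "weathermanipulation"]
event_terms = ["flood", "wildfire"]

# Standalone misinformation terms plus every misinfo-by-event combination,
# capturing the link between misinformation and extreme-weather events.
queries = list(misinfo_terms)
queries += [f"{m} {e}" for m, e in product(misinfo_terms, event_terms)]

assert "weathercontrol flood" in queries
```

Each resulting query string was then submitted repeatedly through the scraping setup described below.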
The sampling was limited to English-language content to maintain interpretive consistency. Data collection occurred in two steps. First, posts were gathered using an automated scripting method via Apify, a public automation platform for TikTok data collection. The script ran in repeated iterations over a three-month period (January 15 – April 15, 2025), continuously querying TikTok for publicly available videos that matched the defined search terms and hashtags. It retrieved posts without regard to publication date, spanning from July 2019 to April 2025. The extended querying window, along with rotating proxy networks across global locations to reduce location-based content surfacing biases, aimed to maximize coverage. The collection ended once the queries started yielding few new results, resulting in a final dataset of 7,658 posts after excluding advertisement content. In the second step, the relevant data for each TikTok post identified in step one was retrieved using automated scripts (Table 1). This process involved downloading two types of data for each post: (1) the TikTok video file, which includes visual content, overlay text, and the audio file, and (2) post-level metadata, which consists of the video description, subtitles, and play count. Notably, the subtitle file automatically generated by TikTok to support accessibility (TikTok, n.d.) offers a reliable source of spoken content. This feature overcomes limitations in previous studies (e.g., Baltasar et al., 2024) that encountered challenges with automatic transcription due to interference from music lyrics. By using the subtitle file, this study circumvents that issue and enables accurate capture of narrated content.

Data Processing and Analytical Approach

The data processing and analytical approach followed a two-stage mixed-methods structure, beginning with computational pattern detection in the TikTok dataset, followed by qualitative analysis of the identified patterns.
Computational Pattern Detection

This process involved three key steps: data preprocessing to extract the modalities, embedding generation, and the multimodal topic modeling itself.

Modality Extraction

The first step involved extracting and combining the collected data into three distinct modalities: visuals, text, and sound (Table 2), through an automated process implemented via custom Python scripts and supporting tools (see Appendix 1 for a complete list of tools). All posts in the dataset contained visual video content, which was converted into image frames using a method that captures only visually distinct frames. This approach reduces redundancy, preserves meaningful visual variation, and optimizes the dataset for analysis. As a result, over 100,000 JPEGs formed the visual component of the dataset. Approximately 80% of the posts included music. For these, instrumental tracks were extracted to eliminate lyrics that could interfere with audio analysis, producing a cleaner music dataset. The instrumental tracks were saved in high-quality WAV format to ensure compatibility and optimized performance for audio embedding, forming the music component of the dataset. The textual component was constructed by combining video descriptions (available for nearly all posts), overlay text extracted from video frames (present in about 90% of posts), and spoken narration extracted from subtitle files (roughly 55% of posts). Together, these sources provided at least one textual element for virtually every post, constituting the language-based component of the dataset. All extracted data was tagged with unique post-level identifiers and compiled into a structured file format (JSONL) to facilitate reliable linking of each element to its original TikTok post.

Modality Embeddings

The three modalities were programmatically transformed into embeddings using CLIP for visuals, Sentence-BERT for text, and OpenL3 for audio.
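Returning briefly to the frame-extraction step: the selection of "visually distinct frames" can be approximated with a simple running difference threshold. This is an illustrative sketch (toy grayscale frames as flat pixel lists, and a hypothetical threshold value), not the pipeline's actual implementation.

```python
def mean_abs_diff(frame_a, frame_b):
    """Average absolute per-pixel difference between two grayscale frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def keep_distinct_frames(frames, threshold=30):
    """Keep a frame only if it differs noticeably from the last kept frame,
    reducing redundancy while preserving meaningful visual variation."""
    kept = [frames[0]]
    for frame in frames[1:]:
        if mean_abs_diff(frame, kept[-1]) > threshold:
            kept.append(frame)
    return kept

# Two near-duplicate frames followed by a scene change: only the first
# frame and the changed frame survive.
frames = [[10, 10, 10, 10], [12, 11, 10, 9], [200, 200, 200, 200]]
assert len(keep_distinct_frames(frames)) == 2
```

Production pipelines typically rely on scene-detection or perceptual-hashing libraries for this step; the threshold is a tunable assumption.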
The textual data were cleaned through lemmatization, which reduces words to their base form, and stopword removal, which filters out common words like “the” or “and” that carry little meaning. This process improves the model’s ability to detect meaningful patterns by minimizing noise in the data. Posts lacking text or music had their embeddings substituted with zero vectors. This allowed the model to account for partially missing entries, treating the absence as a signal rather than an exclusion criterion, thus facilitating pattern detection across modalities even when one was missing.

Multimodal Topic Modelling

Once the three modalities were transformed into embeddings, they were combined into a unified representation for each post. This enabled multimodal analysis through the BERTopic framework, generating topics that captured distinct patterns in the TikTok dataset. A two-step topic modelling process was conducted. The first step aimed to filter out irrelevant content introduced by the broadened search strategy, thereby narrowing the focus to misinformation-related posts. The full dataset of 7,658 posts was processed through BERTopic using equally weighted textual and visual embeddings, ensuring that neither modality dominated topic formation. Audio was excluded at this stage to prioritize content-based patterns, since the objective was content exclusion. The model produced 114 topics. To identify those relevant to climate misinformation, a distant reading was conducted, reviewing topic content to exclude topics unrelated to the study’s aims. This process yielded a refined subset of 2,075 posts for deeper multimodal analysis. In the second step, combined embeddings of text, visuals, and music were processed through BERTopic on the refined subset of 2,075 posts. Each modality was given equal weight to prevent any single mode from dominating pattern detection and to support the most inductive approach possible.
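The fusion of per-modality embeddings into one joint vector per post, with zero-vector substitution for missing modalities, can be sketched as follows. Dimensions and vector values here are toy stand-ins; the real models emit vectors of several hundred dimensions each, and the actual weighting scheme is an assumption of this sketch.

```python
def l2_normalize(v):
    """Scale a vector to unit length so no modality dominates by magnitude."""
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v] if norm else list(v)

def fuse(text_emb, visual_emb, audio_emb, dims=(4, 4, 4)):
    """Feature-level early fusion: substitute a zero vector for any missing
    modality, then concatenate the three parts into one joint vector."""
    parts = []
    for emb, dim in zip((text_emb, visual_emb, audio_emb), dims):
        parts += l2_normalize(emb) if emb is not None else [0.0] * dim
    return parts

# A post with text and visuals but no music: the audio slots become zeros,
# so the post stays in the analysis instead of being dropped.
joint = fuse([0.2, 0.8, 0.1, 0.4], [0.5, 0.5, 0.1, 0.2], None)
assert len(joint) == 12 and joint[8:] == [0.0] * 4
```

These joint vectors are what the clustering step operates on; normalizing each sub-vector before concatenation is one way to keep the modalities on a comparable scale.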
Similarly, no constraints were imposed on the number of topics, allowing the model to surface patterns freely based on the data. This process generated 38 distinct topics, a manageable number for subsequent qualitative analysis.

Qualitative Analysis

Identifying a Panoramic View

To gain a panoramic view of the TikTok dataset and interpret each topic’s thematic focus, I conducted a distant reading of the model’s output by reviewing the ten most representative posts for each topic. Based on this qualitative review, I excluded 11 topics due to their limited relevance to climate misinformation. The remaining 27 topics, comprising 1,220 posts, formed the basis for the subsequent in-depth multimodal framing analysis. Each of these 27 topics was assigned a title derived from this review, along with model-generated keywords, representative images from the top posts, and a descriptive music label, which I assigned with support from a music emotion detection tool (Kang & Herremans, 2025).

Deductive Qualitative Framing Analysis

The final stage of analysis involved the qualitative, theory-informed multimodal conspiracy framing analysis of representative TikTok posts drawn from the 27 topics. For each topic, I analyzed the top-ranked representative posts, as determined by the model, typically five to eight per topic, until reaching theoretical saturation. This multimodal conspiracy framing analysis was guided by the integrated theoretical framework developed in the preceding section, which synthesizes multimodal framing, conspiracy theory, social semiotics, and the sociology of emotions. Each post was coded for the core dimensions of conspiratorial narratives: (1) the secret plot disrupting the established order; (2) the conspirators allegedly acting in secrecy; and (3) the epistemic dimension involving hidden truths or alternative explanations.
Visual framing was coded for composition (elements, framing devices, spatial organization), gaze, social distance (close-up, near, far), attitude (point-of-view, angle), validity markers (e.g., color, saturation, brightness), and reality type (scientific, naturalistic, abstract). Sound was assessed for presence (none, narration, or music); where music was present, mood and auditory distance (background or foreground) were coded to capture emotional and communicative effects. Textual elements (overlay captions, post descriptions with hashtags, and subtitles) were analyzed as framing devices, examining how they guided, constrained, or interacted with other modes in constructing meaning. To capture the emotional dimension, the analysis focused on two key clusters: fear-related emotions (e.g., anxiety, suspense, powerlessness), especially relevant to climate crisis framing; and epistemic emotions (e.g., curiosity, interest, the ‘feeling of knowing’), central to conspiratorial framing. These were identified across visual, textual, and sonic cues, focusing on how emotions were invoked, implied, or amplified in each video and how they served to direct attention, guide interpretation, and encourage engagement. Recognizing the interpretive challenges in researching emotions, such as when emotions are concealed in the data and the disparity between expressed and felt emotions (Flam & Kleres, 2015), I focused on the emotions I perceived the creator aimed to evoke. While I used my own responses as sensitizing tools, I drew on emotion theory concepts for guidance, as “we need theory to support our interpretations of emotions” (Flam & Kleres, 2015, p. 123).

Reliability and Validity

The computational approach offers a rigorous yet interpretive content analysis (Nelson, 2020).
It facilitates the exploration of large datasets, inductively identifying patterns and highlighting representative videos, which increases the validity of generalizations while reducing researcher bias. Reproducibility is supported through algorithmic classification and a transparent account of data processing and methods, enabling independent replication and meeting standards of intersubjectivity (Nelson, 2020). Upholding these standards with large datasets requires rigorous documentation and validation to ensure transparency, reliability, and reproducibility. To maintain data quality, each post received a unique identifier to enable traceability and prevent data corruption, supported by extensive debugging scripts that validated integrity, post counts, and cross-modal alignment. Each modality’s embeddings were also independently tested prior to integration. Text extracted from images was manually spot-checked, addressing minor issues like special characters in overlay text through stopword filtering. HTML dashboards enabled structured inspection of topic outputs across images, audio, and text. All code was version-controlled, and methodological memos were consistently maintained to document analytical decisions, ensuring transparency and traceability.

However, limitations persist. Researchers’ choices in tools, algorithms, and data sources may introduce bias, and qualitative interpretation remains influenced by individual perspectives. The use of unsupervised models adds further complexity, as they function as ‘black boxes,’ offering limited transparency into how topics are formed. Outputs are shaped by opaque design choices and pre-training biases, which can complicate interpretation and reduce methodological clarity. Furthermore, the multimodal design introduces analytical complexity, which can pose challenges for peer review, especially among readers who are unfamiliar with computational methods.
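The integrity checks described above (post counts and cross-modal alignment) can be sketched as a small validation routine. This is a hypothetical illustration, not the thesis's debugging scripts: the function name, input structure (dicts keyed by post identifier), and report format are all assumptions.

```python
def validate_alignment(post_ids, frames, texts, audio):
    """Sketch of a cross-modal integrity check (structure is illustrative).
    post_ids: the set of known unique post identifiers.
    frames/texts/audio: per-modality mappings keyed by post identifier.
    Returns (errors, coverage): orphaned-entry errors and the share of
    posts covered by each modality."""
    post_ids = set(post_ids)
    modalities = (("frames", frames), ("text", texts), ("audio", audio))
    errors = []
    for name, modality in modalities:
        # Any entry whose key is not a known post ID breaks traceability.
        orphans = set(modality) - post_ids
        if orphans:
            errors.append(f"{name}: {len(orphans)} orphaned entries")
    coverage = {
        name: len(set(modality) & post_ids) / len(post_ids)
        for name, modality in modalities
    }
    return errors, coverage
```

Running such a check after each processing stage makes silent data loss visible early, for example a modality whose coverage unexpectedly drops, or extracted elements that can no longer be linked to their source post.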
Beyond model-related limitations, this study adopts early fusion, a design choice with specific trade-offs. It provides a flexible and interpretable approach that is well-suited for exploratory topic modeling with specialized, non-jointly trained models, such as sentence-level text and music models. Mapping modalities into a shared semantic space could enable deeper alignment, but typically requires large, aligned datasets and joint training. Lastly, a limitation of collecting TikTok data is its non-deterministic nature. The platform’s algorithmic filtering, geographic variations, and content moderation, particularly regarding climate misinformation, mean that scraping captures only the content visible at the time of collection rather than providing a comprehensive archive of all relevant posts. Despite these challenges, this design seeks to meet Tracy’s (2010) standards for quality qualitative research. It addresses the pressing issue of climate misinformation, ensuring topical relevance. Rigor is demonstrated through a computational mixed-methods approach, grounded in prior research, applied to a large dataset, and supported by thorough analysis. Ethical and reflexive practices ensured transparency and privacy sensitivity. Finally, the study achieves coherence by offering clear insights that deepen our understanding of climate misinformation on digital platforms.

Ethics

The study involves humans and adheres to ethical guidelines to minimize harm (Vetenskapsrådet, 2024). Digital data use poses challenges, especially with covert research where participants are unaware of their involvement, raising concerns about informed consent and misrepresentation; such research may be ethically acceptable if well justified (O’Reilly, 2009). Digital research praxis differentiates public and private spaces by accessibility, user awareness, and membership (BSA, 2017).
This study used only open-access data that does not require a login; therefore, the content is considered part of the public domain. No user profile data was collected, and engagement metrics were used solely in aggregate. Visual data were managed with care to protect participant identities and ensure responsible use (Marshall, 2022); faces were blurred in posts featuring identifiable individuals, except in reposted or news content where subjects already appeared in public contexts. All data was stored locally on a secure computer, and machine learning models and tools were downloaded, scripted, and executed in an isolated environment to prevent any transmission to public or cloud platforms. To mitigate misrepresentation, reflexivity was maintained by being self-aware of subjective influences and by using demonstrative examples to ensure a transparent analysis. This stance prompts reflection on my positionality as a privileged, middle-aged white woman with a 20-year background in computer science, influencing my systematic, computational, and problem-solving approach. While this ensures methodological rigor, it may bias my interpretation, reflecting a positivist mindset often found in engineering and limiting engagement with ambiguity or subjective nuance. My familiarity with data modeling risks overconfidence in computational outputs, possibly leading to overvaluing algorithmic clarity at the expense of nuance in meaning-making. This reflection guided my decision to adopt a theoretically grounded approach in the qualitative analysis.

Findings & Analysis

In this section, I first address RQ1 by presenting findings from the computational topic modeling, offering a panoramic overview of key themes in the TikTok video corpus. I then turn to RQ2, using multimodal framing analysis to examine how conspiracy claims are constructed and to explore the emotional orientations they invoke.
Panoramic Overview of Topics

Using computational topic modelling on the TikTok corpus, with equal weighting given to visual, textual, and auditory modalities, the model identified key themes in the data by generating 38 topics (Appendix 3). A qualitative ‘distant reading’ of the outputs, based on the ten most representative TikTok posts per topic, was conducted to interpret their thematic content. Following this review, 11 topics (comprising 855 posts) were excluded due to limited relevance to the study’s focus on climate misinformation (see Appendix 3 for exclusion details). This left 27 topics, represented by 1,220 TikTok posts that collectively garnered 1.44 billion views, relevant to climate misinformation related to weather manipulation and extreme weather events. To provide a panoramic overview of the thematic landscape of climate conspiracy content on TikTok and address RQ1, Table 3 presents the 27 topics identified by the computational model. Each topic includes a description that captures its essence based on the review, along with textual keywords, visual examples, music or audio descriptions, and total views. To increase interpretability for the reader, topics are organized by qualitative description and grouped by conceptually related narratives, with some topics sharing descriptions due to only minor differences, such as music. The paragraphs following the table elaborate on each topic in further detail. Note: T2 lacks keyword output, an issue that can occur in multimodal fusion.

Table 3. The 27 topics generated by the computational model, with descriptions from the qualitative review, model-generated keywords, visual examples, music descriptions, and each topic’s prevalence and views. Topics are sorted by description similarity.
T14 focuses on NASA's cloud-seeding technologies, featuring a recurring clip—originally aired on the BBC—of an alleged “cloud machine,” typically paired with explanatory narration and mysterious, ominous background music. Related content appears in T8, which covers Dubai’s cloud-seeding efforts using narration alone, and in T2, which takes a broader view of cloud-seeding and weather manipulation, combining narration with ominous music. T29 and T28 address weather manipulation via sun-blocking technologies, using documentary-style explanations of geoengineering to reflect sunlight and cool the Earth, with narration either alone or paired with educational-style music. A set of topics focuses on sky imagery: T30 and T6 feature cloud formations, airplane trails referred to as contrails or chemtrails, and atmospheric scenes with personal spoken narratives referencing cloud seeding or chemtrails, typically without music. T3 presents similar visuals but uses only mysterious-sounding music, omitting narration. T7 also showcases sky imagery but frames it as evidence of geoengineering rather than cloud seeding, combining mysterious and ominous music without narration. Similarly, T16 uses mysterious and ominous music without narration, focusing on strange fog imagery in the UK through user-generated clips that imply unnatural or manipulated origins. T5 presents footage of snowstorms and so-called “fake snow.” Notably, most posts simply document snowfall, while only a subset shows users filming “strange” snow or unexpected smells, accompanied by hashtags or narratives implying a “weather machine.” These videos typically pair narration with mysterious background music. T23 and T27 both present reports and commentary on a 2012 CBS News segment featuring physicist Michio Kaku discussing weather manipulation using lasers, with the difference that T23 consists primarily of narration alone, while T27 incorporates narration alongside mysterious and ominous background music.
T21, T32, and T20 focus on documentary-style compilations referencing historical weather control efforts and government projects. All use mysterious and ominous music but vary in narration: T21 includes full narration, T32 features partial narration with both background and foreground music, and T20 combines narration with background music only. T31 and T33 both feature cartoon-based, meme-style videos using The Simpsons to depict disasters, fires, floods, cold spells, and power outages, as predicted or manipulated events. T31 includes narration with danger-themed, ominous music, while T33 uses partial narration with a blend of comedic and ominous music. T10 and T34 feature AI-generated “what if” scenarios about manipulated weather, using cartoon-like visuals to narrate hypothetical consequences. Both include narration with mysterious, ominous, or majestic music. Five topics center on hurricanes and cyclone weather events. T22 and T17 focus on Hurricane Milton in the U.S., mostly showing radar visuals and storm trajectories. T22 includes no narration or music, while T17 uses energetic, danger-themed foreground music without narration. Most posts offer straightforward updates, and only a small subset subtly implies that the radar imagery may indicate signs of manipulation. T35 and T18 both cover Cyclone Alfred in Australia, using radar footage and foregrounded music without narration. While T35 features danger-themed music and includes only one explicit reference to weather manipulation, T18 predominantly presents manipulation claims, paired with loud, epic, and ominous music. T4 features dramatic footage of Hurricane Milton’s landfall, combining storm impact visuals with personal commentary. While most posts reflect on the destruction, a small subset suggests that “something strange” is happening. These videos blend partial narration with danger-themed music in either background or foreground layers.
In some cases, they feature soulful music while the video shows how the hurricane devastates everything in its path. T4 accounts for the highest share of views in the corpus (9%), and the five topics focused on hurricanes and cyclones together represent about 70% of total views, which may reflect findings that extreme weather events trigger intensified emotional engagement and content sharing on TikTok (Brown et al., 2024). Importantly, however, these high-view topics did not consist solely of misinformation; rather, conspiracy claims were often embedded within broader, largely legitimate content, such as standard storm updates, which likely contributed to the increased view counts. Lastly, T19 presents DIY-style content on weather manipulation through witchcraft, showing individuals speaking to the camera or displaying witchcraft manuals and ceremonies, accompanied by narration and mysterious music. T9 features humorous clips of users tapping on-screen weather emojis to simulate controlling the weather. While primarily playful in tone, with positive background music, a few posts express genuine concerns about manipulated weather. In summary, the multimodal topic modelling revealed a distinct set of topics and themes in the corpus. Narratives centered on weather manipulation and control dominated most topics, though they varied in focus, from specific weather manipulation technologies and historical accounts of manipulation projects to sky imagery, hurricane updates, cartoons, and AI-generated content playing on future potentials and fears of weather manipulation, and humorous DIY weather control. Notably, in certain topics—like hurricane radar updates, snowstorm documentation, and emoji-controlled weather—claims of weather manipulation were found only in a subset of posts. The analysis also demonstrated strong semantic coherence at the topic level, with alignment across visual, textual, and audio elements.
Variation across topics was sometimes driven by audio, with topics appearing textually and visually similar yet differing in soundtrack. Furthermore, identical ominous or mysterious audio tracks were often reused across posts and topics, highlighting the repetitive, templated nature of sound usage. Similarly, nearly a third of the topics recycled the same visuals. The surprising strength of this semantic coherence may be attributed to TikTok’s memetic logic, which encourages the replication of both sound and visual formats (Brown et al., 2024; Zulli & Zulli, 2022) and may help explain the consistency in topic patterns observed across modalities. This consistency also aligns with findings that image-based misinformation tends to be highly repetitive, aiding its recognizability and virality (Yang, 2023). Having outlined the multimodal topic structure of the corpus through computational topic modeling, and having addressed RQ1 by presenting the thematic contours of climate conspiracy content, the analysis now shifts to RQ2: an in-depth qualitative multimodal framing analysis of how climate and weather conspiracy claims are constructed through visuals, text, sound, and emotion, and the interpretive and emotional orientations they invoke.

Multimodal Framing Analysis of Conspiracy Claims

The following section presents the theory-informed, interpretive analysis of how climate conspiracy narratives are constructed. It examines how meaning is produced through the framing of the secret plot, the identification of conspirators, and the construction of an epistemic dimension, with interpretation guided by the multimodal conspiracy framing framework and previous research.

The Secret Plot

The secret plot’s overarching theme is that weather manipulation technologies are covertly used by various national states to induce extreme, abnormal, and dangerous weather events, such as devastating hurricanes, floods, strange snow, and toxic fog, to exert control.
This plot is constructed multimodally. In the topics focusing on technologies, the visuals (Fig. 3) adopt a reportage style with naturalistic coloring, objective frontal angles, and primarily near social distance, avoiding close-ups except in news segments (3d), where interviewees speak to reporters instead of directly to the viewer, preserving the news-reporter reality (Kress & van Leeuwen, 2020), reinforced by the absence of music. Visual markers like the NASA sign (3a), visible BBC logo, and the #bbc hashtag enhance credibility. The visual truth is anchored in realism, aligning with a naturalistic reality type to signal an objective attitude (Kress & van Leeuwen, 2020).

Fig. 3 (a–f).

In other weather control technology posts, visuals adopt a scientific or technical reality type (Fig. 4), using explanatory diagrams that lend scientific and objective salience to the claims.

Fig. 4 (a–e).

However, the naturalistic, objective, and scientific realism in the visuals is contextualized with overlay text, which serves as a key framing device. It guides interpretation and constrains the meaning of an otherwise objective visual frame. Phrases like “If you think the government can’t control the weather?” (3a) or, in the post on Dubai’s use of these technologies, “Do you think the US does it to us too?” (3d), insinuate the plot. These framings may evoke fear and a sense of powerlessness by suggesting that governments covertly control powerful technologies capable of manipulating extreme weather events, framed as the real cause of “global warming” (3f). Some posts go further, framing geoengineering as "dangerous technologies” (4e). These framings recontextualize real technologies like geoengineering and cloud seeding through a conspiratorial lens, exaggerating their power and depicting them as hidden tools of control.
When paired with loud, foregrounded ominous music, which indicates close social distance (van Leeuwen, 2012), this intensifies emotional proximity and heightens the sense of fear and urgency regarding the perceived risks of these technologies. At the same time, the content evokes epistemic emotions: surprise through overlays like “Most amazing” (3b) and “That’s unbelievable” (3c), and curiosity through rhetorical questions, visual cues like the NASA sign (3a), and red-highlighted prompts such as “think again.” The use of mysterious and ominous background music reinforces this tone, framing the content as something hidden or unresolved that invites further exploration. These emotional dynamics can be understood through the concept of companion emotions (Wettergren, 2024), where fear and powerlessness are heightened not only by the ominous tone and visual cues but are also managed or redirected by epistemic emotions such as surprise and curiosity, shifting attention and action from fear toward investigation and knowledge-seeking (Törnqvist & Wettergren, 2023). TikTok post descriptions that combine text and hashtags further reinforce the central organizing idea that weather manipulation technologies are the hidden cause of extreme weather and the climate crisis. Phrases such as “Controlling weather disasters through cloud machines,” “It is not global warming,” and “The fabrication of global warming/climate change, geoengineering!” appear alongside hashtags like #dangeroustechnology, #weathercontrol, #climatecrisis, and #hurricanes, guiding interpretation and framing these events as manufactured. These descriptions also contribute to the emotional framing.
While fear may be evoked through terms like “controlling weather disasters” and specific extreme disaster-related hashtags such as #hurricanemilton, curiosity is sparked by questions like “Cloud seeding?” and complemented by hashtags like #interestingfacts, #science, and #hurricanes, subtly suggesting that weather events like Hurricane Milton are linked to technological interventions. These textual elements frame causality, positioning the technologies as responsible for current climate events, and help structure the broader conspiratorial plot. Thus, the textual mode serves a crucial framing function by guiding and limiting how visuals and sound are interpreted. This is especially vital due to the inherent openness of visual modes, which can yield various context-dependent meanings (Geise & Baden, 2015; Powell et al., 2015). In sum, while the visuals present seeds of truth, drawing on objective and real reportage about technologies that exist, the multimodal framing constructs a central organizing idea: these technologies are more powerful than we are led to believe, they are used in secret, and they are the true cause of the climate crisis and extreme weather events. This plot framing differs from the most referenced climate conspiracies, which depict climate change as a hoax orchestrated by climate science and policy (Coan et al., 2021; Uscinski et al., 2017; Muirhead & Rosenblum, 2018). However, it partially mirrors chemtrail conspiracies in that it similarly frames the plot around covert weather control (Tingley & Wagner, 2017). Notably, through this framing, the conspiracy plot subverts several core claims by turning typical climate obstruction narratives upside down: climate change is happening, it is dangerous, it is urgent, and it is caused by humans (Coan et al., 2021). This framing shifts the cause from general human activity to a specific group of powerful actors, “they”, the conspirators, who are secretly manipulating the weather.
The Conspirators

The overarching theme of how the conspirators are framed suggests that U.S. government institutions are the primary actors behind weather manipulation. Additionally, the technologies are portrayed as spreading and advancing, increasing their prominence and potential to empower new conspirators, potentially leading to more severe future outcomes. The conspirators are often named through overlay text placed on otherwise objective-style visuals, such as photos of government facilities, scientific graphics, or radar imagery, echoing earlier patterns of naturalistic or scientific reality to convey objectivity, now used to enhance the credibility of the attribution (Kress & van Leeuwen, 2020). The U.S. government is framed as the primary conspirator, frequently represented through specific institutions such as HAARP (5a–c), NASA (5d), or the CIA (5e). HAARP is the most prominent, appearing across nearly all topics. It is a U.S. research initiative that studies the Earth’s atmosphere, originally military-funded and now university-operated, but often misrepresented in conspiracy theories as responsible for engineering natural disasters.

Fig. 5 (a–g).

At times, specific conspiratorial attribution is conveyed through text and hashtags, such as “MAN-ipulated Weather” inserted into radar footage (5g), and a #HAARP hashtag clarifying who the conspirator is. In other cases, the conspiratorial attribution is absent from any visual cues but conveyed through hashtags only, like #haarpweathercontrol, #NASA, #governmentsecrets, #governmentcoverup, and #treason, thereby framing weather phenomena as a government-led manipulation. Some also reference other governments, such as Dubai, in relation to cloud seeding, or invoke #ukgovernment and #CERN, suggesting that weather manipulation involves a broader network of global conspirators beyond the United States.
Identifying the government as conspirators intersects with previous studies on climate conspiracies, which often target institutional bodies (Uscinski et al., 2017). However, it sets itself apart by not concentrating on climate scientists, proponents of socialism, or advocates for a global one-world government. Additionally, with the clear concern for identifying operators plotting in the shadows, it diverges from the emerging form of ‘new conspiracism,’ which lacks both this focus and a clear plot (Muirhead & Rosenblum, 2018). The topics involving The Simpsons meme-style videos—depicting disasters like fires, floods, cold spells, and power outages as predicted or manipulated events—and those featuring AI-generated “what if” scenarios do not rely on a naturalistic visual reality type (Fig. 6a–g). Instead, they draw on an abstract and stylistic aesthetic, characterized by saturated colors, exaggerated forms, and an artificial visual style.

Fig. 6 (a–g).

Sometimes, visuals do not aim to depict reality as it is but rather to construct a different kind of real (Kress & van Leeuwen, 2020)—in this case, imagined futures where weather manipulation technologies (6a, b) have advanced, proliferated, and become weaponized, thereby amplifying both the power and scope of the conspirators (6c) and the severity of the resulting disasters (6d, e, g). Through heightened colors and artificial visual styles, these posts evoke a “more than real” effect (Kress & van Leeuwen, 2020, p. 155). Rather than aiming to provide proof, these posts highlight potential consequences and future escalation, using a more-than-real aesthetic to increase the salience of futurism. They function less as claims of current reality and more as speculative warnings, planting the seed of what may come and inviting viewers to “prepare for the new normal” (6f).
Further, these posts do not name specific conspirators but maintain a hypothetical framing, alluding to vague, imagined future actors behind escalating weather control. Textual elements help emphasize these growing dangers by linking them to real-world climate events, using hashtags such as #floods, #lafires, #climateemergency, #weathercontrol, and #climatechange. These features can be interpreted as triggering the salience of an idea of an escalating threat: powerful weather control technologies in the wrong hands. However, these framings can also be read through the lens of mass culture, where conspiracy narratives circulate as shared references, rooted in film, television, and internet meme culture (Fenster, 2008). AI and cartoon predictions are reanimated through TikTok’s meme-driven, humorous style (Zulli & Zulli, 2022), just as topics on DIY weather control and emoji-tapping reflect playful, satirical, or spiritual expressions of weather manipulation. In sum, the conspirators are framed as state governments, primarily the U.S., using weather manipulation technologies for power and control. Imagined futures suggest more actors may emerge as other governments adopt these tools. Unlike typical climate conspiracies, blame is not directed at climate scientists faking a crisis. By identifying conspirators and presenting a structured plot, this framing aligns more with traditional conspiracy theories than with the emerging concept of “new conspiracism,” which lacks both. The result is a conspiracy that operates through an epistemic logic in which truth is hidden, awaiting revelation through careful observation and decoding.

The Epistemic Dimension

The overarching theme of the epistemic dimension is that intentional weather manipulation is presented as an alternative explanation for the existence of climate change and extreme weather.
This is presented as a hidden truth revealed through evidence, supported by historical accounts, current technologies visible in plain sight, and subtle clues scattered across the skies, snow, and radar maps, and sometimes, in the escalation of devastating hurricanes. Posts referencing historical weather control projects (Fig. 7) frame their claims through multimodal cues of credibility and continuity. Visually, these posts draw on archival imagery, black-and-white footage, low-resolution photographs, and document scans resembling copier-quality reproductions, which collectively signal authenticity and historical legitimacy of evidence (Kress & van Leeuwen, 2020).

Fig. 7 (a–f).

Overlay text naming “Project Cirrus” and “Project Storm Fury,” along with specific years (e.g., 1947, 1962, 1964), enhances the perceived credibility of the evidence. Documentary-style narration, paired with mysterious and ominous background music, frames the content as both revealing and consequential. Narration, such as “During the Vietnam War, the United States actually used weather as a weapon,” and “Project Cirrus is the first official attempt to modify a hurricane… the first real demonstration that man was able to alter the course of the weather,” positions the material as evidence of long-standing capabilities. Using hashtags like #hurricane, #milton, #cirrus, and #manipulation establishes intertextual connections that link historical references directly to contemporary events like Hurricane Milton. This set of posts is a strong example of visually framed misinformation, as the framing gains epistemic legitimacy through the indexical quality of the visuals, which seem to provide direct, documentary-style evidence of reality (Hameleers et al., 2020).
Yet, the meaning is manipulated through decontextualization, with real visuals paired with misleading text (Hameleers et al., 2020), highlighting selective aspects of reality to guide interpretation (Brennen et al., 2020). By drawing on real historical projects and presenting them as causal evidence of present-day climate manipulation, these posts contribute to the conspiratorial narrative by reframing historical facts as proof of an ongoing hidden agenda. This content also exemplifies alternative knowledge: claims that remain compatible with facts, resist easy falsification, and may in some cases be accurate (Ylä-Anttila, 2018). Similarly, the frequently reposted 2012 CBS News segment featuring a physics professor presents the clip as retrospective proof (Fig. 8). The framing implies the continuity of weather manipulation and may also invite viewers to engage in detective work, interpreting the information for themselves (Fenster, 2008), encouraged by overlay text such as “This aired 9 years ago” and “I mean what did those really work then?”, prompting viewers to connect the dots: if such technologies were publicly acknowledged then, they must now be even more advanced.

Fig. 8 (a–d).

This framing can be seen as intensified by either ominous or mysterious music, which reinforces a sense of hidden knowledge, or by the original news narration, which lends credibility. These multimodal cues help distill complex issues into emotionally resonant, interpretive packages (Powell et al., 2015), evoking epistemic emotions like curiosity and prompting closer engagement (Barbalet, 1998; Muis et al., 2018). Perceiving the information as comprehensible may also foster confidence and certainty through the feeling of knowing (Törnqvist & Wettergren, 2023), further reinforced by captions like “truth about climate change” and hashtags such as #truth, #hurricane, #wildfires, #lasers, #weathercontrol, and #climatechange.
As with earlier examples, the evocation of epistemic emotions can be understood through the concept of companion emotions: fear of extreme weather or climate change is eased through a fear–curiosity oscillation, a dynamic that can lead to fixation on the conspiracy itself, suppressing the original fear object, the climate crisis, and ultimately diverting attention away from it (Wettergren, 2024). At the same time, the framing helps construct a unique epistemological perspective on climate change, centered on the organizing idea that weather manipulation is the true and concealed cause of extreme weather and the climate crisis. Another form of evidence emerges through subtle cues that, when examined closely, reveal signs of hidden weather manipulation. A substantial portion of topics presents this evidence through visual documentation of the natural environment, the sky, fog, and snow, as sites of observation and suspicion (Figs. 9–10). Fig. 9(a–d) These images can be interpreted as offering evidence through a subjective attitude constructed by a first-person gaze, positioning the viewer as both observer and truth-seeker. The upward angle may also suggest a power difference, with the sky as a source of power at an impersonal social distance (Kress & van Leeuwen, 2020). The reality type is naturalistic, with minimal modification to color, detail, and light: a visual realism that constructs a truth based on what is visibly present in the physical world (Kress & van Leeuwen, 2020). Overlay text also acts as a validity marker, using timestamps (9a–c) to enhance the sense of empirical observation and evidence collection, along with specific counts like “3 planes at the same time” (9d), which may imply coordinated activity. Similarly, location markers on strange fog across the UK, US, and Canada (Fig. 10) support evidence collection while hinting at a global phenomenon.
Fig. 10(a–c) The overlay text poses epistemic prompts, such as “Is it me or does something seem off with this fog??? !” (11a), “Anyone else see this change” (11c), or “Are these clouds even real?” (11d), inviting interpretation and evoking epistemic emotions such as curiosity. Fig. 11(a–d) This exemplifies curiosity as an approach-oriented emotion arising from incongruity (Muis et al., 2018) between expected and observed weather phenomena, which may motivate exploration and knowledge-seeking (Silvia, 2008). Curiosity directs attention and inquiry toward resolving perceived anomalies, illustrating how epistemic emotion engages and drives behavior in response to uncertainty (Silvia, 2008; Törnqvist & Wettergren, 2023), here toward investigation (Fenster, 2008). These evidence-collecting posts reflect a key dimension of counter-knowledge (Ylä-Anttila, 2018): a belief in truth as something achievable through inquiry, not via mainstream experts but grounded in the experiences and observations of ordinary people, who possess such knowledge by virtue of their proximity to everyday life. Some posts feature no background music, relying solely on the natural sounds of the recording, reinforcing a naturalistic and realistic framing. In others, mysterious or ominous music, often foregrounded, frames the visual phenomena as both fear-inducing and curiosity-provoking, while the absence of narration reinforces the idea that the visual evidence speaks for itself. This taps into the picture superiority effect (Powell et al., 2015), whereby visuals act as particularly powerful semiotic resources, eliciting stronger emotional responses than text, responses further shaped by the music framing. Textual elements and hashtags, such as #inplainsight, #truth, #weatherchange, and #chemtrails, link these personal observations and empirical collections to the broader conspiratorial narrative.
Another type of evidence focuses on extreme weather events, such as hurricanes, as large-scale manifestations of weather manipulation, with radar maps serving as the primary visual proof across related posts and topics (Fig. 12). Fig. 12(a–e) Unlike the naturalistic visuals, these posts feature radar imagery with highly saturated and contrasting colors, typically set against dark backgrounds and dominated by accentuated reds and greens. The reality type is scientific, given the use of radar data; the visuals also adopt an objective attitude, supported by a central, built-in detachment created through a perpendicular top-down angle. However, the intensified use of color heightens the salience of danger; this visual doctoring (Hameleers et al., 2020) is far less present in radar posts that simply provide storm updates without manipulation claims. The social distance of the bird’s-eye view is impersonal and detached, but this is countered by the common use of loud, danger-themed, energetic background music, which reduces emotional distance and frames the event as dangerous, urgent, and close (van Leeuwen, 2012); mysterious music, common in other topics, is largely absent here. Overlay text plays a framing role, ranging from declarative claims, “Cyclone Alfred was manipulated & here is the proof” (12a), to epistemic prompts, “What is this on my radar?” (12b), and clue-based statements like “Looks like low-frequency radio waves being focused at the storm” (12c). In the case of a radar map with spiky triangular signals directed toward Valencia (12e), the visual composition, in social semiotic terms, implies a transactional structure in which vector shapes suggest an actor–goal relationship, as if the storm is being deliberately targeted (Kress & van Leeuwen, 2020). The triangle-like forms amplify this effect, symbolizing power, action, and conflict (Kress & van Leeuwen, 2020, p. 54), reinforcing the framing of the event as intentional rather than natural, further supported by hashtags such as #weathermanipulation, #haarp, #suspicious, and #climatechange. A final set of posts (Fig. 13) presents emotionally charged, first-person responses to Hurricane Milton. Filmed in real time with a direct-to-camera gaze and close social distance, these posts create a strong interpersonal demand, inviting the viewer into an intimate, subjective exchange (Kress & van Leeuwen, 2020). Fig. 13 The naturalistic visual realism and the absence of background music reinforce the framing as authentic and immediate, enhancing the sense of a genuine personal encounter. Creators express emotional overwhelm while posing epistemic prompts such as “What if these pilots are the ones doing the weather modification?” and “I could be crazy… but has anybody else thought about that?” These rhetorical questions do not assert truth but rather invite interpretation, signaling uncertainty and doubt. They can be understood through epistemic emotions, which are especially activated when individuals encounter unexpected or complex events and try to make sense of them (Silvia, 2008; Muis et al., 2018). In this context, confusion or anxiety arising from being in the midst of the storm may act as emotional triggers that prompt inquiry and motivate closer cognitive engagement (Muis et al., 2018). This exemplifies how epistemic emotions such as curiosity redirect emotional energy toward sense-making, which can, in turn, generate a feeling of knowing, facilitating cognitive alignment and emotional relief by organizing uncertainty into a coherent alternative explanation (Törnqvist & Wettergren, 2023). Hashtags like #weathermanipulation and #hurricaneMilton act as textual markers reinforcing the conspiratorial narrative as that alternative explanation.
In summary, these framings present weather manipulation as the hidden, true cause of extreme weather and climate change, supported by historical references, scientific artefacts, and subjective observations interpreted as empirical evidence. Through multimodal strategies and emotionally resonant framing, fear-related and epistemic emotions prompt users to interpret anomalies, connect the dots, and construct an alternative epistemological account rooted in everyday observation and inquiry. The emotional shift from fear to curiosity is amplified by music, questioning overlays, and visual cues, transforming uncertainty into a drive for investigation and certainty. This may offer emotional relief through continued engagement, but it ultimately diverts attention toward the conspiracy and away from meaningful climate action. Conclusion This study examined a large corpus of climate misinformation on TikTok, specifically weather-related conspiracy theories, by first identifying thematic patterns to gain a panoramic view of the corpus and then analyzing how these climate claims are constructed multimodally to shape interpretive and emotional orientations. The computational topic modeling revealed a distinct set of themes focused on weather manipulation, spanning specific technologies, historical accounts, sky imagery, hurricane updates, AI-generated and cartoon content imagining future scenarios, and humorous DIY weather control. It also uncovered strong cross-modal coherence and extensive recycling of visuals and music. This semantic coherence aligns with findings on TikTok’s templated, memetic culture and the repetitive nature of visual misinformation (Brown et al., 2024; Zulli & Zulli, 2022; Yang et al., 2023). The framing analysis revealed a coherent narrative that reinterprets extreme weather and climate change as the result of covert technological interventions by powerful yet concealed actors, often governmental or military entities.
This alternative explanation is constructed through the interplay of visual evidence, overlay captions, narration, emotionally charged music, and attributive hashtags. It presents the conspiracy as a suppressed truth uncovered through signs in the sky, historical clips, radar maps, and scientific imagery, invoking fear while simultaneously channeling epistemic emotions toward investigation and revelation. From this framing, three key insights emerge. First, it reveals a distinct type of climate conspiracy claim that does not fit neatly within established climate misinformation taxonomies (Coan et al., 2021). Rather than denying or discrediting climate science, it constructs a narrative that affirms climate change but reframes its cause. As such, it falls outside the logic underpinning dominant climate conspiracy claims, which typically rely on the “climate science is unreliable” superclaim, making the claim more intricate and harder to categorize, and highlighting limitations in current typologies. Second, this new type of climate conspiracy claim not only bursts beyond the boundaries of traditional climate conspiracies; it also fundamentally inverts the core superclaims typically used in online climate contrarian arguments. Instead of denying or downplaying climate change, the framing affirms its reality, urgency, and danger in relation to extreme weather, attributing it instead to covert technological interventions by powerful actors. In other words: (1) it is happening; (2) it is us, human-caused (by “them”); (3) it is bad; and (4) the “solutions work” (but perhaps too well). By flipping the script, this framing presents challenges for existing online climate misinformation detection efforts. Third, the framing creates a fear–curiosity oscillation: while it invokes fear, this fear is continually redirected through epistemic emotions.
Whether the conspiracy is “believed” is secondary; what matters is how the emotional framing, understood through the sociology of emotions (Wettergren, 2019, 2024; Barbalet, 1998), provides relief by transforming fear and uncertainty into curiosity. This sustained focus on uncovering the conspiracy distracts from the original fear object, the climate crisis, and ultimately diverts engagement from meaningful climate action, illustrating how epistemic emotions can direct attention toward conspiratorial narratives instead of climate solutions, even in the eye of a storm. These findings were made possible by the multimodal, computational, and mixed-method approach, revealing patterns and framings unlikely to be captured through single-mode or manual analysis. This extends the typology of climate conspiracy claims, broadens the understanding of multimodal climate misinformation, and demonstrates how claims evolve in response to TikTok’s distinctive media logic, contributing to a fuller picture of the shifting dynamics of climate misinformation, the challenges it presents for detection and classification, and how climate conspiracies may act as a form of delay by redirecting attention toward conspiracy engagement. Discussion The finding that misinformation is nested within regular content challenges the logic of echo chambers. Instead of staying within closed networks, conspiratorial content may spread more widely on TikTok through algorithmic surfacing, embedded within otherwise factual posts, thus extending its reach to unintended viewers. This is particularly notable given that extreme weather topics contained this type of embedded content, and that such events not only serve as windows of opportunity to influence climate attitudes and increase acceptance of climate change (Ruttenauer, 2024) but are also moments when many users turn to the platform for information (Brown et al., 2024).
Hence, the presence of misinformation at these times is especially consequential. This also points to a potential limitation: the initial exclusion of seemingly irrelevant topics, which reduced the dataset from 7,658 to 1,220 posts, may have filtered out substantial embedded misinformation, raising questions about whether TikTok truly hosts relatively little climate misinformation (Duan et al., 2024; Basch et al., 2022; Baltasar et al., 2024). This underscores the need for research that examines misinformation in context: how it is embedded, surfaced, and interpreted within everyday content flows beyond echo chambers and predefined interest groups. The climate conspiracy has all the elements of a classical conspiracy, a plot, named conspirators, and hidden truths, but is constructed across multiple posts. This fragmentation is likely shaped by TikTok’s platform affordances: the short, memetic format encourages narratives to unfold across posts rather than within one. This fragmented structure may help explain the emergence of “new conspiracism” (Muirhead & Rosenblum, 2018): what appears vague or incoherent at the level of an individual post may result from neglecting how conspiracy narratives are surfaced, pieced together, and sustained across new media platforms like TikTok. This suggests an avenue for future research into how fragmented claims unfold within users’ content flows on platforms like TikTok. This study focused solely on the content; future research could investigate engagement and responses through comment analysis. Similarly, while sound was crucial in the framing analysis, its role in topic modelling, where similar topics differed only in soundtrack, was not analytically utilized. Future work could examine how audio variation relates to engagement. The theoretical approach in this study was fruitful, particularly in analyzing visuals and music, which are hard to interpret. However, a different theoretical lens might have provided other insights.
For instance, the strong presence of epistemic emotions may arise from platform affordances designed to maximize engagement; curiosity might function as a tool to maintain user attention rather than solely reflecting a conspiratorial logic of truth-seeking. Further, the study did not explore the humor, irony, or sarcasm inherent in TikTok's cultural dynamics, which could also significantly shape how climate conspiracy content is constructed. Both are promising directions for future research. While based on a specific case, this study enables analytical generalization (Halkier, 2011) regarding the role of multimodal elements in shaping climate misinformation, which may enhance understanding of other types of misinformation on TikTok. Although platform affordances limit generalization beyond TikTok, the methodological approach is transferable across platforms and represents a promising avenue for future studies. References Almiron, N., Boykoff, M., Narberhaus, M., & Heras, F. (2020). Dominant counter-frames in influential climate contrarian European think tanks. Climatic Change, 162(4), 2003-2020. Almiron, N., Rodrigo-Alsina, M., & Moreno, J. A. (2022). Manufacturing ignorance: think tanks, climate change and the animal-based diet. Environmental Politics, 31(4), 576-597. Alvesson, M., & Sköldberg, K. (2018). Reflexive methodology: New vistas for qualitative research (3rd ed.). Los Angeles: Sage. Arango-Muñoz, S. (2014). The nature of epistemic feelings. Philosophical Psychology, 27(2), 193-211. https://doi.org/10.1080/09515089.2012.732002 Baele, S. J., Brace, L., & Naserian, E. (2025). More is More: Scaling up Online Extremism and Terrorism Research with Computer Vision. Perspectives on Terrorism, 29. Baltasar, C., Maceiras, S. D. A., Martín, A., & Camacho, D. (2024). Analysis of Climate Change Misleading Information in TikTok. Countering Disinformation with Artificial Intelligence 2024 (CODAI 2024). Barbalet, J. (1998).
Emotion, Social Theory, and Social Structure: A Macrosociological Approach. Cambridge University Press. https://doi.org/10.1017/CBO9780511488740 Barnum, G., Talukder, S., & Yue, Y. (2020). On the Benefits of Early Fusion in Multimodal Representation Learning. https://arxiv.org/abs/2011.07191 Basch, C. H., Yalamanchili, B., & Fera, J. (2022). Climate Change on TikTok: A Content Analysis of Videos. Journal of Community Health, 47(1), 163-167. https://doi.org/10.1007/s10900-021-01031-x Bineth, A. (2023). Towards a sociology of curiosity: theoretical and empirical consideration of the epistemic drive notion. Theory and Society, 52(1), 119-144. https://doi.org/10.1007/s11186-021-09464-y Bohm, G., & Pfister, H.-R. (2024). Exploring climate change discourses on the internet: a topic modeling study across ten years. Journal of Risk Research, 1-28. https://doi.org/10.1080/13669877.2024.2387337 Brennen, B. S. (2012). Qualitative Research Methods for Media Studies: An Introduction. Taylor & Francis Group. http://ebookcentral.proquest.com/lib/gu/detail.action?docID=1075433 Brennen, J. S., Simon, F. M., & Nielsen, R. K. (2020). Beyond (Mis)Representation: Visuals in COVID-19 Misinformation. The International Journal of Press/Politics, 26(1), 277-299. https://doi.org/10.1177/1940161220964780 Brown, Y., Pini, B., & Pavlidis, A. (2024). Affective design and memetic qualities: Generating affect and political engagement through bushfire TikToks. Journal of Sociology, 60(1), 121-137. https://doi.org/10.1177/14407833221110267 Brulle, R. J. (2021). Networks of Opposition: A Structural Analysis of U.S. Climate Change Countermovement Coalitions 1989–2015. Sociological Inquiry, 91(3), 603-624. https://doi.org/10.1111/soin.12333 Brulle, R. J., Hall, G., Loy, L., & Schell-Smith, K. (2021). Obstructing action: foundation funding and US climate change counter-movement organizations. Climatic Change, 166(1-2). https://doi.org/10.1007/s10584-021-03117-w CAAD.
(2024). Extreme Weather, Extreme Content: How Big Tech Enables Climate Disinformation. Climate Action Against Disinformation. https://caad.info/analysis/reports/extreme-weather-extreme-content/ Capstick, S. B., & Pidgeon, N. F. (2014). What is climate change scepticism? Examination of the concept using a mixed methods study of the UK public. Global Environmental Change, 24, 389-401. Cassegård, C., & Thörn, H. (2022). Post-apocalyptic environmentalism: The green movement in times of catastrophe. Basingstoke: Palgrave Macmillan. Center, P. R. (2024). Social Media Fact Sheet. Pew Research Center. Retrieved April 2, 2025, from https://www.pewresearch.org/internet/fact-sheet/social-media/ Coan, T. G., Boussalis, C., Cook, J., & Nanko, M. O. (2021). Computer-assisted classification of contrarian claims about climate change. Scientific Reports, 11(1), 22320. https://doi.org/10.1038/s41598-021-01714-4 Cohen, S. (2001). States of denial: Knowing about atrocities and suffering. Cambridge: Polity Press. Cook, J., Supran, G., Lewandowsky, S., Oreskes, N., & Maibach, E. (2019). America Misled: How the fossil fuel industry deliberately misled Americans about climate change. Cramer, A. L., Wu, H.-H., Salamon, J., & Bello, J. P. (2019). Look, listen, and learn more: Design choices for deep audio embeddings. ICASSP 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Dicks, B. (2014). Action, experience, communication: three methodological paradigms for researching multimodal and multisensory settings. Qualitative Research, 14(6), 656-674. https://doi.org/10.1177/1468794113501687 Douglas, K. M., Uscinski, J. E., Sutton, R. M., Cichocka, A., Nefes, T., Ang, C. S., & Deravi, F. (2019). Understanding conspiracy theories. Political Psychology, 40, 3-35. Duan, Y., Khoury, C., Joh, U., Smith, A. O., Cousin, C., & Hemsley, J. (2024).
Comparing Climate Change Content and Comments across Instagram Reels, TikTok, and YouTube Shorts and Long Videos. Proceedings of the Association for Information Science and Technology, 61(1), 103-114. https://doi.org/10.1002/pra2.1012 Dunlap, R. E., & McCright, A. M. (2015). Challenging climate change. Climate change and society: Sociological perspectives, 300. Ekberg, K., Forchtner, B., Hultman, M., & Jylhä, K. M. (2023). Climate Obstruction: How Denial, Delay and Inaction are Heating the Planet (1st ed.). Oxford: Routledge. https://doi.org/10.4324/9781003181132 Fenster, M. (2008). Conspiracy Theories: Secrecy and Power in American Culture (Rev. and updated ed.). Univ of Minnesota Press. https://www.upress.umn.edu/9780816654949/conspiracy-theories/ Flam, H., & Kleres, J. (2015). Methods of Exploring Emotions (1st ed.). Oxford: Routledge. https://doi.org/10.4324/9781315756530 Forchtner, B. (2023). Visualising far-right environments: Communication and the politics of nature. Manchester: Manchester University Press. Geertz, C. (1973). Thick description: Toward an interpretive theory of culture. In The interpretation of cultures: Selected essays (pp. 310-323). New York, NY: Basic Books. Geise, S., & Baden, C. (2015). Putting the Image Back Into the Frame: Modeling the Linkage Between Visual Communication and Frame-Processing Theory. Communication Theory, 25(1), 46-69. https://doi.org/10.1111/comt.12048 Gerbaudo, P. (2024). TikTok and the algorithmic transformation of social media publics: From social networks to social interest clusters. New Media & Society, 14614448241304106. Goffman, E. (1974). Frame analysis: An essay on the organization of experience. New York: Harper & Row. Grootendorst, M. (2022). BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv preprint arXiv:2203.05794. Hameleers, M., Powell, T.
E., Van Der Meer, T. G. L. A., & Bos, L. (2020). A Picture Paints a Thousand Lies? The Effects and Mechanisms of Multimodal Disinformation and Rebuttals Disseminated via Social Media. Political Communication, 37(2), 281-301. https://doi.org/10.1080/10584609.2019.1674979 Hautea, S., Parks, P., Takahashi, B., & Zeng, J. (2021). Showing They Care (Or Don’t): Affective Publics and Ambivalent Climate Activism on TikTok. Social Media + Society, 7(2). https://doi.org/10.1177/20563051211012344 Hofstadter, R. (1964). The Paranoid Style in American Politics. Harper's Magazine, 229(1374), 77. Holder, F., Mirza, S., Carbone, J., & McKie, R. E. (2023). Climate obstruction and Facebook advertising: how a sample of climate obstruction organizations use social media to disseminate discourses of delay. Climatic Change, 176(2), 16. Huber, B., Lepenies, R., Quesada Baena, L., & Allgaier, J. (2022). Beyond Individualized Responsibility Attributions? How Eco Influencers Communicate Sustainability on TikTok. Environmental Communication, 16(6), 713-722. https://doi.org/10.1080/17524032.2022.2131868 Kang, J., & Herremans, D. (2025). Towards Unified Music Emotion Recognition across Dimensional and Categorical Models. Cornell University. https://arxiv.org/abs/2502.03979 Kleres, J., & Wettergren, Å. (2017). Fear, hope, anger, and guilt in climate activism. Social Movement Studies, 16(5), 507-519. https://doi.org/10.1080/14742837.2017.1344546 Kress, G. (2010). Multimodality: A social semiotic approach to contemporary communication (1st ed.). Oxford: Routledge. https://doi.org/10.4324/9780203970034 Kress, G., & Van Leeuwen, T. (2020). Reading Images: The Grammar of Visual Design (3rd ed.). London: Routledge. Lamb, W. F., Mattioli, G., Levi, S., Roberts, J. T., Capstick, S., Creutzig, F., Minx, J. C., Müller-Hansen, F., Culhane, T., & Steinberger, J. K. (2020). Discourses of climate delay. Global Sustainability, 3, e17. Lewandowsky, S. (2021).
Climate Change Disinformation and How to Combat It. Annual Review of Public Health, 42(1), 1-21. https://doi.org/10.1146/annurev-publhealth-090419-102409 Lindekilde, L. (2014). Discourse and Frame Analysis: In-Depth Analysis of Qualitative Data in Social Movement Research. In D. Della Porta (Ed.), Methodological practices in social movement research (pp. 195-227). Oxford: Oxford University Press. Lindgren, S. (2020). Data theory: Interpretive sociology and computational methods. Cambridge; Medford, MA: Polity. Mahl, D., Schäfer, M. S., & Zeng, J. (2023). Conspiracy theories in online environments: An interdisciplinary literature review and agenda for future research. New Media & Society, 25(7), 1781-1801. Marczyński, P., Brack, N., & Papaioannou, K. (2025). Why do politicians share conspiracy theories on social media? The role of ideology, incumbency, and electoral cycle. European Consortium for Political Research (ECPR). Marshall, C. (2022). Designing qualitative research (7th ed.). Thousand Oaks, CA: SAGE Publishing. McCright, A. M., & Dunlap, R. E. (2011). Cool dudes: The denial of climate change among conservative white males in the United States. Global Environmental Change, 21(4), 1163-1172. https://doi.org/10.1016/j.gloenvcha.2011.06.003 McKie, R. E. (2019). Climate change counter movement neutralization techniques: a typology to examine the climate change counter movement. Sociological Inquiry, 89(2), 288-316. McKie, R. E. (2021). Obstruction, delay, and transnationalism: Examining the online climate change counter-movement. Energy Research & Social Science, 80, 102217. McKie, R. E. (2023). The Foundations of the Climate Change Counter Movement: United States of America. In The Climate Change Counter Movement: How the Fossil Fuel Industry Sought to Delay Climate Action (pp. 19-50). Springer. Muirhead, R., & Rosenblum, N. (2018). The new conspiracists. Dissent, 65(1), 51-60. Muis, K.
R., Chevrier, M., & Singh, C. A. (2018). The Role of Epistemic Emotions in Personal Epistemology and Self-Regulated Learning. Educational Psychologist, 53(3), 165-184. https://doi.org/10.1080/00461520.2017.1421465 Nefes, T. S. (2013). Political parties' perceptions and uses of anti-Semitic conspiracy theories in Turkey. The Sociological Review, 61(2), 247-264. https://doi.org/10.1111/1467-954X.12016 Nelson, L. K. (2020). Computational Grounded Theory: A Methodological Framework. Sociological Methods & Research, 49(1), 3-42. https://doi.org/10.1177/0049124117729703 Olmsted, K. S. (2019). Real enemies: Conspiracy theories and American democracy, World War I to 9/11. Oxford University Press. Oreskes, N., & Conway, E. M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues From Tobacco Smoke to Global Warming. Bloomsbury Press. Pearce, W., Özkula, S. M., Greene, A. K., Teeling, L., Bansard, J. S., Omena, J. J., & Rabello, E. T. (2020). Visual cross-platform analysis: digital methods to research social media images. Information, Communication & Society, 23(2), 161-180. https://doi.org/10.1080/1369118X.2018.1486871 Powell, T. E., Boomgaarden, H. G., De Swert, K., & de Vreese, C. H. (2015). A Clearer Picture: The Contribution of Visuals and Text to Framing Effects. Journal of Communication, 65(6), 997-1017. https://doi.org/10.1111/jcom.12184 Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., & Clark, J. (2021). Learning transferable visual models from natural language supervision. Rahmstorf, S. (2004). The Climate Sceptics. Potsdam Institute for Climate Impact Research. http://www.pik-potsdam.de/stefan/Publications/Other/rahmstorf_climate_sceptics_2004.pdf Reimers, N., & Gurevych, I. (2019). Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084. Reverby, S. M. (2009).
Examining Tuskegee: The infamous syphilis study and its legacy. Univ of North Carolina Press. Rojas, C., Algra-Maschio, F., Andrejevic, M., Coan, T., Cook, J., & Li, Y.-F. (2024). Hierarchical machine learning models can identify stimuli of climate change misinformation on social media. Communications Earth & Environment, 5(1), 436. https://doi.org/10.1038/s43247-024-01573-7 Rose, G. (2013). Visual Methodologies. In G. Griffin (Ed.), Research Methods for English Studies (2nd ed., pp. 69-92). Edinburgh University Press. Rossi, L., Segerberg, A., Arminio, L., & Magnani, M. (2025). Do You See What I See? Emotional Reaction to Visual Content in the Online Debate About Climate Change. Environmental Communication, 19(3), 449-467. Roxburgh, N., Guan, D., Shin, K. J., Rand, W., Managi, S., Lovelace, R., & Meng, J. (2019). Characterising climate change discourse on social media during extreme weather events. Global Environmental Change, 54, 50-60. https://doi.org/10.1016/j.gloenvcha.2018.11.004 Ruttenauer, T. (2024). More talk, no action? The link between exposure to extreme weather events, climate change belief and pro-environmental behaviour. European Societies, 26(4), 1046-1070. https://doi.org/10.1080/14616696.2023.2277281 Sieben, B., & Wettergren, Å. (Eds.). (2010). Emotionalizing organizations and organizing emotions. New York: Palgrave Macmillan. Silvia, P. J. (2008). Interest: The Curious Emotion. Current Directions in Psychological Science, 17(1), 57-60. https://doi.org/10.1111/j.1467-8721.2008.00548.x Storykit. (2023). Scroll-stopping social media videos: Does music matter? https://storykit.io/blog/social-media-videos-does-sound-matter TikTok. (n.d.). Accessibility for your videos. https://support.tiktok.com/en/using-tiktok/creating-videos/accessibility Tingley, D., & Wagner, G. (2017). Solar geoengineering and the chemtrails conspiracy on social media.
Humanities & Social Sciences Communications, 3(1), 12. https://doi.org/10.1057/s41599-017-0014-3 Törnberg, A. (2024). Images of Climate Denialism: A Computational Multimodal Framing Analysis of Climate Change Misinformation. Political Ecologies of the Far Right. Törnberg, A., & Törnberg, P. (2016). Muslims in social media discourse: Combining topic modeling and critical discourse analysis. Discourse, Context & Media, 13, 132-142. Törnberg, A., & Vallström, V. (2025). Asymmetric Alliances in Climate Misinformation: A Network Analysis of the Swedish Climate Change Countermovement. https://doi.org/10.31235/osf.io/wvgqe_v1 Törnqvist, N., & Wettergren, Å. (2023). Epistemic emotions in prosecutorial decision making. Journal of Law and Society, 50(2), 208-230. https://doi.org/10.1111/jols.12421 Tracy, S. J. (2010). Qualitative Quality: Eight “Big-Tent” Criteria for Excellent Qualitative Research. Qualitative Inquiry, 16(10), 837-851. https://doi.org/10.1177/1077800410383121 Treen, K. M. d. I., Williams, H. T. P., & O'Neill, S. J. (2020). Online misinformation about climate change. Wiley Interdisciplinary Reviews: Climate Change, 11(5), e665. https://doi.org/10.1002/wcc.665 Uscinski, J. E., Douglas, K., & Lewandowsky, S. (2017). Climate change conspiracy theories. In Oxford research encyclopedia of climate science. Uscinski, J. E., & Olivella, S. (2017). The conditional effect of conspiracy thinking on attitudes toward climate change. Research & Politics, 4(4). https://doi.org/10.1177/2053168017743105 Vallström, V., & Törnberg, A. (2025). From YouTube to Parliament: the dual role of political influencers in shaping climate change discourse. Environmental Sociology, 1-16. https://doi.org/10.1080/23251042.2025.2475519 Van Leeuwen, T. (2006). Sound in perspective. In The Discourse Reader (pp. 179-193). Van Leeuwen, T. (2012). The critical analysis of musical discourse. Critical Discourse Studies, 9(4), 319-328. Wettergren, Å. (2024).
Emotionalising hope in times of climate change. Emotions and Society, 7(1), 133-151. https://doi.org/10.1332/26316897Y2024D000000021
Yang, Y., Davis, T., & Hindman, M. (2023). Visual misinformation on Facebook. Journal of Communication, 73(4), 316-328. https://doi.org/10.1093/joc/jqac051
Yin, R. K. (2018). Case study research and applications: Design and methods (6th ed., Kindle edition). SAGE.
Ylä-Anttila, T. (2018). Populist knowledge: 'Post-truth' repertoires of contesting epistemic authorities. European Journal of Cultural and Political Sociology, 5(4), 356-388. https://doi.org/10.1080/23254823.2017.1414620
Zulli, D., & Zulli, D. J. (2022). Extending the Internet meme: Conceptualizing technological mimesis and imitation publics on the TikTok platform. New Media & Society, 24(8), 1872-1890. https://doi.org/10.1177/1461444820983603

Appendix

Appendix 1 – Complete list of tools used

Apify — Data collection (Apify). A publicly available automation platform that enables large-scale TikTok data collection, facilitating the extraction of videos and metadata.

Python — Scripting (Python). A general-purpose programming language used here with various libraries (e.g., pandas, numpy) to automate data preprocessing, analysis, and integration across modalities.

CLIP (Contrastive Language-Image Pretraining) — Visual embedding (CLIP on GitHub). A vision-language model by OpenAI that connects text and images by learning joint representations, allowing for the semantic matching of visual content with textual descriptions.

Sentence-BERT — Text embedding (Sentence-BERT paper). A transformer-based model that extends BERT for producing semantically meaningful sentence embeddings, optimized for tasks such as clustering and semantic similarity.

OpenL3 — Audio embedding (OpenL3 on GitHub; OpenL3 paper). A deep audio embedding model developed by researchers at NYU, trained to extract high-level representations of audio content using a self-supervised learning approach. It captures tonal, rhythmic, and emotional characteristics of audio, making it suitable for analyzing soundtrack mood and structure.

BERTopic — Multimodal topic modelling (BERTopic documentation). A topic modelling technique built on embeddings from BERT, a deep learning model developed by Google, that identifies coherent and interpretable topics in large text corpora. It supports dynamic and multimodal input for unsupervised clustering.

MoviePy — Sound preprocessing (MoviePy on GitHub). A Python library used to extract audio from TikTok video files (MP4) when the original sound was missing from the Apify scrape. It generated MP3 files to ensure complete audio coverage for all videos.

Demucs — Sound preprocessing (Demucs on GitHub). A deep learning model developed by Meta AI for music source separation, used to isolate instrumental audio by removing vocals from TikTok videos. The resulting audio was saved as WAV files, with only clips above a set RMS threshold retained to ensure musical content.

OpenCV — Video preprocessing (OpenCV). An open-source computer vision library used to extract video frames at two-second intervals. Perceptual hashing was applied to filter out near-duplicate frames and retain distinct visual scenes for each TikTok video.

EasyOCR — Text preprocessing (EasyOCR on GitHub). A deep learning-based optical character recognition (OCR) tool used to extract overlay captions from TikTok video frames for text analysis.

MER (Music Emotion Recognition) Music Mood Detector — Music mood detection (MER application; MER model paper). A deep learning-based tool that classifies emotional mood in music using audio features like rhythm, tempo, and tonality. It outputs mood labels (e.g., sad, energetic, calm) for each soundtrack, enabling large-scale analysis of emotional tone in TikTok videos.

Appendix 2 – Search terms (hashtag- and keyword-based)

Note: the lists below include misspelled variants (e.g., "weatercontroll", "climatemanipulaiton"), preserved here exactly as used during collection.

HASHTAGS = ["weathercontrol", "weathermanipulation"]

HASHTAGS = [
    "weathermanipulation", "weatercontroll", "weathercontrol",
    "climateengineering", "climatemanipulaiton", "climatemanipulation",
    "controllingtheweather", "controllingweather"]

HASHTAGS = [
    "climatecontrol", "climateengineering", "climatemanipulaiton",
    "climatemanipulation", "controllingtheweather", "controllingweather",
    "controltheweathercontroltheworld", "weatercontroll", "weathercontrol",
    "weathercontroll", "weathermanipulation", "weathermanipulationgonewrong",
    "weathermodification", "weathermodificationisreal", "wettermanipulation"]

SEARCH_QUERIES = [
    "weathermanipulation flood", "weathermanipulation wildfire",
    "weathermanipulation storm", "weathermanipulation drought",
    "weathermanipulation heatwave", "weathermanipulation tornado",
    "weathermanipulation hurricane", "weathermanipulation typhoon",
    "weathermanipulation extreme weather", "weathermanipulation climate change",
    "weathercontrol flood", "weathercontrol wildfire", "weathercontrol storm",
    "weathercontrol drought", "weathercontrol heatwave", "weathercontrol tornado",
    "weathercontrol hurricane", "weathercontrol typhoon",
    "weathercontrol extreme weather", "weathercontrol climate change"]

SEARCH_QUERIES = [
    "weathermanipulation globalwarming", "weathermanipulation geoengineering",
    "weathermanipulation cloudseeding", "weathermanipulation chemtrails", "weathercontrol
globalwarming", "weathercontrol geoengineering",
    "weathercontrol cloudseeding", "weathercontrol chemtrails"]

SEARCH_QUERIES = [
    "weathermanipulation globalwarming", "weathercontrol globalwarming",
    "weathermanipulation", "weathermanipulation weathercontrol",
    "weathercontrol", "weathercontrol weathermanipulation"]

Appendix 3 – Output from computational topic modeling

This appendix presents the full output of the topic modeling process, which generated 38 topics.

Excluded topics

Following a qualitative review of the ten most representative posts per topic, 11 topics comprising 855 TikTok posts were excluded due to limited relevance to the study's focus on climate misinformation. These included two concert topics, Taylor Swift's "Thunderstorms" (T13) and Lana Del Rey's "Chemtrails" (T37); one containing only German-language posts (T24); and four on general weather content, such as storm chasers (T11), weather apps (T25, T26), and weather reporting (T36). The remaining excluded topics focused on sustainable living (T0, T1), refutations of weather control claims (T12), or unrelated comedic content (T15).

Textual elements

The topic and keyword overview of all 38 topics identified through multimodal topic modeling. Note: some topics display no meaningful keywords, represented only by punctuation (e.g., ",,,,,,,"). Although all posts contained text, this can occur when captions are minimal or low in variance, offering little distinctive signal. Additionally, in multimodal fusion, textual features may be overshadowed by more salient visual or audio embeddings in the joint vector, leading to underspecified keyword outputs.

Visual elements of the topics – some samples
[Figures omitted: sample visual frames per topic.]
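The frame-deduplication step attributed to OpenCV in Appendix 1 (perceptual hashing over frames sampled at two-second intervals) can be illustrated in miniature. The sketch below is not the thesis's actual pipeline: it implements a difference hash directly over grayscale NumPy arrays standing in for decoded video frames, and the `hash_size` and `threshold` values are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def dhash(gray: np.ndarray, hash_size: int = 8) -> int:
    """Difference hash of a grayscale frame (2-D array).

    Block-averages the frame to a (hash_size, hash_size + 1) grid, then
    sets one bit per cell indicating whether it is brighter than its
    right-hand neighbour. Similar frames yield similar bit patterns.
    """
    small = np.zeros((hash_size, hash_size + 1))
    row_edges = np.linspace(0, gray.shape[0], hash_size + 1, dtype=int)
    col_edges = np.linspace(0, gray.shape[1], hash_size + 2, dtype=int)
    for i in range(hash_size):
        for j in range(hash_size + 1):
            small[i, j] = gray[row_edges[i]:row_edges[i + 1],
                               col_edges[j]:col_edges[j + 1]].mean()
    bits = (small[:, 1:] > small[:, :-1]).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def keep_distinct(frames, threshold: int = 10):
    """Retain only frames whose hash differs enough from the last kept one."""
    kept, last = [], None
    for frame in frames:
        h = dhash(frame)
        if last is None or hamming(h, last) > threshold:
            kept.append(frame)
            last = h
    return kept
```

In use, each sampled frame would be converted to grayscale and passed through `keep_distinct`, so that a near-static scene contributes a single representative frame while cuts to new scenes are preserved.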