THE CRISIS IS COMING! – WE ASK CHATGPT. A qualitative study examining whether Generation Z's use of AI services could pose a risk in the event of a crisis.


Abstract

The study explores how Generation Z uses generative AI services such as ChatGPT, Google Gemini and My AI, with a particular focus on gaining a deeper understanding of their usage. Interest in the topic emerged when Internetstiftelsen (2025: chapter 2) published its 2025 report, which showed that the younger generation uses AI services more frequently than traditional methods when seeking information. Combined with our own experiences as members of the target group, this raised the question of whether such behavior could become problematic during a crisis. To answer whether it could pose a risk in a crisis, we wanted to understand why they choose these services and to what extent the respondents trust the answers; this constitutes the main focus of the study. The research addresses the following questions:

- How does Generation Z describe their information seeking through AI services in relation to their needs and motivations?
- Where does Generation Z turn when seeking information during a crisis?
- Why does Generation Z continue to use AI, and how has it changed the way they seek information?

The theoretical framework used to interpret the results is Uses and Gratifications Theory (Ruggiero, 2000), which was applied to analyze the participants' narratives. Previous research on the topic has largely used quantitative methods; in contrast, we wanted to gain a deeper, contextual understanding and therefore chose a qualitative approach. Ten semi-structured interviews were conducted with teenagers and young adults between 15 and 28 years old. The sample was strategic and based on the requirement that participants use AI services at least once a week. The interviews included questions about their usage of these services as well as hypothetical crisis scenarios. Our overall results show that respondents feel uncertain about how to act in a crisis and therefore fall back on their habits.
Since AI usage is habitual for many of them, some respondents found it natural to turn to an AI service. Others stated that they would instead use news sites, Google, or ask a family member, as those actions align better with their habitual patterns. A concerning finding is that only a few respondents considered it natural to double-check the answers they received from an AI service, and those who did often double-checked with another AI service. None of the respondents reported fully trusting AI, yet no one stated that this lack of trust prevented them from using the services daily; the behavior forms a habit in itself. Previous research shows that during stressful situations, such as crises, people rely on their habits. This means that even if they do not fully trust AI and do not use it in every situation, daily use still establishes a strong habit. Applying the Uses and Gratifications framework helps explain how AI services fulfil needs that other platforms do not, which clarifies why respondents continue using AI even when they do not fully trust it. The uncertainty the respondents expressed about how to act in a crisis can be understood in light of the fact that crisis communication in Sweden does not reach them. None of the participants knew how to respond if the most important emergency signal were to sound, which indicates a failure on the part of the authorities. What we observe is that the information exists, but it does not fit the frames of what this generation needs. A report from the analysis company Demoskop (2018) shows that the communication strategy of Swedish risk communication is less effective for younger citizens than for older ones. The Uses and Gratifications perspective helps explain this as a "non-gratification", where the respondents' needs are not satisfied. This restarts the process and leads them to search for information elsewhere, where their needs are fulfilled, in this case through an AI service.
The study discusses the finding that effectiveness consistently outweighs precision: effectiveness emerges as a key criterion when choosing an information source or search tool. The respondents describe that receiving a quick answer is ranked higher than the answer being completely accurate. The respondents' source criticism varies and depends on their prior knowledge. A paradox is identified in that they use a source they do not fully trust because they depend on its speed. The conclusions indicate that three aspects of this usage appear to be associated with risk. The information may be incorrect, as it is provided by services that explicitly warn users about this possibility. The information may also be insufficient, as it is shortened and personalized. In addition, trust in public authorities may decrease. We reflect on the risk that public authorities may lose credibility and that the younger generation may develop limited knowledge, as tendencies toward this have been observed. This highlights the need for authorities to take action in order not to fall behind technologically. They must strive to be present where younger generations are active in order to build knowledge and trust, and to compete with new technologies in meeting these needs. A recommendation for future efforts by Swedish public authorities is to increase their presence and communication in the channels where younger generations are active. We believe that AI services are, and will continue to be, an integral part of society. The focus should therefore be on developing alongside these technologies rather than working against them.

Keywords

ChatGPT, Google Gemini, My AI, Generation Z, Crisis, Crisis communication, Habits, Media habits, Generative AI, AI, Qualitative method, Uses and Gratifications Theory
