Startup interaction with AI
A multiple case study on startups' ethical navigation when using Generative AI tools

May 2024
Helia Hajianhosseinabadi & Nicole Lindau
Supervisor: Rögnvaldur Saemundsson
Master of Science in Knowledge-based Entrepreneurship
Graduate School

Acknowledgements

This section of our thesis is devoted to those who supported us during this research process. We would like to express our true gratitude, as their support was essential for successfully completing our Master's program. First of all, we want to thank our supervisor, Rögnvaldur Saemundsson, who provided fundamental help to us throughout all stages of the research. Thank you for your valuable feedback, expertise, and insights. Furthermore, we would like to extend our gratitude to our networks, who facilitated the process of reaching out to the respondents. To all 15 respondents, we appreciate your interest, time, and participation. Your precious insights helped us to conduct this research and contribute to the startup environment. Finally, we want to acknowledge our time at the School of Business, Economics, and Law at the University of Gothenburg and thank those who have made it truly memorable.

Gothenburg, May 21st, 2024
Helia Hajianhosseinabadi
Nicole Lindau

Abstract

Due to the rapid development of Generative AI, there are still large gaps in the research landscape of AI, sustainability, and entrepreneurship. These gaps make it hard for businesses to navigate and act responsibly, both when adopting Generative AI and when assessing social sustainability performance and benchmarking progress over time. This master thesis seeks to delve into the intricate relationships between Generative AI, ethics, and startups. By examining the current landscape, ethical frameworks and emerging best practices, this study aims to shed light on how and if startups navigate the challenges of Generative AI and integrate sustainable and ethical principles into their Generative AI strategies. The study also intends to explain how startups plan to develop their navigational initiatives and how they can realize those plans. The research follows a qualitative approach with a multiple case study design. The results of the 15 semi-structured interviews showed that informal and individual navigation initiatives are the most common when ethically navigating Generative AI tools. Furthermore, it is in the startups' interest to develop formal navigation initiatives in the future as well as to receive support from external actors to facilitate their ethical navigation. Additionally, the thesis suggests formal ethical practices that startups can implement and highlights the factors affecting the current navigational practices.

Keywords: Social sustainability, ethical navigation, startups, innovation, Generative AI

Table of Contents

Acknowledgements
Abstract
Table of Contents
1. Introduction
1.1 Background
1.2 Problem discussion
1.3 Purpose and research questions
2. Literature review
2.1 Startup definition
2.2 Generative AI definition
2.3 Generative AI usage
2.4 Generative AI ethical concerns
2.4.1 Data collection and bias
2.4.2 Trustworthiness
2.4.3 Job displacement and power shift
2.4.4 Cognitive atrophy
2.5 Ethical AI framework
3. Methodology
3.1 Research approach
3.2 Research design
3.3 Data collection
3.4 Analysis of data
3.5 Assessing research quality
3.6 Ethical considerations
3.7 Limitations
4. Results
4.1 Generative AI usage in startups
4.2 Ethical concerns
4.2.1 Data collection and privacy
4.2.2 Bias
4.2.3 Trustworthiness
4.2.4 Job displacement
4.2.5 Cognitive atrophy
4.3 Functional problems
4.3.1 Expectations
4.3.2 Input and output
4.3.3 Communication
4.3.4 Weak database
4.4 Navigation initiatives
4.4.1 Specific navigation
4.4.2 General navigation
4.5 Awareness
4.5.1 Team knowledge
4.5.2 Innovation adoption
4.6 Future navigation
4.6.1 Internal future navigation
4.6.2 External future navigation
5. Discussion
5.1 Navigation initiatives
5.2 Factors impacting navigation
5.2.1 Functional problems and ethical concerns
5.2.2 Awareness
5.3 Future navigation
6. Conclusion
6.1 Theoretical and practical contributions
6.2 Suggestions for future research
References
Appendix
Interview guide

1. Introduction

The introduction chapter consists of a background discussion, introducing the concept of Generative AI, the connection between Generative AI and startups, as well as the importance of integrating ethical and sustainable practices. The chapter further highlights the ethical risks of Generative AI, which ultimately leads to the purpose of the study and the research questions.

1.1 Background

In an era of rapid technological progress, Artificial Intelligence (AI) has emerged as a transformative force, progressively integrated into various facets of human life. This integration is propelled by Generative AI's ability to analyze large datasets, make informed decisions, adapt to dynamic environments and create new original content by learning patterns in data (World Economic Forum, 2024a). Although Generative AI is still mainly used for sound, images, text and code, it is transcending traditional boundaries, impacting industries, economies, and societies on a global scale. As the technology evolves, its capability will extend beyond mere automation to complex problem-solving, decision-making and predictive analytics (World Economic Forum, 2024a).

Looking forward, the significance of Generative AI is poised to grow exponentially. According to ComputerWorld (2024), investments in Generative AI are expected to double in 2024 and grow to 151.1 billion dollars by 2027. With advancements in machine learning, natural language processing, and robotics, Generative AI is not only expected to redefine industries, but is also going to reshape labor markets and influence societal structures. According to Business Sweden (2022), 50% of Swedish companies reported being in an advanced stage of AI maturity, while the European aggregate was 32%. Hence, Swedish companies can be considered among the most AI-mature in Europe. For many companies, leveraging Generative AI tools is a matter of efficiency and productivity, as it automates routine tasks and simplifies several operational activities (Brynjolfsson & McAfee, 2017).

In order to compete in this time of technological change, it is important to be at the forefront, especially for Swedish startups, which often are characterized by agility, innovation and disruption. Due to their smaller organizational structure, startups have an increased capability to manage and adapt to external changes. Adopting Generative AI and implementing it in their operational activities could therefore make them more competitive (Metrick & Yasuda, 2021).
The development of new business models, products, services and system solutions has been identified as one of the largest areas of potential regarding Sweden's AI capability (Vinnova, 2018). This makes Swedish startups important actors with great market opportunities.

Just like AI will be a relevant topic in the upcoming years, so will sustainability. In the Global Risks Report issued by the World Economic Forum (2024b), an increase in both short-term and long-term technological risks is presented. AI plays a central role in these technological risks as it can generate adverse outcomes and facilitate the spread of misinformation. In this world of increasing technological insecurities, businesses are confronted with an urgent imperative to incorporate ethical and sustainable strategies.

Social sustainability is a central part of corporate responsibility, which has become a key determinant of long-term success (Epstein & Buhovac, 2014). Businesses are increasingly recognizing the need to mitigate social impacts to ensure the well-being of the communities they operate in and to enhance their performance, mainly due to the growing awareness among consumers, investors, and regulators. Consequently, companies are compelled to adopt comprehensive sustainability frameworks, ethical sourcing and social equity practices. A central part of socially sustainable business activities and operations revolves around the ethical principles of transparency, fairness and accountability, since these serve as a foundation for fostering trust and stakeholder engagement (Schlegelmilch & Szőcs, 2020). Ethical considerations can be integrated into business strategic objectives, governance structures and operational practices through sustainable business models. By integrating ethical considerations into innovation processes, organizations can develop solutions that address societal challenges, create shared value for stakeholders and contribute to the achievement of the United Nations Sustainable Development Goals.

1.2 Problem discussion

Ethical innovation in business operations involves the development and implementation of new approaches, processes, and technologies that contribute to positive social outcomes while minimizing negative impacts on stakeholders and the environment. Generative AI is one of these innovative technologies that has the ability to drive sweeping change. It is of significant importance that businesses that both develop and use Generative AI act ethically and be responsible stewards (Ammanath, 2021). The main reason for this is the many ethical risks that are intertwined with Generative AI development and usage. Such ethical risks include privacy, security, fairness, transparency, safety and performance (Buehler et al., 2021).

Edquist et al. (2022) argue that the ethical use of AI is strongly related to data management. They define data ethics as data-related practices that seek to preserve the trust of customers and protect customer information. No matter how good a company's intentions are, there will still be traps to fall into regarding data management. Some of these traps concern thinking that data ethics does not apply to your particular company, that legal and compliance have data ethics covered, and that data scientists have all the answers. Additionally, Edquist et al. (2022) point out that although data ethics involves following regulatory requirements, it extends beyond mere legality. Businesses are therefore often compelled to make decisions before relevant laws are enacted.
Blackman (2023) suggests that the risks associated with new technology should be raised as either ethical risks or potential ethical nightmares, where violations of privacy, the spread of misinformation or the generation of unsuitable content for children are examples of ethical nightmares. Moreover, Blackman (2023) claims that preventing the ethical nightmares from becoming reality requires getting comfortable talking about ethics, letting the leaders express their worst fears, and then explaining how to prevent them from ever happening. Hence, ethics might be claimed to be the most important part of social sustainability regarding AI.

The strategic integration of sustainability is not only a moral imperative for businesses but also a means of securing resilience in evolving markets and regulatory landscapes (World Economic Forum, 2022). However, despite the compelling rationale for ethical business operations, organizations face numerous challenges in translating ethical principles into practice and achieving meaningful social impact (Kickul & Lyons, 2012). These may include conflicting stakeholder interests, resource constraints, regulatory compliance, supply chain complexity, and cultural barriers. Moreover, the lack of clear metrics and standards for assessing social sustainability performance poses challenges for companies seeking to measure, report, and benchmark their progress over time. As mentioned previously, the regulatory gaps within innovation make it even harder for businesses to navigate and act responsibly. Nevertheless, businesses do have a great responsibility to behave ethically even in evolving markets where there might be a temporary regulatory gap (Schlegelmilch & Szőcs, 2020).

The pressing requirement to quickly evaluate and adopt innovations that promote fairness, accountability, and transparency is, according to Solane & Zakrzewski (2022), particularly pronounced for AI startups, which often are viewed as vehicles for cultural innovation and economic advancement in European contexts. Despite facing regulatory demands, these startups lack the resources and expertise for capacity building within "AI ethics" that larger organizations have. In a landscape where regulations are evolving, startups find themselves navigating uncharted territory, balancing the pursuit of innovation with adherence to ethical and sustainable practices. This raises the questions of whether they are aware of the sustainability implications of their AI applications and whether they apply any navigational activities. The term navigation refers to the concrete things that are done and that startups rely on when addressing experienced ethical problems.

1.3 Purpose and research questions

Considering the significant increase in the usage of AI tools and the concerns regarding ethical usage and sustainable innovation adoption, the purpose of this thesis is to investigate in depth the ethical implications and challenges of implementing Generative AI tools in startups. Moreover, the thesis seeks to understand how entrepreneurs navigate and address these implications to improve the ethical performance of their startups. Hence, the outcome of this thesis aims to help startups identify which navigational practices they should implement to manage the ethical concerns.

With the given background and purpose, the following research questions have been formulated:

How do startups navigate ethically when using Generative AI tools in their business practices?
How do startups plan to develop their ethical navigation of Generative AI in the future? 10 2. Literature review The literature review starts with explaining the theoretical definitions of startups (2.1) and Generative AI (2.2). Thereafter, the different usage areas of Generative AI are discussed (2.3), followed by four ethical concerns regarding Generative AI (2.4). Finally, an ethical AI framework is presented (2.5), it works as a basis for the upcoming analysis and discussion about future navigation in chapter 5. 2.1 Startup definition According to Feld & Wise (2017), the term “startup” has significantly turned into a part of the popular lexicon in recent years and become prevalent in discussions about entrepreneurship and the establishment of new companies. Nowadays, it is realized as something distinct from a small business. Back in 2005, Luger & Koo researched and expressed three defining criteria that were utilized in the literature discussing startups. "New," "Active," and "Independent" are the mentioned criteria that are recommended to consider altogether when classifying businesses as startups. Steve Blank (2010) defined startups as follows: A startup is an entity established to identify a business model that is both repeatable and scalable. Feld & Wise (2017) had some reflections on Blank’s definition, stating that a startup does not stay a startup forever. It either goes out of business or figures out a solution that people want to buy. Also, the startup aims to discover, try out, and confirm an unfulfilled demand. At first, startups rely on assumptions, working to confirm them through changes until they become valid. When the business model is confirmed and the startup is self-sustainable, it is not considered a startup anymore. Skala (2019) describes startups as a new organization, lacking an operational history, explores an innovative business model in circumstances of high risk and sometimes unrecognized demand. The primary resource it relies on is the combination of the founders' knowledge, skills, experience, and social capital. At the core of the innovative business model is a groundbreaking product or service resulting from the application of knowledge and technology. Its revolutionary nature and expert implementation led to the establishment of a disruptive scenario in the market. This scenario presents an opportunity for extensive scaling of the business model, provided that barriers to demand are successfully addressed. Furthermore, a startup serves as an agent for innovation, particularly in the latest 11 advancements in science and technology. A startup helps the economies of developed countries, which are tired from the financial crisis, to recover and get a "fresh breath" (Skala, 2019). 2.2 Generative AI definition Tom Freston is acknowledged for expressing “Innovation is taking two things that exist and putting them together in a new way” (Feuerriegel et al., 2023, p.2). The assumption of creating and performing artistic, creative tasks such as writing poems, creating software, designing fashion, and composing songs by humans existed for a long time. However, this assumption has changed significantly with the recent development in artificial intelligence (AI), which can generate new content in ways that are not distinguishable from human craftsmanship (Feuerriegel et al., 2023). Generative AI is a kind of AI that can generate new content such as text, audio, images, and video in autonomous ways (Lv, 2023). 
Generative AI is considered a comprehensive term encompassing machine learning solutions that, after extensive data training, generate outputs based on users’ prompts (Sætra, 2023). Critical theoretical foundations of Generative AI are machine learning, natural language processing (NLP), image processing, and computer vision which help Generative AI to learn new content from massive amounts of data in order to produce different content based on various datasets. Furthermore, the application of these theories can contribute to the continual development of Generative AI and enable its utilization in diverse fields (Lv, 2023). Generative AI disrupted the world in 2022 with the emergence of multiple startups and tools such as DALL-E, MidJourney, and ChatGPT (Sætra, 2023; Mondal et al., 2023). ChatGPT with over a million users in five days became one of the most used platforms on the internet and it emphasizes the transformation of AI, shifting from being innocuous to invasive (Mondal et al., 2023). Mondal et al. (2023) suggested some ideas for interpretive, interactive, and immersive frameworks that Generative AI applications can work on and described them as follows: “Generative AI applications are interpretive, meaning that they interpret and execute code on the fly; interactive, meaning that they allow users to interact with them and provide feedback; and immersive, meaning that they can provide a rich and engaging experience that can make users feel like they are part of the application.” (p.11). 12 2.3 Generative AI usage Generative AI holds incredible positive potential, and there is no doubt that these technologies can enhance the lives of humans (Sætra, 2023). According to Davenport et al. (2022) Generative AI models demonstrate remarkable diversity, serving as universal content generators with numerous potential business applications. However, it is worthwhile to mention that effectively leveraging Generative AI necessitates human participation at both the beginning and the end of the process. Some of these applications are outlined below. AI is advancing with the rise of generative models, particularly foundation models that revolutionize application development. This innovation reduces development time, empowers non-technical users, and extends AI into realms traditionally reserved for humans, as seen in products like ChatGPT and Copilot. Generative AI enables computers to generate original content, sketches, and code, offering capabilities like analyzing large datasets (Mondal et al., 2023). Regarding coding, Ebert & Louridas (2023) investigated and found that Generative AI tools can increase productivity in software engineering. These tools can enhance software development by generating code, test case generation from requirements, re-establishing traceability, explaining code, updating old code, software maintenance with augmented guidance, and enhancing existing code. Moreover, Generative AI can simplify software development processes by automating tasks like testing, debugging, and deployment. Mondal et al. (2023) believed that Generative AI capabilities are user-friendly, making AI accessible even to those without extensive machine learning expertise, finally speeding up the development of new AI applications. The impact of technological progress on economic activity can be assessed across three domains: production, interactions, and transactions. 
From the Industrial Revolution to the era of AI, advancements in factory technologies have greatly enhanced manufacturing efficiency. Over time, technological changes have also influenced transaction processes. Previously, technological interventions in interaction labor were often seamlessly integrated into human behavior without much notice. However, this trend is evolving. The advent of Generative AI marks a transformative shift in customer service, offering the potential to significantly enhance operational efficiency through precise and timely task performance (Mondal et al., 2023). According to Chen et al. (2023), Generative AI has the potential to offer automated customer service solutions, which can assist businesses in boosting efficiency, cutting costs, 13 and improving customer satisfaction. For instance, organizations can employ Generative AI to comprehend customer needs, engage directly with customers, and customize marketing strategies. In the e-commerce sector, automated chatbots can be utilized by companies to quickly address customer inquiries, thus enhancing user experience and reducing expenses. Mondal et al., (2023) proposed these tools often demonstrate significant power and have the capacity to collaborate with humans, boosting their efficiency. The emergence of Generative AI marks the prospect of technology entering the domain of creativity, traditionally reserved for humans. Generative AI utilizes inputs and user experiences to generate original content. While debates about the impact of technology on creativity may persist, it is widely accepted that the use of Generative AI can contribute to the generation of fresh and innovative ideas. In addition, Lv (2023) mentioned ChatGPT, built on deep learning models, has the potential to significantly enhance the efficiency and quality of content production and dissemination across diverse contexts and needs. Beyond these advantages, ChatGPT aids in overcoming obstacles, enriching human understanding and creativity, and fostering valuable insights and innovations. Mondal et al. (2023) stated that the potential applications of these models in business practices are numerous, although they are still in the early stages of development. Emerging applications are being designed to operate across various functions. A primary application is in marketing and sales, where the focus is on generating and implementing personalized content and creating assistants designed to work with specific businesses. According to Feuerriegel et al. (2023), Generative AI is anticipated to automate the development of customized marketing materials, such as creating different sales slogans tailored to different personality types like introverts and extroverts. This personalized marketing content is more effective than using a one-content-fits-all approach. In areas related to risk and law, they contribute to tasks such as reviewing reports, drafting legal documents, and addressing complex questions (Mondal et al., 2023). In addition, Gupta & Yang (2024) researched and expressed that entrepreneurs can ethically harness Generative AI, such as ChatGPT, to autonomously produce detailed market research information in response to queries. ChatGPT can revolutionize entrepreneurship education by empowering student entrepreneurs to enhance creativity in idea generation, business planning, model creation, and customer interviews. 
14 2.4 Generative AI ethical concerns While Generative AI has great creative and productive potentials, there are some potential pitfalls which are created by Generative AI that academics and others have warned about (Sætra, 2023). In order to better understand, different concerns are distinguished and categorized through four ethical concerns that are addressed in the following sections: 2.4.1 Data collection and bias According to Cheatham et al. (2019), the accessibility to data has become incredibly large during the last two decades, and it only continues to expand. As it on the one hand unlocks endless possibilities for businesses, it also unlocks the potential of pitfalls. Cheatham et al. (2019) describe that these great volumes of unstructured data make it hard to navigate and easy to accidentally end up using or exposing sensitive information. As Generative AI tools constantly learn and generate new output, sensitive data can easily be accessible to all users if it gets inserted in the tool. Therefore, working in compliance with privacy rules such as the European Union’s General Data Protection Regulation (GDPR) is extra important when using AI tools. A more specific risk related to data collection and Generative AI usage is bias. As described by Ammanath (2022) bias is a natural human feature, a cognitive rule of thumb which influences human behavior and makes it easier for the brain to make decisions. However, biases are far from objective and can be problematic if they occur in human developed technology, such as AI. AI itself is free from most of our human biases but not the input data. AI bias either refers to the difference between the predicted and actual output values from the training data or the data reflects bias that is embedded in our society. These biases are often occurring with data containing characteristics such as gender or socioeconomic status. To get equal outcomes, it is needed that biases are understood and addressed. Biased and discriminatory systems have negative impacts on certain groups (Sætra, 2023). Feuerriegel et al. (2023) expressed that everyday human-made content is filled with societal biases. When deep learning models are trained on biased data, they may amplify these biases, reproduce harmful language, or reinforce stereotypes related to gender, sexual orientation, political views, or religion. Generative AI tends to reinforce the status quo. Relying on historical data, these models may resist desired societal changes and perpetuate biases and discrimination present in human history (Sætra, 2023). In addition, Fui-Hoon Nah et al. 15 (2023) mentioned a monolingual bias emerging in multilingual contexts when the training data exclusively represents one language. In this regard, Sætra (2023) researched and presented that historically marginalized groups face enhanced discrimination based on biased data. Additionally, these groups are often underrepresented among those developing and controlling the systems, creating potential digital divides. Monetization strategies for Generative AI systems could strengthen these divides, not only within developed nations but also between nations with varying internet and computing infrastructure, limiting access to tools like ChatGPT even in its "free" version for many individuals. Fui-Hoon Nah et al. (2023) discussed individuals from marginalized or minority cultures may encounter difficulties due to language and cultural barriers if Generative AI models do not fully understand or integrate their cultures. 
Ammanath (2022) presented five leading practices aiming to develop and deploy AI systems that are free from bias, fair and impartial, delivering value while upholding ethical standards. The first practice includes building diverse business teams to foster collective reasoning and fairer decisions. Additionally, it is essential that benchmarks for evaluating fairness and facilitating diversity are incorporated into each stage of the AI lifecycle. The second practice is about oversampling if the data is skewed, weighting some data more if the inputs are unequally represented, or using synthetic data to compensate for missing samples. The purpose of these adjustments is to balance the datasets and ensure representativeness. However, the alterations should be made with precaution to not accidentally introduce new biases by overcompensating. The following practice is to probe data through exploratory analysis. The analysis will discover distinguishing correlations in the data which helps detect bias in the dataset. The next practice encourages the data team to engage in training and education where they reflect on their own thinking and identify biases that could influence AI fairness. The final practice is to establish feedback mechanisms for evaluating performance. By exploring feedback from a variety of end users the fairness and quality of the service can be assessed. Finally, Ammanath (2022) concluded that when the problematic biases in datasets are understood, identified, and wiped out, it is possible to create and contribute to a more fair and equal society. 2.4.2 Trustworthiness According to Siau & Wang (2019), trust holds significant importance in interpersonal relationships, human-technology interactions, and other relationships. It plays a crucial role in 16 the adoption of new technology. Despite AI's higher speed and capacity of processing compared to humans, it is not always competent, fair, neutral, and manageable. The risks related to AI have raised concerns regarding whether and to what extent it should be trusted. Ammanath (2022) added that each AI tool and its application should be considered independently based on its merits. Being trustworthy doesn't mean that every aspect of trust is fulfilled. The conversations, approaches, and tactics for promoting trustworthy AI vary depending on the organization, its specific functions, and the use case of the tool. What is necessary to one may be irrelevant to another. Feuerriegel et al. (2023) expressed that Generative AI models have the potential to generate output with errors, due to their reliance on probabilistic algorithms for making inferences. Consequently, challenges arise as the outputs may be indistinguishable from authentic content, potentially providing misinformation or deceiving users. In large language models (LLMs), this phenomenon is called hallucination, which refers to mistakes in the generated text that are semantically or syntactically plausible but are actually nonsensical or incorrect. Fui-Hoon Nah et al. (2023) discussed that fabricating information or fabrication is a more fitting term to describe the hallucination phenomenon. Generative AI has the capability to produce responses that appear correct but lack sense. Hallucinations can lead to the publication of misinformation. Generative AI models might produce fictitious information, fake photos or information with factual errors. According to Feuerriegel et al. (2023), verifying the output of Generative AI, particularly LLMs, is usually not easy. 
The accuracy of Generative AI models relies heavily on the quality of the training data and the learning process. Generative AI systems and applications can implement correctness checks to prevent specific outputs. However, because of the black-box nature of advanced AI models, the usage of these systems heavily relies on users' trust in the accuracy of outputs (Feuerriegel et al., 2023), it is harder for people to trust the things that are not understandable and controllable for them (Siau & Wang, 2019). Finally, Ammanath (2022) stated that despite the significant power and influence of today's AI capabilities, the technological revolution that will change the world is still in the beginning stage. The nascent stages of widespread AI adoption cause the lack of prescription for ensuring the reliability and trustworthiness of emerging tools. In the face of innovation, each business must determine independently which aspects of trust are important and how to empower individuals, refine processes, and advance technology. 17 2.4.3 Job displacement and power shift Technological advancement has always been a significant aspect of human development, but the current pace of change may be different (Siau & Wang, 2019). Sian & Wang (2019) discussed human labor was substituted with various energy sources over time: from animal power to steam power, and then electrical power, and now, another significant shift is going to occur, one that replaces the need for human cognitive abilities, not just physical power. AI could potentially create a ‘useless class’ of humans — individuals who not only be unemployed but also unemployable. In this regard, Chen et al. (2023) added Generative AI tools pose a threat to labor markets, particularly in developing nations. Likely, numerous simple and repetitive tasks will soon be carried out by AI instead of humans. Specifically, chatbot applications and other AI tools may replace numerous female-dominated occupations, such as customer service positions. However, this societal shift will increase the need for highly skilled workers to deploy and develop AI technologies. Siau & Wang (2019) expressed that despite AI taking over tasks that were just exclusive to humans and replacing them in numerous jobs, there's a belief that new employment opportunities or collaborative ventures between humans and robots will emerge. At least, jobs for humans will not simply disappear. Also, Sætra (2023) expanded the job displacement issue to Generative AI and presented that Generative AI transforms professions, changing power dynamics among professions, employers, and employees. For instance, copywriters may face changes when Generative AI systems like Large Language Models (LLMs) can create content for news and advertising. Professions always change with technological change. In this regard, Fui-Hoon Nah et al. (2023) mentioned that AI has the potential to displace jobs and reshape the labor market through a new division of labor between humans and algorithms. This shift may cause certain human-performed tasks obsolete, resulting in job losses being replaced by algorithms. On the other hand, the application of Generative AI has the capacity to generate new job opportunities across different industries. Zarifhonarvar (2023) conducted research in this area and discovered that 32.8% of occupations may face a Full Impact, 36.5% may encounter a Partial Impact, and 30.7% may see No Impact. 
This indicates that as AI becomes more widely adopted, the labor market will likely undergo substantial transformations. While integrating AI technologies such as 18 ChatGPT can enhance productivity and efficiency, it may also result in job displacement and unemployment for some workers. 2.4.4 Cognitive atrophy Sætra (2023) discussed that concerns arise about potential cognitive atrophy when relying on AI for mentally challenging tasks, possibly hindering individuals' own abilities in the long run. Similar to calculators affecting mental arithmetic skills, ChatGPT might detrimentally impact users' writing skills. Additionally, some worry that Generative AI designed for human interaction will become more skillful in persuasion, potentially crossing into the realm of manipulation. In this regard, Fui-Hoon Nah et al. (2023) mentioned that the convenience and power of ChatGPT might lead users to over-reliance on its answers and trust them. Unlike traditional search engines that offer diverse information sources for users to make informed decisions, ChatGPT generates specific responses for each prompt, potentially hindering critical thinking, creativity, and problem-solving. This reliance may contribute to human automation bias, as users habitually accept Generative AI recommendations without rationalization. 2.5 Ethical AI framework According to Ammanath (2021), a way to ensure ethical practices and navigate in the currently unregulated area is to adopt a sound framework, helping and preparing businesses to respond to unexpected impacts. Such a framework would need to offer something entirely new, covering the whole team and cutting across disciplines. Just like Ammanath (2021), Edquist et al. (2022) emphasized the adoption of a data ethics framework as it, compared to laws, can guide the business in strategic decisions and implementations. However, the incorporation of data ethics takes time and might even lead the business to turn down potential opportunities which could generate short-term revenues. However, businesses that fail to do the necessary work and comply with data ethics risk losing their stakeholders’ trust and destroying business value. Edquist et al. (2022) presented seven principles related to ethical data use, which can serve as a valuable initiative. The principles are: Set company-specific rules for data usage. A framework reflecting a shared vision and mission for the company’s use of data should be put together by the leaders of the business unit, functional area and legal and compliance. The executive leaders should be involved in 19 defining data rules to give employees a clear sense of the company’s threshold for risk. The framework should be adopted to the specific industry and to the offered products. It should be accessible to all employees as such rules can improve and potentially speed up individual and organizational decision making. Business leaders should plan to revisit and revise the rules periodically to account for shifts in the business and technology landscape. Communicate your data values, both inside and outside your organization. The established common data usage rules need to be communicated effectively inside and outside the organization. Ensuring that they reach out to all employees and explain to them on a proper level that is connected to their specific role in the company and the tasks they perform. Additionally, publicizing the company’s data ethics framework will be needed in order to earn the public’s trust. Build a diverse data-focused team. 
Both large and small companies need people who focus on ethics issues to ensure that it does not become a side activity. It can be done by appointing a chief ethics officer or setting up an interdisciplinary team. Furthermore, there can be great value in bringing in outside experts or a legal council to navigate particularly difficult ethical challenges as well as ensuring the correct application of existing and emerging regulations. Engage champions in the C-suite. Even though the CEO and the corporate board are not involved directly in the decision-making process, it is important to keep them informed of decisions and activities. Having a champion in the C-suite can signal the importance of data ethics to the rest of the company. Consider the impact of your algorithms and overall data use. Companies should consistently evaluate the impact of their algorithms and data usage, while also proactively testing for biases at every stage of the value chain. This entails acknowledging and addressing potential issues that organizations may inadvertently introduce during the development and usage of AI products. It is crucial to not only scrutinize the types of data being utilized but also to carefully consider their current and future applications. Think globally. The ethical utilization of data necessitates organizations to consider the interests of individuals who may not have a direct voice in decision-making processes. In essence, leaders must adopt a holistic perspective of their companies within the digital 20 economy, the broader data ecosystem, and societies worldwide. This may involve exploring avenues to support policy endeavors aimed at narrowing the digital divide, enhancing broadband infrastructure accessibility, and fostering diversity within the technology sector. Ultimately, addressing data ethics entails leaders confronting the escalating global disparities and the concentration of wealth and value, both within prominent technological hubs and among organizations leveraging AI technologies. Embed your data principles in your operations. Defining what constitutes ethical data usage and establishing data usage guidelines is one aspect. However, integrating these guidelines into organizational operations is another challenge altogether. Whether leaders choose to prioritize a specific department or aspect of the organization, they should identify KPIs to track and measure progress toward achieving data ethics objectives. Furthermore, to foster a culture where ethical data usage is ingrained in every employee's daily tasks, the leadership team must actively promote, support the development of and facilitate formal training programs on data ethics. According to both Ammanath (2021) and Blackman (2023) the involvement of executive leaders is one of the most important principles in data ethics by far. All employees should be having data ethics as their domain but since they most likely are not experts within that field, the executive leaders should be the ones bearing the greater responsibility for ensuring ethical use and application within the operations. 21 3. Methodology The methodology chapter includes a detailed description of the research approach of the thesis. It goes through the type of research which was conducted, how data was collected and analyzed, the considerations taken to ensure a high quality of the research and ensuring the approach to be ethical. Lastly, the limitations regarding the methodology are presented. 
3.1 Research approach This thesis aims to figure out the ethical navigation surrounding startups’ use of Generative AI. To address the thesis question, it is necessary to investigate both the ethical concerns and the navigation initiatives adopted by startups. AI tools are still new in the world and there is a lack of knowledge regarding the efficient usage of these tools. Therefore, based on the exploratory nature of this research, an inductive strategy was chosen in order to contribute to new knowledge within this field with the help of empirical findings and academic theory. As Bell et al. (2019) expressed by the inductive approach, a theory is the result of the research, and the induction processes generate generalizable inferences out of observation. Since the inductive strategy was chosen, the proper research method is a qualitative method that entails sequential steps. Qualitative research usually demonstrates an approach in which theory and concepts emerge through the collection and analysis of data (Bell et al. 2019). Therefore, based on the in-depth nature of this research, association with qualitative studies was beneficial. Taking a qualitative stance on this subject presents the opportunity to deeply comprehend it, especially considering that AI tools are a novel phenomena in both practical and academic contexts. This requires using an exploratory method often linked with qualitative studies and their detailed capabilities. According to Patel & Davidsson (2011), choosing a qualitative approach could provide a deeper understanding of the subject. 3.2 Research design As anticipated and discussed, the objective of this research is to explore the ethical issues surrounding Generative AI when utilized by startups and how entrepreneurs respond to these concerns. Consequently, this paper was designed based on multiple-case study. The research looked for various insights into how startup individuals utilize Generative AI tools and 22 interpret their ethical implications, as well as how they manage their use to mitigate potential risks through 15 cases. After collecting data and testimonies from various startups and users, a comparative approach was taken to identify similarities and differences in the responses. This approach also aimed to identify common ideas on how entrepreneurs should manage Generative AI usage to improve efficiency and reduce risks for social sustainability in the future. Since the research aims to compare the ethical implications and navigations of using Generative AI, leveraging a comparative design allows the distinguishing characteristics of two or more startups and theoretical reflections based on contrasting findings (Bell et al., 2019). 3.3 Data collection The data collection consisted of both primary and secondary data. Starting with the primary data collection, the selection of startups was made through snowball sampling, which is a form of convenience sampling. The approach was chosen based on practical considerations since the approach enables an ease of access to identified sources, a quick data collection and access to respondents through their acquaintances and networks, as one contact led to another (Bell et al. 2019). Given the newness, expansive growth and applicability of Generative AI, it was determined to search startups beyond specific industries. Therefore, startups from various sectors were targeted and the main criteria was that the startups utilized Generative AI at recurring times in any part of their operations. 
Choosing not to restrict the Generative AI usage to a specific activity or part of the operations was a strategic decision, made in order not to limit access to startups and empirical data. Additionally, the research will thereby be able to help a larger number of startups in their ethical navigation regarding Generative AI usage. The startups were selected through our network, incubators in Gothenburg, AI Sweden's startup map and through Vinnova's grant applications.

When booking the interviews, the aim was to have as many of them as possible in person. However, due to the respondents' geographical location, time constraints or illness, some interviews were conducted online. All interviews were conducted between 5 March 2024 and 5 April 2024 and lasted between 30 and 60 minutes each. All interviews were conducted in English, which ensured consistency in the responses and prevented misinterpretation from translations. We collected data from 15 startups, chosen for their ability to provide informed and reliable information on the utilization of Generative AI tools in their business practices. Two startups were based in Stockholm, one in Luleå and the rest were Gothenburg based. The platforms used for the online meetings were Google Meet and Microsoft Teams. The total number of interviews held in person was eight and the number of interviews held online was seven.

The interviews had a semi-structured design, as the intention was to capture the startup perspective on using Generative AI. The interview guide was prepared before the interviews and followed four broader themes. The questions were derived from the literature review and the main topics of the thesis (see interview guide in the appendix). Due to the semi-structured design, the interview guide was followed during the interview and there was room left for follow-up questions. Asking follow-up questions enabled us to receive detailed information and concrete examples of actual events that had happened within the startups. Furthermore, it should be mentioned that the following categories of ethical concerns were predefined based on findings from the literature: data collection and privacy, bias, trustworthiness, job displacement and cognitive atrophy. Hence, they were not a discovery made from the empirical data.

In order to receive data that was specific, truthful and gave a correct view of the ethical navigation, the respondents were informed prior to the interviews that neither their names nor the startups' names would be published in the thesis. Additionally, all interviews except for one were recorded with permission from the respondents. The recordings give the opportunity to re-listen to the interviews, which strengthens and assures the quality and objectivity during the data analysis (Saunders et al., 2009). During the interview for which permission to record was not granted, detailed notes were taken instead.

Table 1: Interview respondents

Interview number | Respondents | Position | Industry | Company age
1 | Male | Co-Founder | IT | 3 yrs.
2 | Male & Female | Frontend developer & Business developer | IT | 3 yrs.
3 | Male | Co-Founder & Business developer | IT | 4 yrs.
4 | Female | Co-Founder & CEO | Coaching | 6 yrs.
5 | Male | CEO | Medical Research | 4 yrs.
6 | Male | Co-Founder & CTO | IT & Healthcare | 2 yrs.
7 | Male | Co-Founder | Consulting | 2 yrs.
8 | Male | Co-Founder & CEO | Food Tech | 3 yrs.
9 | Male | Founder & CEO | IT | 7 yrs.
10 | Male | Machine learning engineer | Finance | 4 yrs.
11 | Female | CPO | IT & Consulting | 4 yrs.
12 | Females & Male | Founder, Sales Manager, Software Engineer & Marketing Manager | IT | 4 yrs.
13 | Male | Product Marketing Manager | IT | 6 yrs.
14 | Male | Co-Founder | IT & Consulting | 7 yrs.
15 | Male | Co-Founder & CEO | IT & Finance | 1 yr.

The secondary data collection was made through the collection of information from articles and web pages. Employing multiple methods or sources to gather data is known as triangulation, which is aimed at increasing the validity and reliability of the data (Bell et al., 2019). Moreover, the approach allows researchers to cross-check the collected data from different perspectives, strengthening the credibility and depth of their findings. When conducting a case study, triangulating various sources of data is helpful as it can ensure robust validity and reliability, thus enhancing the overall quality of the research process (Bell et al., 2019).

3.4 Analysis of data

All recordings were transcribed shortly after each interview. This work was done continuously during the interview period to ensure that the collected material was still fresh in mind. The decision to record the respondents made it possible to be present during the interviews and focus on collecting good data (Bell et al., 2019). Moreover, the transcribed data was beneficial both because direct quotes could be added to the results section of the thesis and because it made the thematic analysis that followed more efficient and accurate.

According to Braun & Clarke (2006), the process of thematic analysis includes six steps: familiarization, generation of codes, theme identification, theme review, naming and defining themes, and telling the story of the empirical data. Nevertheless, it should be brought up that thematic analysis has been criticized for lacking a clear and specific definition (Bell et al., 2019). In the case of this study, the approach was considered a convenient way of processing the empirical data, as it enabled a distinct and structured way of organizing the collected data and identifying themes and patterns in it. The thematic analysis was made in accordance with the six-step process by Braun & Clarke (2006). Firstly, the empirical data was read and transcribed. Secondly, the dataset was coded in different colors by looking for specific questions or keywords; this step was done jointly to ensure that the coding was performed identically for all data. Thereafter, all codes were grouped into themes; an example to illustrate this step was that answers including "guidelines", "double checking" or "trial and correction" were grouped under the theme "navigation initiatives". The themes were aligned with the literature review, and correlations and links were discovered in the empirical data. At last, the themes were used as headlines to structure the data and create a narrative of the findings.

3.5 Assessing research quality

Bell et al. (2019) suggested that reliability and validity are important factors for determining the quality of business research. However, there has been debate among qualitative researchers about how relevant these factors are. Some authors believe that qualitative studies should be assessed using different criteria than those used for quantitative research. Lincoln and Guba (1985) and Guba and Lincoln (1994) proposed alternative ways to establish and evaluate the quality of qualitative research, focusing on trustworthiness and authenticity. In this research, the classification of quality criteria by Guba and Lincoln was taken into account, as it was seen to be more aligned with the current qualitative research.
Therefore, it's suitable to continue investigating these concepts, based on the studies by Guba & Lincoln (1985, 1994). On the one hand, trustworthiness consists of four sub-criteria: Credibility: Ensuring the credibility of the results involves adhering to the "canons of good practice" in conducting the research. This means ensuring that the research accurately represents the real world and demonstrates the researcher's understanding of the phenomena under study. Additionally, establishing the credibility of findings involves having the research findings reviewed by individuals from the social world who were studied to confirm the researcher's understanding. This technique is often referred to as respondent validation or member validation.For this paper, respondent validation was considered and all respondents received the first draft of paper to review their responses in order to ensure the highest possible quality. Also, both researchers attended interviews and the thematic analysis and coding were done by both of them to prevent personal interpretation and misunderstanding. Transferability: Qualitative researchers are encouraged to generate results that can be applied and generalized to other situations. Since qualitative research often involves in-depth examination of a small group or individuals with specific traits, the findings typically focus on the unique context and importance of the social aspect being studied. As stated by Guba and Lincoln (1985, 1994), the decision of whether findings are applicable in another context, or even in the same context at a different time, is an empirical matter. In the case of the present research, the explorative approach was taken to investigate different aspects related to the concept of Generative AI (usage, awareness, general problems, ethical concerns, and navigation) in different time horizons (present and future usage and navigation), and in different startups and industries were explored in depth. The collection of various perspectives through interviews and multiple case studies allows for the potential of partial transferability of data, making it more generalizable than data obtained through a single case study or a more focused approach. Therefore, the results are expected to be transferable to all Swedish startups aged between 1 to 10 years, and potentially to other startups with close age, usage, and perspectives. Dependability: Dependability entails implementing an 'auditing' approach, which involves maintaining comprehensive records of all stages of the research process—such as problem 27 formulation, participant selection, fieldwork notes, interview transcripts, and data analysis decisions—in an accessible manner. Peers then serve as auditors, potentially during the research process and certainly at the end, to assess the extent to which proper procedures have been followed. In the specific case of this study, all interviews were recorded to produce transcripts that were subsequently analyzed in greater detail, allowing for easier thematic analysis and coding. Additionally, regular follow-ups and frequent update meetings were conducted with the supervisor of this paper from the University of Gothenburg. This facilitated ongoing suggestions, guidance, and clarification throughout the research process. Confirmability: Confirmability aims to ensure that, while acknowledging the impossibility of achieving total objectivity in business research, the researcher's actions demonstrate good faith. 
In other words, it should be evident that personal values or theoretical biases have not overtly or manifestly influenced the execution of the research and findings deriving from it. Throughout each stage of the research and collection of data, both researchers were present to review each other's work. Additionally, data collection via interviews and thematic analysis were conducted by both researchers to avoid any personal interpretations or alterations. In addition to the four criteria discussed above, a fifth criterion of authenticity is proposed by Lincoln and Guba (1985, 1994). This criterion brings up concerns about the broader social and political effects of research. Responsibility for authenticity is placed on the researcher to fairly represent various perspectives within a social context, allowing research participants to gain a deeper understanding of their situations and empowering them to take action to improve their circumstances. As mentioned earlier, before the interviews began, the respondents were informed about the purpose of the research and the backgrounds of the authors. They could ask questions if they needed more information, which helped avoid any confusion or biases at the start of the interviews. 3.6 Ethical considerations In order to enhance the research quality, ethical principles were constantly borne in mind, especially during the data collection process. As proposed by Bell et al. (2019), the idea of voluntary informed consent means making sure people know a lot about a study before they decide if they want to join it. They should have enough information to choose whether or not to take part. In this regard, a short introduction of the thesis authors, the purpose of the study, how long the interview would take, and anonymous treatment of the gathered data were 28 provided to the respondents through emails. Before the interview started, the respondents were once more informed that all data would be treated anonymously, and their consent was obtained to record the interview; all 15 respondents except one consented to that. The recording began after receiving the consent. For the interview that could not be recorded, notes were taken instead. Furthermore, it was communicated to the respondents that the recordings would be deleted after the interviews had been transcribed. Following the completion of data collection and analysis, the respondents were emailed the first draft of the thesis along with their individual statements and quotes, thereby allowing them the opportunity to review and confirm their responses. This practice, referred to as respondent validation as recommended by Bell et al. (2019), served to further enhance the quality of the study. This ethical measure was considered essential as a significant portion of the empirical findings depends on the personal perspectives of the respondents. 3.7 Limitations A few limitations are found in the current research, mostly concerning how data is collected and analyzed. Firstly, it should be noted that, as typical in qualitative research, full generalization of the results may not be feasible (Bell et al., 2019). The research focused on startups located in Sweden, particularly in Gothenburg, Stockholm, and Luleå due to ease of access and time efficiency. This may limit the generalizability of the findings to startups in other countries due to the unavailability of information regarding the usage of Generative AI tools and perspectives of startups in those countries. 
It cannot be guaranteed that the results can be expanded and applied to other types of companies in Sweden that fall outside the range of startups aged between 1 and 10 years. Another significant constraint of this study is that startups are still in the early stages of adopting Generative AI. Consequently, they are at varying levels of implementation, resulting in differing usage, quantities, and viewpoints among them. Another limitation arose from the diverse backgrounds of some respondents, resulting in challenges in accurately transcribing and understanding their responses due to varying accents. Moreover, one of the respondents did not consent to record the interview, which affected the quality and quantity of data collection.

4. Results

The results chapter provides a comprehensive view of the empirical data that was collected. In order to understand the underlying factors of ethical navigation in startups, the chapter starts with an overview of the respondents' Generative AI usage (4.1). Thereafter, the respondents' experiences of the predetermined ethical concerns are highlighted (4.2), followed by the empirical findings on functional problems (4.3), navigation initiatives (4.4), awareness (4.5) and future navigation (4.6). Direct quotes are referenced in this chapter with the number of the respondent. See table 1 in chapter 3.3 for more information about the respondents.

4.1 Generative AI usage in startups

A criterion for the selected startups was that Generative AI should be used to facilitate their business activities and that the usage should be a recurring activity within the operations. Hence, the extent to which Generative AI was used varied between the startups. For the majority of the startups, Generative AI was incorporated in their daily or weekly activities. The majority of the respondents also highlighted that the amount of Generative AI usage in back-end activities could differ a lot between colleagues within the business and that it was up to everyone to decide whether to include it as a working method or not. The same goes for which tools to use; it is up to every employee.

As seen in table 2 below, coding and content creation were the most common business activities where Generative AI was used. Other activities were brainstorming ideas, business development such as strategy making and goal formulation, business solution, learning activities to increase the average knowledge about unknown topics or tasks, legal advice and summarizing information or documents. The types of tools that are used to perform the activities in the startups are different kinds of LLMs, code completion tools, image generating tools and productivity tools. The most commonly used LLM was Chat GPT, which was used in every startup, and Copilot was the most commonly used tool for code completion.

Table 2: Activities and tools

Interview number | The activities in which Generative AI is used | Generative AI tools in use
1 | Coding, Content creation, Brainstorming & Business development | Chat GPT, Gemini & Training an API
2 | Coding, Content creation & Business development | Chat GPT, GitHub Copilot & Notion
3 | Coding, Content creation & Legal advice | Chat GPT
4 | Coding & Content creation | Chat GPT & Unknown tool for coding
5 | Coding, Content creation, Business solution, Learning activities & Summarizing information | Chat GPT & Unknown tool for coding
6 | Coding, Brainstorming, Business development & Learning activities | Chat GPT & GitHub Copilot
7 | Coding, Content creation & Business solution | Chat GPT
8 | Content creation, Brainstorming, Business development & Learning activities | Chat GPT
9 | Coding & Content creation | Chat GPT & Unknown tool for coding
10 | Business solution & Summarizing information | Chat GPT, Llama, PaLM & Open AI's API
11 | Business solution & Summarizing information | Chat GPT, Llama & Mistral AI
12 | Coding, Content creation, Brainstorming & Learning activities | Chat GPT & GitHub Copilot
13 | Content creation, Brainstorming & Learning activities | Chat GPT, DALL-E & Notion
14 | Coding, Brainstorming, Business solution & Legal advice | Chat GPT, GitHub Copilot & Self developed LLMs
15 | Business solution | Chat GPT & Self developed financial copilot

As observed, the way that the startups use Generative AI tools is connected to which ethical concerns the startups encountered and dealt with. Several startups used coding tools, and therefore they reported that they did not address bias since the data consisted of objective numbers and signs.

4.2 Ethical concerns

The “Ethical concerns” that startups encountered and dealt with were essential to discover in order to understand how they ethically navigate their usage of Generative AI tools. The respondents brought up the ethical risks which either were embedded in the Generative AI tools or were a consequence of their use. The collected concerns consist of five categories: data collection and privacy, bias, trustworthiness, job displacement and cognitive atrophy.

4.2.1 Data collection and privacy

“Data collection and privacy” refers to the social concerns of how the tools treat input information, the awareness of the problem and how the startups deal with the concerns in their operations. When raising the broader question of whether the startups had encountered any social concerns, the majority of the respondents expressed concerns about data privacy and the potential risks associated with sharing sensitive information with Generative AI tools. They are cautious about sharing confidential or proprietary data, especially when using open API services like OpenAI's GPT. Respondent 9 described that “Chat GPT will leak information very weirdly”. Similarly, respondent 11 expressed that “Chat GPT started spitting out sensitive information when I gave it prompts that requested it”. To avoid any risks, the respondents are actively avoiding or selecting certain types of information. Respondent 8 handled it in the following way: “I've been less inclined on typing in anything that can involve trade secrets”. When it comes to using information about clients, several respondents handled it in the same way as respondent 7, who stated: ”We've only been using information that we can scrape from the companies' websites. Which means that everything that they openly say is something that we can use”. However, some startups avoid using customer data at all. Respondent 4 is one example: “We are very strict on what we share, so we are not making any customer data available or using that”.
In terms of data security several respondents expressed uncertainty about how data provided to AI models, such as ChatGPT or OpenAI's API, is handled. Additionally, there are concerns about the potential misuse or unauthorized access to data by Generative AI service providers. “I don't know where it actually ends up, or who gains access to it. So those vital things for the business, I never put in” - Respondent 8. Additionally as expressed by respondent 10, it is not only the startups who are concerned: When our clients come to reach out to us the major problem is about data privacy. Because they knew our system or our model is heavily relied on like open API. Their major concern is like "what if the data we sent to you and finally sent to Open AI?” Despite that, some other respondents expressed more confidence in the security measures implemented by AI service providers. Respondent 6 reasoned like this “For Chat GPT I'm buying this $20.00 per month account so that they do not save my data. Similarly with the GitHub copilot, I mean I can rely that they are not saving my code for themselves”. Furthermore, some respondents raised questions about ownership and control over the data used by Generative AI models. They expressed concerns about the lack of transparency regarding data ownership and how data provided to AI models may be used or stored by service providers. There's some concerns for instance, you use open AI and maybe you upload certain information that could be sensitive from a company point of view or a personal point 33 of view as well. And I think it's not very transparent what actually happens with that data and that information that you provide to the LLM - Respondent 15. Respondent 10 had a similar concern: We cannot predict how they will use our data or whether they just drop our data or just remain our data. The only thing we can do is to convert to another kind of local model instead of an API such as downloading some model to our own system and training it. Finally it should be mentioned that it is not always easy for the startups to know what to do in every situation when it comes to data collection and privacy. Respondent 13 communicated that “Knowing when to be careful can be a bit tricky”. 4.2.2 Bias “Bias” is referring to the social concerns of how the startups avoid and deal with subjective outputs. Biases are connected to the input data or to the data which a Generative AI model or tool is trained on. Respondents across multiple responses emphasized the importance of being aware of bias in data collection and management processes. They acknowledged that complete elimination of bias might be impossible but stressed the need to remain vigilant and mindful of potential biases as they are a part of the data handling process. Respondent 11 expressed it in the following way: “No one can claim that the dataset is non biased or completely free from bias. That is impossible, but we try to be aware”. Similarly respondent 6 said “All AI systems will remain biased, because there is a bias in the society and as an AI company, the best you can do is to document that bias”. Respondent 6 also highlighted that it is necessary to let your customers know if any part of your product or service is biased or can create a biased outcome. In that case the transparency aspect is fulfilled and it is up to everyone to decide how to use or trust the offering. Other respondents expressed a need for thorough evaluation and consideration of biases in data collection and model training processes. 
Several respondents mentioned that the nature of their data, such as financial analysis or code, tends to be objective, limiting the potential for bias in their applications of Generative AI tools. One of them was respondent 10: “Since the data we mainly use is about financial analysis and tabular data, I think it would somehow be very objective”. These startups tended not to have any particular structures in place to handle potential bias in the data.

Overall, the respondents expressed varying degrees of confidence in their ability to manage bias effectively. Some mentioned efforts to curate credible data sources, such as companies' websites, to minimize bias, while others acknowledged the inherent challenges in completely eliminating bias, especially in subjective domains. Accordingly, respondent 15 described their approach to addressing bias:

Right now we're working on a list that we actually created ourselves with sources and so forth that we believe is relevant and credible. But of course if you put together a list yourself based on your opinion, it's going to be biased and that's inherent.

Some other respondents adopted a more passive approach, still acknowledging bias as an inherent challenge but providing fewer or no concrete measures to address it. According to respondent 13, their startup addressed it in this manner:

We don't have a check for it. It's more important for us that we do quality output that is relevant to our customers and that they feel is important, rather than it being unbiased. Given we're a company that is doing something specific for this industry, that we think is going to move the needle for our customers. I think you know there is a strong bias already there.

Ultimately, it is important to note that the respondents demonstrated differences in their ability to recognize bias in data collection processes. While some respondents expressed confidence in their ability to identify and manage bias, others indicated uncertainty or a lack of awareness regarding potential biases in their data.

4.2.3 Trustworthiness

“Trustworthiness” refers to the extent to which the startups trust and rely on the Generative AI tools or the output they create. The respondents' perspectives revealed several common views and similarities. Across the responses, there was a prevailing theme of cautious skepticism regarding the reliability and trustworthiness of Generative AI output. This sentiment was particularly pronounced among respondents who emphasized the importance of human oversight and critical evaluation when using Generative AI tools. However, there are many nuances in the skepticism: while some respondents expressed confidence in certain applications of Generative AI, others were more cautious and critical. Starting off, respondent 1 highlighted a problematic social aspect of Generative AI, cautioning against blind trust in its output and emphasizing the need for human intervention and critical evaluation. Moreover, he described using Generative AI more as a brainstorming partner than a source of definitive answers.

There is a problematic social aspect of it is that we communicate with AI:s in a way where we feel like they provide us with ultimate answers…If I'm creating content I will do some brainstorming to get a lot of different perspectives and then if I find something that is relevant to me, even after that, I'll work myself on it, because I'm not going to trust everything it says blindly - Respondent 1.
Correspondingly, respondent 6, who expressed a somewhat stronger lack of trust in Generative AI output due to frequent errors and failures, stated: “We cannot rely completely on providing Gen AI services to our clients if there is no human involved in the loop”.

Many respondents expressed a need for verification and fact-checking when relying on Generative AI output. This cautious approach reflects a common understanding among respondents that Generative AI output should be treated as a starting point rather than a definitive source of information. Respondent 4 took grammar as an example: “When I'm checking grammar for example, I sometimes check with other spelling programs as well to make sure that this is something that it's correct but I would be more careful if I use it for other purposes”. Respondent 5 fact-checks the tools accordingly:

For whatever it writes and claims it kind of gives you some reference to it. Where is it picking this statement from and generally, I would try to go back to the reference and try to see if it is coming from the right source.

A final example showing how common it is to check the tools in order to rely on the output comes from respondent 7: “We're double checking as I mentioned before. When we get a response, there's always a feedback loop”.

Additionally, respondent 14 raised the problem with factfulness, but instead of cross-referencing with other sources, he emphasized the importance of having a good understanding of the subject or activity that you ask the tools for help with, as it makes it possible to be more critical and use the output in a correct way.

The factfulness is obviously a problem, you can't rely on it and you constantly need to know what you're talking about. They're very useful when you know the subject area that you're going into, but when you don't know that much, it's not that good…It's like talking to somebody and then you continuously judge whether it sounds good or not. Like what they're saying - Respondent 14.

Respondent 9 reasoned in a similar way but described it from his own perspective:

When it's some completely new area then I tend to not use it to gain a lot of information because I don't trust that it is actually accurate. So, If I do, it's just to get a hint of what could be good for me to search on to actually find good sources for the information.

Some respondents also shared the perspective that the Generative AI tools are more trustworthy for outputs containing general information that the tools have been trained on, rather than specific industry or company intel. Respondent 3 described it in this manner:

The model's limitations sometimes make it better for me to do it myself, while certain parts, especially when there's a lot of data that it has been trained on suits better. General questions in applications for example but our specific questions that only we know about could be hard for it to explain even if we give it background data.

Also, the tools' tendency to switch topics or hallucinate, and how it impacts the trustworthiness, was described by two respondents. Respondent 11, who was one of the most critical and skeptical respondents, stated: “We do not use Gen AI much because it is not really reliable. It is hallucinating sometimes”. Respondent 9, who demonstrated a more nuanced perspective about trusting the tools, declared:

It depends. It's very different. If it's a simple email which is 80% done by me and it will be refined. I will quickly read it once, then I'm fine with it.
If it's a document I can say that I have to read it very carefully. When it gets a little bit complicated, you need to look into it very precisely because you cannot trust it. Sometimes they switch channels, like I'm talking about something right now, then suddenly talk about weather. It goes to different discussions and that has happened a couple of times to me. So, sometimes I cannot trust it for sure.

Lastly, it should be mentioned that the trustworthiness of the Generative AI tools was also affected by some functional problems that were reported by the respondents and discovered during the research process. The most common functional problems will be discussed in section 4.3.

4.2.4 Job displacement

The code “Job displacement” is connected to whether the respondents have thought about, reflected on, or experienced any job displacement impacts from Generative AI in their own startups. The responses regarding the respondents' thoughts and concerns about job displacement due to Generative AI usage showed that many of them were aware of the potential for job displacement due to Generative AI. However, none of the respondents have experienced job displacement within their startups, as they are still in the phase of growing and scaling. This is explained by the following respondents: “We started as an AI company so there is no job that can be replaced. AI can only make us faster or more efficient” - Respondent 11, “Not what I heard of, we need more people. That's what we are talking about” - Respondent 12 and “We're creating jobs. So from our end, no. What we do wouldn't exist without Generative AI, so we're on the opposite side of the spectrum” - Respondent 15.

Some respondents instead viewed Generative AI as advantageous for startups, enabling them to achieve more with limited resources. They see AI as a cost-effective solution that empowers small businesses to compete with larger enterprises and scale their operations without extensive hiring. Respondent 9 argued: “It helped me in the way that I don't have to hire more people. I can maintain the same team. With more efficiency and effectiveness”. On a similar note, respondent 2 stated:

I think it's also a good way to use the company's resources, obviously hiring a new person is way more costly than having a subscription of the GitHub Copilot. And I think as a startup, you have to be smart with how you spend your money as well.

Another perspective was provided by respondent 6:

I have a full financial model until 2029 of how we will evolve the customers and also how the team will evolve. I created it six months ago and that was when we didn't use so much of the GitHub copilot. I revised it two weeks ago and I cut down the workforce by 60%. That's a lot of how I would be hiring.

Furthermore, respondent 6 continued with the argument and declared that as the tools play a larger role in the companies, there will be a need for people who can write good prompts and formulate the problems.

Now we need people who can look at the problem, can find problems, can describe the problem and can evaluate the problem from different angles. Those are the kind of people that would be needed. Not the problem solver - Respondent 6.

The more hypothetical scenario of not needing to hire new people because the existing team can become more efficient with the help of Generative AI was a discussion raised by some other respondents as well.
Respondent 5 shared that:

It's very helpful for a small startup because we don't necessarily have to employ a person. From a startup perspective I think that I don't necessarily need a person right now. To some extent I can rely on AI.

A final perspective on job displacement was provided by respondent 7, who saw Generative AI as something which can remove certain types of activities rather than an entire role:

My goal is not to displace those kinds of jobs. What I want to displace is the menial or boring tasks or the things that you do not need humans to do. So it is a job displacement, but it's more making sure that the few consultants that you actually do have or the few experienced people that you actually do get to do something meaningful.

4.2.5 Cognitive atrophy

Cognitive atrophy refers to the progressive decline or deterioration in cognitive functions, such as memory, attention, language, and problem-solving abilities, that could be connected to using Generative AI tools. Two of the respondents raised concerns about the impact Generative AI could have on cognitive atrophy. Their views were slightly contrasting. Respondent 1 reflected on the potential cognitive atrophy caused by technological advancements, noting a decline in memory capabilities over time. He emphasized the consequences of technology on cognitive abilities, illustrating the loss of memorized phone numbers as an example:

When I was at your age I could have dialed at least 30 different phone numbers in my head. Now I cannot even remember my own phone number. I'm the same person. Same level of IQ, probably a little bit lower because of the age. But what happened? It's just technology so that I can say that it in a simple form factor. There is a consequence of doing everything. So by the advancement of technology - Respondent 1.

Moreover, respondent 9 highlighted the importance of maintaining cognitive activity to mitigate such decline. In contrast, the other respondent acknowledged the convenience of technology but also recognized its potential downside in fostering over-reliance. She admitted to occasionally relinquishing cognitive effort and relying on technology for answers, which could contribute to cognitive laziness or atrophy.

4.3 Functional problems

The “Functional problems” were discovered during the interviews. These problems, inherent in Generative AI tools, represent several interconnected issues faced by users, including communication barriers, accuracy issues, contextual limitations, and the need for extensive manual intervention to achieve desired outcomes. The common theme for these problems is that they all have an impact on the trustworthiness of Generative AI tools and ultimately on which navigational initiatives are taken. The collected issues are divided into four categories, which are discussed in the following sections.

4.3.1 Expectations

One of the less common problems that respondents mentioned is called “Expectation”. “Expectation” refers to the varying degrees to which the respondents believe the Generative AI tools meet their expectations. Three of the respondents encountered this problem in their usage. Respondent 3 replied to the question “Have you encountered any problems with using Generative AI?” like this: “Not really, but I guess it depends on what the expectations are”. Responses indicate that expectations depend on the specificity of the Generative AI’s responses and the users’ knowledge of Generative AI limitations.
Also, the majority of respondents expressed that the Generative AI's performance in meeting expectations can fluctuate. In this regard, respondent 5 stated: “You have expectations. Sometimes it does not meet the expectations... Sometimes it will meet your expectations, sometimes it will not”. While all respondents are generally concerned with the Generative AI's performance in meeting expectations, they each highlight specific aspects or experiences that are unique to their interactions. Respondent 3 emphasized the importance of expectations and acknowledges the Generative AI's limitations. Respondent 5 mentioned the variability in the Generative AI's ability to meet expectations and the potential for improvement through learning and asking better questions: “You will also learn. Maybe you will ask a better question and maybe next time it will meet your expectations better”. Respondent 8 specifically focused on the Generative AI being sometimes too general and not specific enough in its responses. 4.3.2 Input and output The most common functional issue with Generative AI tools among the respondents is termed “Input & Output”. The majority of respondents reported this problem. The “Input & Output” in the context of Generative AI tools refers to the relationship between the data fed 41 into the Generative AI tools and the generated content. The term signifies the critical connection between the information provided to the Generative AI models as input and the outcomes produced as output. In this context, the discussions revolve around the challenges and concerns related to input quality, output quality, specificity, coding, and relevance of the output generated by these tools based on the input data. Some respondents discussed the significance of quality input for obtaining meaningful and useful outputs. In this regard, respondent 6 mentioned: “I think everything boils down to what prompt you give it. Garbage in, garbage out”. Also, respondent 13 stated: But very often the output will depend on the input. We found kind of a way to make it work with whatever we put in and then the prompt. For us, those things are nearly more important than the output, because if the input is bad, the output is going to be bad. If the input is good, the output is going to be quite good. It is my experience. The second challenge related to “Input & Output” is the quality and reliability of output which several respondents mentioned. Respondent 4 who is using it for generating content declared: The content generated through Chat GPT is sometimes on a more advanced level than I am when I'm writing, so that's something I try to be aware of when I create content and so, I know it's still aligned with what I can write and I'm checking everything. You have to check it and you have to make sure that you understand what you write. Concerns about the accuracy, correctness, and reliability of the output generated by AI tools, lead to the need for manual intervention and careful review. Some respondents experienced unrelated outputs to their prompts. Respondent 1 mentioned: You ask it a question and it answers and that's it. But, it's not like that. It takes a lot of back and forth. Sometimes it just generates gibberish, which doesn't mean anything. It generates code that is not correct. In addition, respondent 9 shared the same experience: 42 I was talking about a specific subject. And then suddenly the response was about something different. 
So, they completely compromised the discussion I was talking about and suddenly the response was about a different story of someone else that I didn't know. That's where the first time I noticed something was weird about it. I'm especially talking about GPT from Open AI.

The issue least often repeated by respondents is related to overly broad output. Two respondents expressed a desire for more precise, focused outputs rather than broad or fluffy results. Respondent 13 considered it a main issue and stated:

One of the issues is GPT, which is way too broad for me to use out-of-the-box for work. That ultimately affects the quality of the output, which is the big issue in that case. It takes more time to review and change and improve the quality than it would have to just write the thing from the start.

In this regard, respondent 15 mentioned: “When you're working with Generative AI, you sometimes feel that the output is quite fluffy and a little bit too broad, whereas we want to get it as firm”.

Last but not least, there was a problem related to incorrectly generated code, including debugging the generated code, dealing with errors, and the time-consuming nature of error correction. Four respondents saw errors in the generated code, especially from the tool Copilot. Respondent 2 was using Copilot and claimed that “In general, it generates 80% correct, then you usually spend a lot of time debugging the 20% that doesn't work. It takes a lot of effort to debug their generated code”. Respondent 6 reported the same issue:

I would say like 70% of the time, 75% of the time it writes the wrong code. It's correct in a sense, but it hasn't understood what you want it to write and that is part of a bigger problem, because no one can actually describe what you want.

Respondent 11 believed that errors are not easy to recognize and stated, “When it comes to coding it is extremely dangerous as there are mistakes that you can't spot”.

4.3.3 Communication

The second most common problem that several respondents encountered is called “Communication”. “Communication” can be described as a challenging aspect when interacting with Generative AI tools. Regarding this issue, respondent 1 explained Generative AI tools in the following way: “You have to see it as a colleague who knows a lot of things but doesn't know how to communicate and you have to actually put a lot of effort into getting the best out of it”. Multiple respondents expressed challenges in effectively communicating with Generative AI tools.

Sometimes that doesn't really work. Sometimes it just don't understand. Like when we start using Generative AI, we start with really simple questions and we think, OK, this is like a person. Who is talking to us? Of course, it's not a person, it doesn't have the experience. It’s not really the way people talk. We try to do it with Generative AI as well and of course, it doesn't understand it, it doesn't see your expressions. There are a lot of things that we don't say in our communication, but we understand each other, it's not the same with Generative AI. You have to be very clear and precise and even then, sometimes it just doesn't work out - Respondent 1.

Respondent 6 perceived this problem a bit differently and delved into the difficulties of problem description:

And that is our biggest problem as software engineers, you are good at writing programs, but you are not good at describing.
Now you need to develop another skill, which is how you describe your problem and not just the solution … Now you need to describe the problem to someone else and that is what we are not good at. That means that what you get is also not correct. So, you need to have a certain level of expertise to be able to use these Generative tools to see that no, that's not what I want. I want something else. Respondents discuss the challenges of using Generative AI tools when lacking proficiency or understanding in the relevant subject matter. In this regard, respondent 11 mentioned that: Misinterpretation of statistics when you do not know about the assumptions that the statistics are built on it's easy to skew the data and use the statistics in a way that pleases the user. LLMs can't always rationalize or justify things that are wrong because they have been trained that way. 44 Also, respondent 14 declared “With problems like factfulness the way that they respond sometimes, is very, very sensitive toward how you prompt it”. While all respondents acknowledge challenges in communicating with Generative AI, their proposed solutions or approaches to overcome these challenges may vary, reflecting individual problem-solving strategies. Overall, all of the respondents point out the complexities and challenges involved in interacting with AI technologies, highlighting the importance of clear communication, effective problem description, and understanding of the subject. 4.3.4 Weak database Another problem that two respondents mentioned is a “Weak Database”. It describes a limitation or challenge with Generative AI tools where the database used for training the AI lacks sufficient data or context. This limitation results in AI responses that may not fully understand the user's problem or provide relevant or useful information, especially in specific or novel domains. Both respondents mention using the AI tool more for refining existing information rather than creating entirely new content. I'm talking with Chat GPT that doesn't even understand my problem. Perhaps the database that was trained didn't have that much data. There's also one thing we notice with programming too, because these kinds of problems that we are solving, no one has solved before. Which means that the response that we get is not very useful. It can’t help us right now - Respondent 6. We use these AI tools very much in a way where we have the information and then the AI uses it to create new, better-written information. Our domain is so specific, if you ask it to do something out of the blue, it will do something very random. Most of the time we use it more to refine what we do rather than create what we do - Respondent 13. Respondent 6 emphasized the challenge of solving unique problems where existing solutions may not be available. In contrast, respondent 13 focused more on the lack of context and specificity within their domain and stated that the generated responses may be influenced by the lack of understanding of the company's context by tools. 45 4.4 Navigation initiatives The “Navigation initiatives” refers to the processes of utilizing and interacting with Generative AI tools in ethical and efficient manners. Respondents shared their experiences, thoughts, and practices regarding the cautious and conscious usage of Generative AI. Additionally, they indicated their approaches to navigating the opportunities and challenges presented by Generative AI technology. 
These kinds of initiatives are divided into two categories, called “Specific navigation” and “General navigation”.

4.4.1 Specific navigation

“Specific navigation” refers to the intentional and systematic initiatives that the respondents adopted in order to enhance efficiency or solve a specific problem or error caused by the Generative AI tools. The specific problems were mentioned in the previous sections: “Ethical concerns” and “Functional problems”. “Double-check”, “Trial and correction”, and “Stop using” were the specific navigation initiatives reported by the startups. They will be explored in the following part of this section.

“Double-check” is one of the common navigation initiatives that users apply in order to verify and confirm the accuracy, relevance, and appropriateness of the output generated by AI tools. In total, nine of the respondents mentioned that they performed some type of double-checking as a navigation practice when using the AI tools. Several of these respondents emphasized the importance of fact-checking. They expressed the need to independently verify the information provided by Generative AI tools.

You know that we shouldn't trust all the claims that it makes. That is one rule. If you should follow one rule with Generative AI, it is to fact-check all the claims and double-check everything - Respondent 1.

Also, double-checking is considered a routine response to the inherent uncertainty and unreliability of AI-generated outputs. For some respondents, it became habitual to double-check the output. Respondent 2 mentioned:

But a problem there could be that you can't trust the output so, you constantly have to double check to feel safe ... You then have a routine to know that you have to check it and know that it doesn't work perfectly.

Respondent 7 claimed:

I'll have a look at it a bit more strictly, which means that when we are building things, we're always adding two or three or four layers of double checking. Either double checking with the same LLM or with another element that has to double check the results.

Double-checking is portrayed as an ongoing process that involves continuous monitoring and evaluation of the AI-generated output. One respondent described involving experts in the double-checking process to review and validate the AI-generated content.

There will be a lot of reviewing and fact-checking from the experts in the area. Like the people who know this stuff because there are concepts, I don't understand. Then I will always check with those experts if the output is good enough - Respondent 13.

In addition, some respondents referred to the correlation between the Generative AI usage and the level of double-checking.

The answers on more black and white things, I don't really find that it is reliable. Not enough that I would not double check with the different sources. But when it's more general things or action plans, I think it's been pretty good at predicting, but it's sometimes too general. It's not specific enough - Respondent 8.

Similarly, respondent 12 stated that:

I always double check in case I don't use it for brainstorming and inspiration. I don't see any purpose not to use it. I think it's great. The only thing that you should be aware of is always double checking and not use it as a browser.
Respondent 7 shared a new aspect and talked about how the free and paid versions of Generative AI tools differ in the level of double-checking required: “I would say with the simpler models up to 3.5, when you're not paying, then you can end up with any crazy answers where you as a user have to screen the answers a lot”. Overall, the various approaches to double-checking reflect the diverse strategies and perspectives adopted by users to ensure the accuracy, reliability, and effectiveness of AI-generated content in various contexts.

“Trial and correction” is the most frequently recurring initiative among the respondents. It encapsulates the iterative process of experimenting with Generative AI tools, assessing their performance, and making adjustments or corrections based on the observed outcomes. Several respondents declared that trial and correction is carried out by the users themselves. Respondent 6 claimed, “You need to do multiple iterations on it to guide it ... You continue the conversation until you have something where you feel that it's solving the problem”. Based on the responses, this navigation has been done through various approaches such as refining prompts, providing additional context or data, or even deciding to handle certain tasks manually if the AI's performance is suboptimal. Regarding prompting and communication, respondent 9 mentioned:

Yes, we did. We tried, but then the more you work with the framework or the software or the model, the more you realize how you should change your behavior and then later on that might actually change your behavior for good. I was the person that always asked: please do something. Now, since I tried it so many times, I would never use please anymore. I will be more direct: do that instead.

Also, respondent 6 explained how he could see better communication and problem-describing skills developing over time: “I'm also seeing among people that they are becoming better and problematizing and becoming better with prompts”. Respondent 8 recommended using LinkedIn recommendations and feeds to improve prompt writing. In addition, respondent 3 mentioned manual correction when encountering a problem that stemmed from AI limitations.

Knowing how those models work, it's going with something and making it work. After seeing that the response to a question was kind of wanky, I can ask it again to try again but often I understand the models’ limitations which make it better for me to just do it myself for certain parts. While certain parts, especially containing a lot of data that it has been trained on, suits better - Respondent 3.

The process of trial and correction is seen as an ongoing learning experience. Startups recognize the need for continuous improvement and adaptation, both in terms of their understanding of how to interact with Generative AI models and in training the models towards more accurate responses.

The last and least common strategy among respondents to navigate their usage was “Stop using”. This initiative refers to the action taken by individuals to discontinue the use of Generative AI tools when they perceive them as ineffective, unreliable, or unsuitable for their needs. This decision is typically based on a negative assessment of the tools’ performance or outcomes. Only two respondents declared that they stopped using some Generative AI tools. Respondent 9 reported stopping a particular kind of Generative AI usage:

But nowadays, I think I didn't really see. I actually didn't test it because I no longer go to that extent.
My plan before was to try to push it, to give it enough information about me, my company, the way that so every decision I would have made, I would have educated. This is the question. This is the problem. This is how I see the problem and this is how I'm trying to fix the problem and pursue it. So, it was very intensive communication and chatting with our AI. But then I actually gave up on that. So I'm pausing that for now. Responses demonstrated various tolerance for errors among users. Low tolerance for errors caused an end of the usage as soon as users encountered an incorrect outcome, while a higher threshold for verification results in continuing the usage until users consistently encountered inaccuracies or issues. I try to verify the outcome when I see that it is wrong. I have tried the tools for many different purposes, for example, coding. But when I saw that the answer was not right, I stopped using it for that purpose - Respondent 11. Temporarily stopping usage while seeking alternatives or improvements in the tool has been seen as an approach among some of the respondents. Various levels and manners of stopping using Generative AI usage reflected different levels of adaptability and willingness to explore alternative tools. These differences highlighted individual variations in decision-making processes, issue tolerance, and problem-solving strategies when it comes to determining when to stop using Generative AI tools. 4.4.2 General navigation “General navigation” considers the broader ethical, practical, and knowledge-sharing aspects of using Generative AI within startup environments. These initiatives aim to navigate the 49 complexities of Generative AI adoption while ensuring ethical and responsible practices. In the following sections, both categories are studied in depth. “Instinctive recognition” refers to the intuitive sense or caution that users have when interacting with Generative AI tools. Trust in the tool's output is often managed by the user's own ethical standards and values, and they rely on their own judgment to navigate its use appropriately. Several respondents discussed this navigation initiative. “It comes down to how you use it and I think all of us have very strong values and ethics that we implement into our work” – Respondent 2 Many respondents expressed a sense of caution when using Generative AI tools, especially when dealing with sensitive information or critical tasks. They highlighted the importance of verifying the output independently and not fully relying on the tool. Regarding behaving cautiously, respondent 2 declared “I think you always like to handle it a bit with caution anyways, so I´m not trying to put any sensitive information in there. We don't, or at least I don't use it”. Moreover, respondent 9 also stated “But I know that all of them have a hesitation. The more you know the more hesitant you become to work with it”, which showed his reflection on knowing Generative AI and the amount of caution. Some respondents focused on individual responsibility and trust in employees' ethical judgment, while others thought there should be clear guidelines and supervision; “In the whole company, we have to trust our employees to make the right decision. We trust everyone to understand how to not use it in a bad way” – Respondent 3. “Share knowledge and discussion” refers to informal exchanges and conversations among colleagues within the startups about various aspects of Generative AI, problems, ethics, and related topics. 
The majority of respondents reported that they had discussions. “Yes, casual discussion, like anybody will have in society. But nothing beyond that” – Respondent 5. These discussions typically occurred in casual manners like lunchtime or spontaneous interactions rather than formal meetings. Yes, not in a structured way where we sit and say, let's share the knowledge. No, sometimes it just happens. Like when someone's saying "I used Chat GPT to do this and look how stupid it was" and then someone says "but if you tried this it would be better” and we share knowledge just spontaneously – Respondent 8. While the majority of respondents shared their knowledge within their startups and had discussions, just half of them declared that their discussions were related to ethics. 50 Respondent 12 was asked about whether ethical concerns and caution are topics of discussion, to which she responded: “No. But we don't use it in a bad way either”. Data privacy and the broader impact of Generative AI on humanity are the ethical topics that two of the respondents discussed within their team. Respondent 10 expressed that they rarely talked about discrimination and bias, since they think Open AI and Chat GPT in most cases deal with it. Some companies integrated discussions about AI ethics into their regular meetings or sessions, indicating a proactive approach to continuously learning and engaging with ethical considerations; “We've had lots of them. Because we have recurring weekly breakfasts when we talk about developments and those sections have been ethics, regulations, safety, governance” – Respondent 14. Responses showed that startups that integrated Generative AI models in their business solutions cared more about ethics and discussed it more frequently. In this regard, respondent 14 who is working intensively with Generative AI in the business solution mentioned that: Some companies involve external experts or consultants in discussions about AI ethics, suggesting a desire to access specialized knowledge or perspectives beyond their internal teams. These discussions extended beyond immediate concerns to include long-term considerations, such as how technology can be leveraged for societal benefit and validating news sources. The differences in the respondents' answers highlighted the diverse approaches and priorities that startups may have regarding discussions about AI ethics, reflecting variations in organizational culture, structure and strategic focus. The next initiative related to general navigation is “Guidelines” which refers to the formalized internal policies or routines within the company regarding the implementation of Generative AI tools. Almost all the respondents reported that they did not impose any guidelines for using Generative AI within the company. In this regard, respondent 9 mentioned, “As I mentioned using Generative AI in our company, there is no framework for it, there is no policy”. The exception was one respondent who clearly mentioned that the company has guidelines, just not written down. Also, another respondent added there are no established guidelines in place, but they have some structured processes in place for reviewing the created content by other colleagues. A few respondents emphasized the importance of trusting employees to make responsible decisions regarding Generative AI usage and suggested a reliance on individual judgment rather than strict guidelines; “As a 51 company in whole, we have to trust our employees to make the right decision. 
We trust everyone to understand how to not use it in a bad way. So, we don't really need that” – Respondent 3. One of the respondents focused on the importance of complying with relevant laws and regulations, such as GDPR, in the absence of formal guidelines. Respondents 4 and 15 both connected the necessity of guidelines to the stage and size of the startups. “That's not something that we've been prioritizing yet, given the stage that we're in” – Respondent 15. Respondent 4 stated, “we try to stay away from guidelines as much as we can when we are a small startup team”. In this context, respondent 10 mentioned the immaturity of Generative AI tools and stated: “Not for now. It's not quite mature, so we didn't develop a guideline”. Additionally, the need for guidelines was highlighted by several respondents as a potential topic for future discussion or development as the company grows and faces new challenges. The last and least common approach among the respondents was “Training the tool”. “Training the tool” involves training Generative AI models with specific data or content to produce customized results. This process can include using open-source models, interacting extensively with the Generative AI to educate it and overseeing its performance to ensure it matures and delivers suitable outcomes over time. “Focusing on using the open source model, training those models for our own data and on our own content, to produce something that would be uniquely ours” – Respondent 1. Additionally, respondent 12 recognized the role of the supervisor to oversee the Generative AI’s development and ensure its maturity over time: Now there are many different methods, but then you need some sort of supervision on your AI. AI is like a child, you have to be there. You have to see the input-output and then after some time it gets mature, but it takes a long time. 4.5 Awareness The “Awareness” refers to the level of understanding and familiarity of individuals and teams with AI technologies, particularly Generative AI. It was discovered during the research that the user's awareness affected their navigation initiatives. In the subsequent sections, team knowledge, educational activities, and adoption of innovative tools within startups environments will be discussed. 52 4.5.1 Team knowledge “Team knowledge” is described as the collective understanding and familiarity of the startup members regarding Generative AI, its usage, problems, and ethical concerns. This includes their level of education, experience, and awareness of the risks and benefits associated with AI, as well as their ability to apply AI tools in their respective roles and environments. Team members have a range of knowledge levels, from basic to advanced. In this regard, some respondents stated that they had a more limited knowledge about Generative AI. Respondent 9 considered himself as a basic user and stated; “My knowledge is very basic. I consider myself a user”. However, the most respondents reported the variety of knowledge among team members; I think there is a large gap because one of my seniors, like one of my supervisors, he just doesn't believe in Generative AI at all. He just thinks everything should be done in a hard coding way or in just a traditional programming way instead. I would say many people even if they use Chat GPT quite frequently, they might not be familiar with Generative AI. They might not know many things about it – Respondent 10. 
Nevertheless, startups that integrate Generative AI or any kind of AI in their business solutions seem well-informed about AI technology, even among team members with non-tech backgrounds: “We are very aware. We are helping other companies and know what's safe and what is not” – Respondent 11. Additionally, respondent 3 mentioned how knowledge can affect usage:

Well, that's because it's not a calculator, it's a statistic based model and its answers are just based on statistics and sense, so just having that knowledge beforehand makes us also understand where we can use it best.

Respondent 7 expressed difficulty keeping up with the rapid pace of AI development and the emergence of new models:

I would say it's quite high. But the interesting aspect as you mentioned, new models are coming out every day and it's impossible to keep up with the development and it's so quick. So even though we think that we're quite educated in the area, it moves just quicker and quicker every day, keeping up and being on top of things is difficult.

4.5.2 Innovation adoption

“Innovation Adoption” is described as the process of integrating new tools or technologies into business practices, driven by factors such as curiosity, efficiency improvement, and staying competitive in the market. There is a tendency towards informal decision-making processes, with individuals or teams trying out new tools based on personal curiosity, recommendations, or perceived benefits rather than formalized procedures. “I guess it was mostly curiosity, it has become very famous and it attracts enthusiasts. We thought that this would be a great use in our business as well” – Respondent 3. Also, respondent 8 mentioned “We don't really have a process for introducing new innovative tools”. In this regard, respondent 9 emphasized the adaptability of startups and claimed “It's basically just a personal experiment out of curiosity for everyone to try and adapt themselves quickly. Then you have different tools that can help you”. On the other hand, respondent 10 expressed a mixed approach of informal and formal innovation adoption:

I think it's a mixture. We have the first discussion, by implementing how our objective is, how we want to achieve this and then I try something else. Then maybe there's a second discussion about checking its performance and finally finalizing it with some capability or functionality.

While some respondents perceived it as a matter of personal curiosity or experimentation, respondent 1 viewed rapid innovation adoption as essential for staying competitive and avoiding obsolescence:

The problem with innovative tools before and now is that you do not really have a choice. You either use them and stay competitive or you don't use them and you're jobless. I think what the Generative AI did is that it took away the space for reflection before you can decide. It almost feels like if you stop and reflect, it would be like valuable time that you're wasting. Your competitor would just take it and run with it. That is a dilemma we are facing right now, we do not have the time or the space to do any thoughtful considerations. We are just going to the idea; you just have to hope that disaster doesn't happen. So, it's very difficult, I think anyone who just stops, they'll be left behind.

4.6 Future navigation

“Future navigation” refers to the internal and external navigational initiatives that the startups wanted to take in the future to improve their ethical performance regarding the use of Generative AI.
The internal efforts are the ones that the startups can adopt on their own, whereas the external activities are dependent on external actors.

4.6.1 Internal future navigation

Responses emphasized the importance of establishing guidelines and providing education on how to use Generative AI effectively in the future. In this regard, respondent 3 mentioned:

Perhaps some sort of workshop or education to make the whole company understand how to utilize this tool in a better sense. I look at it as if we had gotten an advanced calculator. Some people would need to get some tutorials on how to use it in the best way, or what you could use it for.

Several respondents reported the need for adaptation and possibly the creation of new roles or positions to oversee AI usage. “I think that what we need in the team is a person who is in front of AI, who can help us and guide us through this space and make sure that we always are in front” - Respondent 4. Some responses demonstrated another approach regarding the correlation between startups’ growth and the necessity of guidelines:

I definitely feel like it's working as it is today, but maybe in the future it definitely is something that we should maybe discuss. If we have different teams, if people on our team have different vibes or if there is a conflict of ideas in the team, we might need to have explicit discussions about things, issues, opinions, and how we use things. But for now, I believe it's just the need that has not come up yet - Respondent 2.

In addition, respondent 7 stated “We will of course have a need for policies or guidelines when we have more employees”. While most of the respondents emphasized the need for education and guidelines, respondents 10 and 11 expressed concerns about the practicality and necessity of such initiatives. “I don't know, maybe. If perhaps yes, then absolutely. But I don't know how easy it would be to adopt those guidelines or what educational activities we would have for the employees” - Respondent 11. Respondent 10’s concerns were more related to the possibility of guidelines for ethics:

I think we will absolutely have some guidelines, but this guideline wouldn't be quite so much about ethics or bias or something. It's only about how to use it or how to make a more accurate response or something like that.

4.6.2 External future navigation

Eight respondents clearly communicated the necessity of external actors' involvement in facilitating ethical navigation for startups. The external actors mentioned by the respondents were policymakers, institutions, industry experts, and IT companies. Their contributions can take the form of regulations, guidelines, guidance, developing ethical tools, increasing society’s knowledge, and improving the tools’ transparency. Five respondents reported that external regulations or guidelines designed by policymakers, similar to GDPR and the AI Act, might be useful in order to ensure responsible use of AI technologies. “Policymakers can make it easier to navigate ethically by providing careful regulation/guidance on the usage of Generative AI and defining clearly where the responsibility/liability is” - Respondent 14. Also, respondent 14 continued his reasoning with an example of fake content and stated:

In terms of misinformation and fake content, I think there should be some kind of regulation governing how such content on social networks can be spread. It should always be possible to know who the sender is for example.
In this regard, respondent 15 reported: I think we need to move towards a state where the status of input data for Generative AI is a bit more clearly defined and the usage of user data should be safeguarded, i.e. when uploading documents to prompt etc. Along with the data as such, there needs to be a higher degree of transparency in general as to where data is derived from. Otherwise, trust in Generative AI will eventually evaporate. This should/will be settled by regulations. Some respondents mentioned the role of industry experts in both designing more ethical systems and increasing information and knowledge sharing in society, which can help startups in decision-making regarding the ethical aspects. In this regard, respondent 13 stated “It's good if the industry experts start talking more about the ethical issues”. Moreover, respondent 14 suggested the contributions of IT companies such as OpenAI and Google in facilitating the ethical navigation for startups; Companies like OpenAI and Google can be much more open about which data their models were trained on and disclose information about the training/validation process and how bias was corrected. They can also provide the tools for users to easily correct for bias if they stumble upon it.

5. Discussion

In this chapter, the empirical findings are discussed through the scope of the literature review in order to provide answers to the research questions. In the discussion, the navigational initiatives drive the structure, rather than the ethical concerns and problems that drove the results chapter. The reason for this is to generate a clearer answer to the research questions. First, a discussion about the navigation initiatives is held (5.1). Secondly, the connections and underlying reasons behind the navigational initiatives are reviewed (5.2). Lastly, it is discussed how startups can develop their initiatives of ethical navigation in the future. This discussion is mainly framed by the ethical framework from the literature review (5.3).

5.1 Navigation initiatives

In the current unregulated area, as Ammanath (2021) mentioned, a framework should be adopted by companies to ensure ethical practices and navigation. However, it was found that startups do not integrate any formal and structural framework in their usage due to several factors such as the size and stage of startups, the priority of other activities, lack of knowledge about ethical concerns or recent adoption of the tools. In addition, the incorporation of data ethics takes time, which might lead companies to miss potential opportunities (Ammanath, 2021; Edquist et al., 2022). This is another potential factor explaining why startups may neglect ethical navigation, deciding to continue their usage and remain in the competitive environment until issues arise. Instead of fixed, structured and written-down frameworks or guidelines, the startups implemented various informal navigation initiatives to address functional and ethical issues related to Generative AI tools, as well as some general initiatives to ensure ethical and responsible usage. The specific navigation initiatives implemented by startup employees that were discovered in the empirical data were double checking, trial and correction and stop using. The general navigation initiatives were instinctive recognition, sharing knowledge and discussion and training the tool. Hence it can be concluded that startups navigate ethically when using Generative AI tools through informal navigation initiatives.
5.2 Factors impacting navigation

5.2.1 Functional problems and ethical concerns

As noted throughout the empirical findings, many respondents shared similar perceived problems with the Generative AI tools. Nevertheless, the perceived problems also differed between the respondents depending on how the respondents used the Generative AI tools and for which activities, which ultimately led them to different ethical navigation initiatives. The navigation initiative “double-checking” was mainly used for addressing the ethical concern of trustworthiness and the functional input-output problem, as Generative AI tools sometimes generate erroneous outputs or hallucinate (Feuerriegel et al., 2023; Fui-Hoon Nah et al., 2023). Double checking the output helped the startups to address these problems and trust the tools enough to use them. Nevertheless, the trustworthiness of Generative AI tools has been met with caution, both from the startup perspective and from the literature. Siau and Wang (2019) state that the concerns surrounding AI risks have prompted questions about the level of trust that should be placed in AI and to what degree. These uncertainties are reflected by the respondents' variances in trusting the different tools, as the majority of them stated that they would never trust the tools or the output blindly even though they used them frequently. That perception aligns with the statement made by Ammanath (2022), claiming that being trustworthy does not mean that every aspect of trust is fulfilled; rather, trustworthiness is influenced by each business, its specific functions, and the use case of the tool. This is a possible explanation for the difference in the startup employees' level of trust in the Generative AI tools. Moreover, it should be mentioned that all of the discovered functional problems affected the respondents' level of trust in the tools. As they experienced the problems, it led to a decreased level of trust in the Generative AI tools. The low level of trust was then the driving reason for coming up with navigational initiatives such as double checking. Other navigational initiatives stemming from similar problems are “trial and correction” and “stop using”. The iterative process of response-refining in “trial and correction” made the final output more trustworthy, accurate, and aligned with the user's needs, solving the functional problems of expectations, input and output, and communication, which are often linked to the prompt or input made by the user. The problems mainly arose when the user did not get the pre-expected output or when there was a misalignment in the communication with the tool. “Stop using” was used as a specific navigation strategy when the respondents found the tools unreliable or unsuitable for their needs. The navigational initiative to stop using tools is mainly a reaction to the functional problem of a weak database or to the ethical concern of cognitive atrophy. Cognitive atrophy is connected to the avoidance of tools expressed by some respondents who feared that their intellectual capacity would decrease due to utilization of Generative AI. As mentioned by Fui-Hoon Nah et al. (2023), tools like ChatGPT which generate specific outcomes to prompts can potentially hinder abilities like critical thinking, creativity, and problem-solving. The general navigation initiative “instinctive recognition” was an approach taken by the respondents when dealing with the ethical concerns of data collection and privacy, bias, and trustworthiness.
The startup managers trusted their employees to make the right decisions by themselves regarding ethical concerns. Hence, these decisions were made exclusively relying on the employees' morals, current knowledge and experiences, without the support of formal navigational efforts such as guidelines. However, instinctive recognition seemed like a suitable approach to handle the ethical considerations regarding data collection and privacy. The reason behind the claim was that all respondents showed a great understanding, awareness and caution about sensitive data collection and privacy. Additionally, the respondents showed caution regarding sharing confidential or proprietary data, especially when using open API services like OpenAI's GPT. Despite the awareness, some respondents shared that it can be tricky to know when to be careful, which aligns with the pitfalls of the great volumes of unstructured data that make it harder to navigate correctly (Cheatham et al., 2019). Furthermore, bias turned out to be a more complex aspect of data collection, as the startup employees demonstrated greater variety in their ability to recognize bias in the data collection processes. All respondents shared an understanding of the existence of personal and societal biases in human-made content (Feuerriegel et al., 2023). However, the majority of the startups did not look for bias or lacked structured processes to address the concerns. For some of the respondents, the lack of response to biases was due to the fact that their input data consisted of objective numbers and symbols that did not need checking, or because the activity was coding. The remaining ones relied more on instinctive recognition to address bias. When comparing the respondents' answers, it was evident that there was a subtle variation in the respondents' bias awareness. Generally, the startups that developed and trained their own tools had a slightly higher awareness of bias than the startups that were only using Generative AI tools. As claimed by Ammanath (2022), biases need to be understood and addressed in order to produce equal outcomes. Hence, instinctive recognition would only be a suitable approach to ethical navigation if the individual is properly aware and knowledgeable within the area. Based on the empirical findings, addressing bias is identified as a potential area of improvement if the startups want to improve their ethical performance when using Generative AI tools. The other general navigation initiative that was performed by some of the users was “sharing knowledge and discussion”. As the respondents declared, the majority of the ethical discussions that occurred had a casual nature and the knowledge sharing was spontaneous. In these cases, the employees addressed a variety of ethical concerns, covering the ones previously mentioned. However, several startup employees did not have any ethical discussions or knowledge sharing activities at all, which leaves the responsibility to acquire new or deeper knowledge to the individuals working in the startup. Lastly, Chen et al. (2023) mentioned that Generative AI tools pose a threat to labor markets and numerous simple and repetitive tasks will soon be carried out by AI instead of humans. However, the empirical data showed that none of the respondents have experienced job displacement, as the startups are still in the phase of growing and scaling. Additionally, several of the respondents have used AI since the startup was founded. Both Sætra (2023) and Fui-Hoon Nah et al. (2023) highlight that Generative AI transforms professions and creates a new division of labor between humans and algorithms.
This aligns with one respondent who expressed that the need for problem solvers will shift to a need for people who can write good prompts and formulate problems instead, since Generative AI can now solve the problems.

5.2.2 Awareness

Team knowledge is perceived as one of the factors that affect informal and individual navigation styles among startups. During the interviews, some respondents reported basic knowledge about Generative AI and its associated concerns. Therefore, it is arguable that the lack of knowledge might lead to uncertainty about the root of the problem, which could result in increasing unethical usage. In addition, the collected data demonstrated a variety of knowledge levels among team members, which impacted how they perceived ethical concerns. This diversity might lead to various navigation initiatives. According to Edquist et al. (2022), executive leaders should be involved in setting frameworks for data usage and navigation. Ammanath (2021) and Blackman (2023) discussed that since not all employees are experts in data ethics, executive leaders should be the ones bearing the greater responsibility for ensuring ethical use and application within operations. However, the reality in the researched startups occasionally differed from these theoretical claims. As an example, one company clearly mentioned that its senior members and supervisors do not believe in Generative AI at all, and even employees who use ChatGPT are unfamiliar with Generative AI. Therefore, this lack of knowledge among leaders has led to neglect of the responsible use of Generative AI and the absence of a structured, collective framework for the entire company. Supporting this claim, the collected data illustrated that a startup with a higher level of knowledge among team members, which had integrated Generative AI in its business solution, had developed unwritten guidelines. Moreover, the collected data indicated that innovation adoption could be considered as another factor impacting ethical navigation among startups. The majority of respondents reported curiosity as their initial motivation for adopting Generative AI tools. The informal and individual approach toward adopting Generative AI tools and trying various models instead of a formal and collective approach might lead to a variety of usages and ideas among users, resulting in an informal and individual style of navigation. Moreover, the competitive environment might hinder startup teams from waiting and reflecting on their usage. The collected data reported that Generative AI has eliminated the space for reflection; startups either use the tools to stay competitive or reject them and become less competitive. It was also mentioned that if one pauses to reflect, there is a risk of losing valuable time and being outpaced by competitors. Therefore, this urgency might make it challenging to consider the ethical implications before adopting tools and navigating afterward.

5.3 Future navigation

As analyzed and discussed, the researched startups are currently following an individual and informal approach to navigate ethically in their use of Generative AI. However, the collected data showed that the majority of respondents recognized the importance and future need for formal guidelines and educational activities for employees.
It appears that the lack of knowledge regarding Generative AI and its associated concerns, startup executives' tendency to blindly trust employees, and the stage of the startups might be hindering them. As previously mentioned, rapid technological progress, increasing technological insecurities, and a lack of clear metrics for assessing social sustainability performance have created an increased need for startups to look for ethical and sustainable strategies. So, what should startups do to develop their current navigation and reach a formal framework? Edquist et al. (2022) proposed some principles for an ethical framework regarding data usage, since data is crucial for implementing Generative AI and many ethical concerns revolve around it. An efficient framework should reflect company-specific rules for data usage, ideally designed by executive leaders (Edquist et al., 2022). Despite this, many executive leaders in startups preferred to trust their employees' intuitive sense and ethical judgment. The collected data demonstrated a cautious approach among users when using Generative AI tools, dealing with sensitive information or critical tasks. Consequently, handling ethical data is seen as an individual responsibility in startups. The individual approach might however lead to a fragmented way of addressing the same types of ethical concerns within the startup. To align all employees, it is suggested to set company-specific rules, as this gives employees a clear sense of the company’s threshold for risk (Edquist et al., 2022). In addition, such rules can improve and potentially speed up individual and organizational decision-making, making them beneficial for startups, which often have limited time and resources. Furthermore, Edquist et al. (2022) added that data usage rules need to be widespread inside and outside the organization and explained to all employees. Since the startups had no formal and structured framework, employees mostly exchanged their knowledge regarding various aspects of Generative AI in an informal manner, such as at lunchtime or in spontaneous interactions. Based on the collected data, just half of the respondents reported that their conversations were about ethics. Interestingly, startups that have integrated Generative AI models into their business solutions tended to be more engaged in ethics discussions and even held formal and regular meetings on the topic. Based on Edquist et al. (2022), it is suggested to transform the current informal meetings and discussions into structured ones and involve the entire company. Also, common data usage values and specific rules related to particular roles and tasks in the company should be communicated. Another positive initiative for ethical Generative AI implementation that can be taken by companies is building a team that focuses on ethical issues (Edquist et al., 2022). This can be done by assigning a person to be in charge of Generative AI implementation in the company or seeking assistance from external experts to address ethical challenges. Due to size and stage limitations, these kinds of initiatives are not a top priority in startups. However, the ones that were more engaged with this technology or had more experts or data scientists in their companies utilized Generative AI tools cautiously. Some involved external experts and consultants in their internal discussions to enhance the startup employees' knowledge and perspectives in this field.
Therefore, startup managers should consider this principle to ensure ethical issues do not become a side activity. Moreover, Edquist et al. (2022) suggested engaging champions in the C-suite. The collected data indicated that the Generative AI and ethics knowledge varies among team members, even among seniors and supervisors. Startups should consider this principle if they want to develop their ethical performance. Executives should therefore develop their knowledge and involvement in the Generative AI adoption process. Another principle suggested by Edquist et al. (2022) is to evaluate the impact of the algorithms and overall data usage. In this regard, most respondents did double-checking or trial and correction as their main activities to monitor and solve functional problems and address ethical issues. The collected data showed that double-checking has become a routine for several users. In some cases, they have involved experts in the double-checking process. Startup users trial and correct to evaluate data usage and improve outputs. This evaluation and improvement initiative has been done by refining prompts, providing additional context or data, or deciding to do the task manually if the AI’s performance is not adequate. Based on the collected data, in the worst scenario, users stopped their implementation after evaluating the usage, due to ineffective, unreliable, or unsuitable outcomes. These navigational practices were performed individually and in an informal manner. Startup employees should take them seriously as an essential part of their usage and pay attention to Edquist et al.'s (2022) suggestion to proactively test for biases and other potential issues that may be inadvertently introduced during the usage by the startup team. As the last principle, embedding ethical data usage frameworks and guidelines into companies' operations poses a challenge (Edquist et al., 2022). However, in the startup environment, the imposition of guidelines and internal policies is rare due to the immaturity of both Generative AI tools and startups, the lack of proper knowledge among users, and the flexibility of startups. Based on Edquist et al. (2022), startups should identify principles to monitor and measure their progress toward achieving data ethics goals and foster a culture in which ethical usage is inherent in every employee’s daily tasks. Also, leaders should actively promote and provide formal training programs on data ethics. Education activities and workshops for the entire company were mentioned by respondents as beneficial initiatives for startup users to understand how to utilize these tools efficiently. Based on the findings from the empirical data and the existing literature, developing formal navigation initiatives seems like the best way to ensure ethical interaction with Generative AI tools given the rapid rate of technological adoption. The collected data indicated that having guidelines has so far been an unrecognized need. In the future, when the number of employees increases, the different perspectives and ideas might require the startup team to have explicit discussions and frameworks in order to have a united workforce and standardized work structures. In addition, although most of the startup employees were aware of the necessity of guidelines, the lack of proper knowledge may hinder their ability to design and implement effective frameworks for adoption in the company in the future.
Respondents reported expectations for support from external actors such as policymakers, institutions, industry experts, and IT companies, whose future initiatives might facilitate ethical navigation practices for startups. Cheatham et al. (2019) highlighted that data accessibility has greatly increased over the past two decades and continues to grow. While this offers numerous opportunities for businesses, it also brings potential risks and challenges. In this regard, it is noteworthy that the respondents expressed concerns about the data handling practices of the IT companies behind the Generative AI tools. Additionally, they called for engagement from external actors in developing regulations and guidelines to enhance data transparency. Moreover, improving society’s knowledge in this area was seen as a responsibility of external actors, especially industry experts, which might lead to easier and more ethical decision-making by startup users. Therefore, another potential way for startups to develop their ethical navigation initiatives is to rely on external actors who can help them develop general ethical initiatives. Such support from external actors might improve the startup team's abilities to navigate ethically without changing the natural informal state of startups too much.

6. Conclusion

In the final chapter, both research questions, which were formulated in chapter 1.3, are answered. Moreover, theoretical and practical contributions are presented, followed by suggestions on future research. As previously stated, this thesis aims to understand the ethical concerns arising from the implementation of Generative AI tools by startup users. It also seeks to examine how users navigate these concerns to enhance their ethical performance over time. By taking a qualitative approach and multiple-case study design, the results provided in this study contribute to the development of research on ethical navigation of Generative AI tools within startups and help startup users prepare for future navigation. Based on the empirical data and discussion, the first of the two research questions: How do startups navigate ethically when using Generative AI tools in their business practices? is answered in the following way. The perceived ethical concerns that were most common in the startups were data collection and privacy, bias, trustworthiness, and cognitive atrophy. Additionally, respondents recognized certain functional problems that led to ethical concerns related to trustworthiness. The identified functional problems include expectations, input and output issues, communication challenges, and a weak database. All these functional problems and ethical concerns are addressed by startup users through informal navigational initiatives. Common navigational initiatives included double-checking, trial and correction, and instinctive recognition. The less common navigational initiatives were stopping the use of the tool, training the tool, and sharing ethical knowledge through informal discussions. As only one of the respondents navigated with the help of internally developed formal guidelines for ethical usage, it is clear that formal navigational initiatives are rare within startups. While formal activities were mentioned as the suggested way for ethical navigation regarding Generative AI usage in the literature review, a gap between literature and reality has been discovered.
The lack of a formal and structured way of navigating ethical concerns of Generative AI tools is obvious among startups. Some initiatives have been suggested to reframe the current ethical navigation into formal frameworks that can address ethical issues more properly and collectively. It is worth mentioning that current navigational practices in startups involve individual approaches, designed and adopted based on individual decisions. Therefore, the studied startups navigate their use of Generative AI tools ethically through informal and individual practices. The collected data demonstrated additional findings regarding factors which can affect ethical navigation. The type of ethical concerns encountered by startup users, the absence of proper knowledge in the team, high trust in employees' instinctive recognition, and the innovation adoption style of users in the current competitive environment may influence the ethical navigational practices employed by startup employees to address these issues. As startups navigate ethical concerns related to Generative AI tools informally and individually, the collected data showed a preference for trusting employees and their instinctive recognition over implementing formal guidelines. Therefore, startup executives need to review their team’s knowledge and assess whether it aligns with proper ethical navigation practices or not. The second research question: How do startups plan to develop their ethical navigation of Generative AI in the future? is answered accordingly. Based on the collected data, formal guidelines or frameworks have not been observed in startup practices. However, startups are planning to develop their ethical navigation of Generative AI in the future by implementing formal navigational initiatives and educational activities aimed at increasing awareness among users. Additionally, external actors were recognized as a necessary influence in assisting startups' future development, mainly due to their involvement in the implementation of relevant regulations, raising societal awareness, and enhancing transparency within existing Generative AI tools. To be capable of developing their current ethical navigation of Generative AI, startups should adopt more formal practices and build a framework tailored for their usage. The suggestions are as follows: setting company-specific rules for data usage and navigational initiatives and sharing them with the entire startup in formal meetings; assigning a person or a team to be in charge of Generative AI implementation in the startup, along with its opportunities and risks, and updating the entire startup regarding these topics; having the executives improve their knowledge of the relevant topics related to Generative AI technology and become more engaged in decision-making regarding Generative AI adoption; actively evaluating the impact of algorithms and data usage as an essential step in Generative AI usage, and fostering the current navigational initiatives such as double-checking and trial and correction in a standardized way that aligns the entire startup; and monitoring the progression towards ethical goals and providing educational activities to embed the ethical considerations into the startup culture.

6.1 Theoretical and practical contributions

The rapid utilization of Generative AI tools worldwide has attracted researchers. Topics related to Generative AI, especially ChatGPT, have become popular in the academic world.
However, while current research is predominantly pursued in engineering and information management journals, there remains a notable gap in research on this emerging technology within the business field, particularly in entrepreneurship studies. This thesis contributes to filling this research gap by focusing on the ethical considerations of Generative AI tools within the startup context. Moreover, current research primarily focuses on Generative AI tool developers, while both users and developers bear significant ethical responsibilities. The thesis results contribute to monitoring startup user behavior with Generative AI tools and determining the existing methods to address both functional and ethical challenges faced by users. The academic world bears the responsibility of providing accurate knowledge to enhance users' understanding of these tools, promote proper behavior and communication practices, and assess and address associated concerns and risks that may impact society and users' activities. Additionally, the thesis identifies the weaknesses of current navigation initiatives and the gap between those initiatives and the ethical framework proposed by Edquist et al. (2022). This research can be considered an initial step toward further exploration of topics related to ethical behaviors and ethical innovation adoption in startups, as well as the contributions of these companies to social sustainability. The results of this thesis offer concrete implications that can guide managers and startup users to take appropriate actions based on the findings. The thesis findings contribute to future initiatives where startup users can align their usage with ethical considerations and design structured ethical guidelines. The research mapped out the current navigation style and practices in detail, enabling users to review their activities and gain insights from alternative perspectives. This perspective will enable users to assess their processes objectively and identify associated limitations and risks. Additionally, the suggested principles for a structured framework can help enhance current ethical navigation and reframe it over time. Also, the internal and external future approaches of users shed light on the upcoming initiatives both from startup users and external actors such as policymakers, industry experts, and IT companies. Users and external actors can plan future initiatives to mitigate ethical concerns associated with using Generative AI tools.

6.2 Suggestions for future research

Due to the newness of ethical navigation in startups regarding AI, this thesis has only scratched the surface of this research area. The analysis and discussion touched upon the future needs, where formal navigation initiatives for ethical concerns seemed to be the proper way forward. However, the impact of implementing all of the suggested formal navigation initiatives in startups has not been assessed. It is unclear to what extent startups should implement the formal principles. Hence, a future suggestion would be to examine which formal activities are the most efficient to implement and which have the best impact on improving and developing the startups' ethical navigation. Furthermore, external actors could potentially have a lot of influence and impact the development of startups' ethical performance. Therefore, there is an interest in discovering how external actors respond to the development of Generative AI and what impacts it would have on startups.
One potential research area would be to investigate how new regulations such as the AI Act developed by the European Union impact the ethical performance of startups. Due to the limitation of available relevant literature in the business journals during the research process, several additional needs for broader research were encountered. One of them was about how Generative AI is adopted and used in business contexts, both startups and more developed firms, as most existing research is within the tech industry. Another discovered need was a deeper investigation of the user role in ethical navigation of Generative AI, since the existing research is more focused on the developers of the AI tools and how they should behave. Both of these needs are recommended for further study and development in the future, as this can potentially give more actors greater knowledge and an ability to behave ethically and take responsibility.

References

Ammanath, B. (2021). Thinking Through the Ethics of New Tech . . . Before There’s A Problem. Harvard Business Review. Retrieved in 12/2/2024 from https://hbr.org/2021/11/thinking-through-the-ethics-of-new-techbefore-theres-a-problem

Ammanath, B. (2022). Trustworthy AI: A Business Guide for Navigating Trust and Ethics in AI. First edition. Wiley: Newark.

Bell, E., Bryman, A. & Harley, B. (2019). Business Research Methods. 5th Edition. Oxford University Press: Oxford.

Blackman, R. (2023). How to Avoid the Ethical Nightmares of Emerging Technology. Harvard Business Review. Retrieved in 15/2/2024 from https://hbr.org/2023/05/how-to-avoid-the-ethical-nightmares-of-emerging-technology

Braun, V. & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), pp. 77–101.

Brynjolfsson, E. & Mcafee, A. (2017). The Business of Artificial Intelligence: What It Can -- and Cannot -- Do for Your Organization. Harvard Business Review Digital Articles, pp. 3–11.

Buehler, K., Dooley, R., Grennan, L., & Singla, A. (2021). Getting to know—and manage—your biggest AI risks. Retrieved in 17/2/2024 from https://www.mckinsey.com/capabilities/quantumblack/our-insights/getting-to-know-and-manage-your-biggest-ai-risks

Business Sweden. (2022). Seasons of change: AI powered by Swedish collaboration, innovation and data. Retrieved in 30/1/2024 from https://www.business-sweden.com/globalassets/insights/reports/seasons-of-change.pdf

Cheatham, B., Samandari, H., & Javanmardian, K. (2019). Confronting the risks of artificial intelligence. Retrieved in 7/3/2024 from https://www.mckinsey.com/capabilities/quantumblack/our-insights/confronting-the-risks-of-artificial-intelligence

Chen, B., Wu, Z., & Zhao, R. (2023). From Fiction to Fact: The Growing Role of Generative AI in Business and Finance. Journal of Chinese Economic and Business Studies, 21(4), 471–496.

ComputerWorld. (2024). IDC expects genAI spending to double in these areas in '24. Retrieved in 3/1/2024 from https://www.computerworld.com/article/3711861/idc-expects-genai-spending-to-double-in-these-areas-in-24.html

Davenport, T. H., & Mittal, N. (2022). How Generative AI Is Changing Creative Work. Harvard Business Review.

Ebert, C., & Louridas, P. (2023). Generative AI for Software Practitioners. IEEE Software, 40(4), 30–38.
Edquist, A., Grennan, L., Griffiths, S., & Rowshankish, K. (2022). Data ethics: What it means and what it takes. Retrieved in 12/2/2024 from https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/data-ethics-what-it-means-and-what-it-takes

Epstein, J. M., & Buhovac, A. R. (2014). Making Sustainability Work. Second edition. Greenleaf Publishing.

Feld, B., & Wise, S. (2017). Startup Opportunities. 2nd Edition. Wiley.

Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2023). Generative AI. Business & Information Systems Engineering, 66(1), 111-126. Wiesbaden: Springer Fachmedien Wiesbaden.

Forbes. (2022). What Is A Startup? The Ultimate Guide. Retrieved in 18/2/2024 from https://www.forbes.com/advisor/business/what-is-a-startup/

Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: A Comprehensive Exploration of Applications, Challenges, and AI-Human Collaboration. Journal of Information Technology Case and Application Research, 25(3), 277-304.

Guba, E. G., & Lincoln, Y. S. (1994). Competing Paradigms in Qualitative Research. In N. K. Denzin & Y. S. Lincoln (eds), Handbook of Qualitative Research. Thousand Oaks, CA: Sage.

Gupta, V., & Yang, H. (2024). Generative Artificial Intelligence (AI) Technology Adoption Model for Entrepreneurs: Case of ChatGPT. Internet Reference Services Quarterly, 1-2.

Kickul, J., & Lyons, T. S. (2012). Understanding social entrepreneurship: The relentless pursuit of mission in an ever changing world. Routledge: New York.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.

Luger, M. I., & Koo, J. (2005). Defining and Tracking Business Startups. Small Business Economics, 24(1), 17-28. Dordrecht: Springer.

Lv, Z. (2023). Generative artificial intelligence in the metaverse era. Cognitive Robotics, 3, 208-217. Elsevier B.V.

Metrick, A., & Yasuda, A. (2021). Venture capital & the finance of innovation (Third ed.).

Mondal, S., Das, S., & Vrana, V. G. (2023). How to Bell the Cat? A Theoretical Review of Generative Artificial Intelligence towards Digital Disruption in All Walks of Life. Technologies (Basel), 11(2), 44. Basel: MDPI AG.

Patel, R., & Davidsson, B. (2011). Forskningsmetodikens grunder - Att planera, genomföra och rapportera en undersökning. 4th Edition. Studentlitteratur: Lund.

Saunders, M., Lewis, P., & Thornhill, A. (2009). Research methods for business students. 5th ed. Harlow: Pearson Education Limited.

Schlegelmilch, B. B. & Szőcs, I. (2020). Rethinking Business Responsibility in a Global Context: Challenges to Corporate Social Responsibility, Sustainability and Ethics. First edition. Springer International Publishing AG.

Siau, K. & Wang, W. (2019). Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work and Future of Humanity: A Review and Research Agenda. Journal of Database Management, 30(1), 61–79.

Skala, A. (2019). Digital Startups in Transition Economies: Challenges for Management, Entrepreneurship and Education (1st ed.). Cham: Springer International Publishing.

Solane, M., & Zakrzewski, J. (2022). German AI startups and AI Ethics: Using A Social Practice Lens for Assessing and Implementing Socio-Technical Innovation. arXiv.org.

Blank, S. (2010). What's a Startup? First Principles. Steve Blank's blog, January 25, 2010. http://steveblank.com/2010/01/25/whats-a-startup-first-principles/

Sætra, H. S. (2023). Generative AI: Here to stay, but for good? Technology in Society, 75, Article 102372. Oxford: Elsevier Ltd.
Vinnova. (2018). Artificial Intelligence in Swedish Business and Society – Analysis of Development and Potential. Retrieved in 22/1/2024 from https://www.vinnova.se/contentassets/72ddc02d541141258d10d60a752677df/vr-18_12.pdf

World Economic Forum. (2024a). Generative Artificial Intelligence. Retrieved in 2/1/2024 from https://intelligence.weforum.org/topics/a1G680000008gwFEAQ

World Economic Forum. (2022). How businesses should respond to the EU’s Artificial Intelligence Act. Retrieved in 3/1/2024 from https://www.weforum.org/agenda/2022/02/how-businesses-should-respond-to-eu-artificial-intelligence-act/

World Economic Forum. (2024b). The Global Risks Report 2024: 19th Edition. Retrieved in 22/1/2024 from https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf

Zarifhonarvar, A. (2023). Economics of ChatGPT: A labor market view on the occupational impact of artificial intelligence. Journal of Electronic Business & Digital Economics, 2023(12).

Appendix

Interview guide

Is it okay if we record the interview?

Background questions
1. Please introduce yourself and your role/position at your startup.
2. Can you introduce your startup:
- What do you do?
- What is your value proposition?
- Is sustainability part of your business mission/involved in your operations?

AI Usage
3. Can you describe how you use Generative AI tools in your business? (How often, in what activities/areas)
- Can you describe the specific Generative AI tools that your startup has implemented?
4. When did you start using AI? Why did you start using it?

Problems & Responses
5. Looking back, have you encountered any problems when using Generative AI tools?
6. Have you encountered any social concerns regarding Generative AI tools? Can you see these concerns in your own usage?
7. Do you use your own data as input in the Generative AI models?
- If they use/collect their own data, how do they check it for bias?
8. How do you trust the output? / How do you rely on it?
9. Have you ever thought about or experienced AI impact on job displacement in your business?
10. Have you made any changes to address the problems you have experienced with AI so far? (Changes in policies, routines, activities, etc.)
- If not, do you have any plans to prevent it from happening again?

Awareness
11. How would you describe your knowledge about Generative AI?
- Would you say that the rest of the team have similar knowledge?
- Do you generally have discussions before adopting new innovative tools in your business?
12. Have you had any discussions about AI ethics within the company?
- If so, which employees were involved? (Everyone? Only the executives? Only developers?)
13. Do you have any courses or educational activities for ethics or sustainable practices that the employees can take part in?
14. Do you have any internal guidelines about using Generative AI tools in your business?
- Or is it included in any of your policies, such as a sustainability policy?
15. Do you think your Generative AI usage will increase in the future?
- That you use it more/involve it in additional practices.
- If yes, do you think you will have to make any changes in how you use it today? (stricter guidelines, better education, someone in charge, etc.)
16. Is there anything that external actors (policymakers, industry experts, established firms etc.) could do to make it easier for your startup to navigate ethically in your Generative AI usage?