Managerial Use of Generative AI - Enablers and Constraints in a Consulting Context
Abstract
Generative AI tools like ChatGPT can transform how managers make decisions, yet adoption
at the managerial level remains limited. Why? Through interviews with nine managers at
Rejlers and validation from a generative AI transformation specialist, we identified three key
constraining areas. Trust develops through domain expertise, creating a paradox where
knowledge enables verification of generative AI outputs while simultaneously making
managers more critical of them. Managers with stronger expertise feel confident using AI
because they can evaluate accuracy, yet this heightens their scrutiny of AI-generated content.
Boundaries are established based on contextual understanding. Managers readily delegate
routine tasks to generative AI while reserving complex, relationship-sensitive decisions for
human judgment. Generative AI's lack of organization-specific context limits its utility for
company-specific challenges. Practical barriers impede adoption, including insufficient integration with
existing systems, established work habits, and limited awareness of generative AI capabilities.
These factors often prevent adoption even when managers recognize potential benefits.
From these insights, we developed the Managerial AI Interaction Framework illustrating how
managers progress through four stages: awareness and experimentation, competence building,
workflow integration, and boundary-aware collaboration. Our research suggests that effective
human-AI collaboration requires addressing trust dynamics, establishing appropriate task
boundaries, and overcoming practical integration barriers rather than focusing solely on
technical capabilities.
Degree
Master 2-years
Date
2025-08-05
Author
Wikner, Axel
Bårdén, Gustav
Series/Report no.
IIM 2025:12
Language
eng