Leveraging Large Language Models to Generate Natural Language Explanations of AI Systems - A Framework for Natural Language Explanations

Abstract
The acceleration of artificial intelligence (AI) deployment across various domains necessitates advancements in explainable AI (XAI) to enhance transparency and user interaction, including calibrating trust and reliance. This thesis introduces a framework that leverages large language models (LLMs) to generate free-text natural language explanations (NLEs) of AI systems without the need for human-annotated data. One aim of the framework is to make the explanations accessible and comprehensible to non-technical users. The framework integrates explainer models with LLMs to transform complex AI outputs into natural language. The thesis evaluates the framework’s effectiveness in generating faithful NLEs for a text classification task, and a user study examines how these explanations affect user satisfaction and reliance. The results demonstrate that while the framework can generate explanations faithful to the input from the explainer model, user satisfaction did not differ significantly from that with a traditional explanation method (LIME). However, the results indicate that NLEs can decrease over-reliance on AI systems. The thesis highlights critical considerations in selecting explainer models and tailoring explanations to the context and user expectations. It also opens avenues for future work, including enhancing interaction with explanations through conversational agents and the possibility of tailoring explanations to individual users.
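The abstract describes a pipeline in which an explainer model (such as LIME) produces feature attributions that an LLM then verbalises for non-technical users. The record itself contains no code, so the following is only a minimal sketch of that general idea for a text classification task: it assumes LIME as the explainer and a toy scikit-learn classifier as the AI system, and it leaves the LLM call as a hypothetical call_llm placeholder. It should not be read as the author's actual implementation.

# Minimal sketch (not the thesis code): LIME attributions for a text classifier,
# rendered into a prompt that an LLM could rewrite as a natural language
# explanation (NLE). Requires scikit-learn and lime; the LLM call is left as a
# hypothetical placeholder because the thesis' actual setup is not shown here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy sentiment classifier standing in for the AI system being explained.
texts = ["great service and friendly staff", "terrible food and very slow service",
         "lovely atmosphere, will return", "awful experience, never again"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Explainer model: LIME yields per-word attribution weights for one prediction.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
instance = "slow service but lovely staff"
exp = explainer.explain_instance(instance, clf.predict_proba, num_features=4)
attributions = exp.as_list()  # e.g. [("lovely", 0.21), ("slow", -0.18), ...]

# Turn the raw attributions into a prompt; the LLM's job is to restate this as a
# short, plain-language explanation, without needing human-annotated NLE data.
label = "positive" if clf.predict([instance])[0] == 1 else "negative"
prompt = (
    f"A sentiment classifier labelled the review below as {label}. "
    "Explain why in two sentences for a non-technical reader.\n"
    f"Review: {instance}\n"
    "Word importance scores (positive values push towards 'positive'):\n"
    + "\n".join(f"  {word}: {weight:+.2f}" for word, weight in attributions)
)
print(prompt)  # in practice: nle = call_llm(prompt), where call_llm is hypothetical

Faithfulness in the sense evaluated in the thesis would then concern whether the generated NLE actually reflects the attribution scores supplied in the prompt.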
Examination level
Student essay
URL:
https://hdl.handle.net/2077/83670
Collections
  • Master theses (Masteruppsatser)
File(s)
CSE 24-15 LH.pdf (2.157Mb)
Date
2024-10-16
Author
Holmberg, Linus
Keywords
Human-centered AI
explanation
explainable AI
large language models
natural language
Metadata
Show full item record

Related items

Showing items related by title, author, creator and subject.

  • Why the pond is not outside the frog? Grounding in contextual representations by neural language models 

    Ghanimifard, Mehdi (2020-05-05)
    In this thesis, to build a multi-modal system for language generation and understanding, we study grounded neural language models. Literature in psychology informs us that spatial cognition involves different aspects of ...
  • Steg för steg. Naturvetenskapligt ämnesspråk som räknas 

    Ribeck, Judy (2015-11-13)
    In this work, I present a linguistic investigation of the language of Swedish textbooks in the natural sciences, i.e., biology, physics and chemistry. The textbooks, which are used in secondary and upper secondary school, ...
  • Proceedings of the 2022 CLASP Conference on (Dis)embodiment 

    Dobnik, Simon; Grove, Julian; Sayeed, Asad; Department of Philosophy, Linguistics and Theory of Science (FLoV); Centre for Linguistic Theory and Studies in Probability (CLASP) (The Association for Computational Linguistics, 2022-09-14)
    (Dis)embodiment brings together researchers from several areas examining the role of grounding and embodiment in modelling human language and behaviour – or limits thereof. The conference covers areas such as machine learning, ...
