
Expert in the Loop: LLM Assistance for Technical Documentation Writing Case Study at Saab AB

Abstract
This study explores the potential of LLMs in the technical writing process at Saab Aeronautics. The technical writing process is investigated by interviewing technical writers and collecting insights about their most challenging tasks and the areas where AI assistance could be beneficial. These experts are involved in several stages of the research project, with the aim of investigating how an LLM could facilitate their tasks. A demonstration dataset is collected with the help of the experts, and a parallel corpus of technical procedures is created. Supervised Instruction Fine-Tuning (SIFT) is used to fine-tune an LLM (Mistral-7B-Instruct-v0.2), combining Quantized Low-Rank Adaptation (QLoRA) and Low-Rank Adaptation (LoRA) so that the fine-tuning is memory-efficient. Sampled generations are examined qualitatively, alongside a small-scale hyperparameter search. Both the experts involved in the data collection and held-out experts take part in the evaluation stage. The results show that the fine-tuned model's outputs are preferred over the base model's outputs 68% of the time. Analysis of the experts' comments reveals that the fine-tuned model outperforms the base model specifically in adhering to the Simplified Technical English (STE) writing standard and in producing fewer hallucinations. This study suggests that fine-tuning LLMs with small but high-quality datasets holds potential, and it highlights the importance of involving human expertise in such processes for domain-specific needs, such as those at Saab.
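
The abstract describes memory-efficient instruction fine-tuning of Mistral-7B-Instruct-v0.2 using QLoRA and LoRA. The snippet below is a minimal sketch of what such a setup could look like, assuming the Hugging Face transformers, peft, and bitsandbytes stack; the library choice, LoRA rank, alpha, and target modules are illustrative assumptions, not values reported in the thesis.

```python
# Sketch of QLoRA-style fine-tuning setup: load the frozen base model in 4-bit,
# attach low-rank adapters, and train only the adapter weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-Instruct-v0.2"

# 4-bit NF4 quantization of the base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; r, alpha, and dropout are placeholders.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter parameters are updated
```

In this configuration the quantized base model stays frozen, so instruction-tuning on a small, high-quality dataset (as in the study) fits on a single GPU while only a small fraction of parameters is trained.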
Degree
Student essay
URI
https://hdl.handle.net/2077/87918
Collections
  • Masteruppsatser / Master in Language Technology
View/Open
Master thesis (1.352 MB)
Date
2025-06-13
Author
Nieminen, Anni
Keywords
Technical Writing, Fine-tuning, LLM, Simplified Technical English, Instruction-tuning
Language
eng
