Now showing items 1-8 of 8
Training for the Unexpected: Approaching Universal Phone Recognition for Computer-Assisted IPA Transcription of Low-Resource Languages
(2025-06-13)
Abstract
We set out to develop a language-agnostic ASR model for the phonetic transcription of speech into the International Phonetic Alphabet (IPA). While NLP and Automatic Speech Recognition (ASR) have made immense ...
Expert in the Loop: LLM Assistance for Technical Documentation Writing. Case Study at Saab AB
(2025-06-13)
Abstract
This study explores the potential of LLMs in the technical writing process at Saab Aeronautics. The process is investigated by interviewing technical writers, collecting insights regarding the ...
Adaptive Game-Based Swedish Language Learning: A Hybrid AI Approach to Content Generation
(2025-09-25)
Abstract
This thesis tests the performance of LLMs on pre-generated L2 Swedish learning content across beginner to intermediate CEFR levels (A1–B2) by integrating them into a self-developed language learning game (including ...
Effect of Prompt Strategy on the Results of Code Generation by LLMs
(2025-06-19)
Abstract
Large Language Models (LLMs) have made significant strides in automated code generation. For example, GitHub Copilot, based on the Codex model, is the first to generate complete functions directly from natural ...
Breaking Barriers: Enhancing Universal Dependency Parsing for Amharic. Advancing NLP for a Low-Resource Language
(2025-06-19)
Abstract
This study advances Amharic dependency parsing by expanding and refining the
existing Universal Dependencies (UD) Treebank (Seyoum, Miyao, and Mekonnen,
2018). As a morphologically rich and under-resourced language, ...
Enhancing NLU with Paraphrasing in Task-Oriented Dialogue Systems: Toward Data-Efficient Generalization in Low-Resource Scenarios
(2025-10-14)
Abstract
In task-oriented dialogue systems (TODS), Natural Language Understanding (NLU) is
fundamental to interpreting user intent and extracting semantic information. However,
training robust NLU models often requires large ...
Guided by Surprisal: Active Curriculum Language Modeling over a Hybrid Pre-training Method. Contributions to the BabyLM Challenge
(2025-10-30)
Abstract
This study investigates language modeling on developmentally plausible corpora under
low-resource constraints, conducted within the scope of the BabyLM shared task,
specifically focusing on the Strict-small track. We ...
Drawing with a social robot: Evaluating a vision-language model on spatial prepositions in an L2 drawing-based learning task
(2025-11-06)
Abstract
This study develops and evaluates a multimodal pipeline for studying interactive drawing
in a robot-assisted language learning scenario with human participants. A social
robot is paired with a vision-language model (VLM) ...