Computational Models of Language and Vision: Studies of Neural Models as Learners of Multi-Modal Knowledge
Abstract
This thesis develops and evaluates computational models that generate natural language descriptions of visual content. We build and examine models of language and vision to gain a deeper understanding of how they reflect the relationship between the two modalities, an understanding that is crucial for performing multi-modal computational tasks.

The first part of the thesis introduces three studies that inspect the role of self-attention in three different self-attention blocks of the object relation transformer model. We examine attention heatmaps to understand how the model connects words, objects, and relations within the tasks of image captioning and image paragraph generation, and we connect our interpretation of what the model learns in its self-attention weights with insights from theories of human cognition, visual perception, and spatial language.

The three studies in the second part of the thesis investigate how representations of images and texts can be applied and learned in task-specific models for image paragraph generation, embodied question answering, and variation in human object naming.

The last two studies, in the third part, examine properties of human-generated texts that multi-modal models are expected to acquire in image paragraph generation as well as in perceptual category description and interpretation tasks. We analyse discourse structure in image paragraphs produced with different decoding methods, and we inspect whether models of perceptual categories can abstract from visual representations and use this knowledge to generate descriptions with the level of discriminativity the task requires. We also show how automatic measures for evaluating text generation behave in a comparison of model-generated and human-generated image descriptions.

This thesis presents several contributions. We illustrate that, under specific modelling conditions, self-attention can capture information about the relationship between objects and words. Our results emphasise that the specifics of the task determine the manner and context in which different modalities are processed, as well as the degree to which each modality contributes to the task. We also demonstrate that, while favoured by automatic evaluation metrics across different tasks, machine-generated image descriptions lack the discourse complexity and discriminative power that are often important for generating better, human-like image descriptions.
ISBN
978-91-8069-768-2 (PDF)
Articles
What Does a Language-And-Vision Transformer See: The Impact of Semantic Information on Visual Representations. Nikolai Ilinykh and Simon Dobnik. 2021. Frontiers in Artificial Intelligence: Identifying, Analyzing, and Overcoming Challenges in Vision and Language Research, 4, 767971. http://dx.doi.org/10.3389/frai.2021.767971
Attention as Grounding: Exploring Textual and Cross-Modal Attention on Entities and Relations in Language-and-Vision Transformer. Nikolai Ilinykh and Simon Dobnik. 2022. In Findings of the Association for Computational Linguistics: ACL 2022, pages 4062–4073, Dublin, Ireland. Association for Computational Linguistics. https://aclanthology.org/2022.findings-acl.320/
When an Image Tells a Story: The Role of Visual and Semantic Information for Generating Paragraph Descriptions. Nikolai Ilinykh and Simon Dobnik. 2020. In Proceedings of the 13th International Conference on Natural Language Generation, pages 338–348, Dublin, Ireland. Association for Computational Linguistics. https://aclanthology.org/2020.inlg-1.40/
Look and Answer the Question: On the Role of Vision in Embodied Question Answering. Nikolai Ilinykh, Yasmeen Emampoor, and Simon Dobnik. 2022. In Proceedings of the 15th International Conference on Natural Language Generation, pages 236–245, Waterville, Maine, USA and virtual meeting. Association for Computational Linguistics. https://aclanthology.org/2022.inlg-main.19/
Context matters: evaluation of target and context features on variation of object naming. Nikolai Ilinykh and Simon Dobnik. 2023. In Proceedings of the 1st Workshop on Linguistic Insights from and for Multimodal Language Processing, pages 12–24, Ingolstadt, Germany. Association for Computational Linguistics. https://aclanthology.org/2023.limo-1.3/
Do Decoding Algorithms Capture Discourse Structure in Multi-Modal Tasks? A Case Study of Image Paragraph Generation. Nikolai Ilinykh and Simon Dobnik. 2022. In Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 480–493, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. https://aclanthology.org/2022.gem-1.45/
Describe Me an Auklet: Generating Grounded Perceptual Category Descriptions. Bill Noble* and Nikolai Ilinykh*. 2023. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9330–9347, Singapore. Association for Computational Linguistics. *Equal contribution. https://aclanthology.org/2023.emnlp-main.580/