Show simple item record

dc.contributor.author  Ghanimifard, Mehdi
dc.date.accessioned  2020-05-05T08:29:25Z
dc.date.available  2020-05-05T08:29:25Z
dc.date.issued  2020-05-05
dc.identifier.isbn  978-91-7833-917-4
dc.identifier.isbn  978-91-7833-916-7
dc.identifier.uri  http://hdl.handle.net/2077/64095
dc.description.abstract  In this thesis, to build a multi-modal system for language generation and understanding, we study grounded neural language models. Literature in psychology informs us that spatial cognition involves different aspects of knowledge, including visual perception and human interaction with the world. This makes spatial descriptions a compelling case for studying how spatial language is grounded in different kinds of knowledge. In seven studies, we investigate what spatial knowledge neural language models (NLMs) encode and how they encode it. In the first study, we explore the traces of the functional-geometric distinction of spatial relations in a uni-modal NLM. This distinction is essential, since knowledge about object-specific relations is not grounded in the visible situation. Following that, in the second study, we inspect representations of spatial relations in a uni-modal NLM to understand how they capture the concept of space from the corpus. The predictability of grounding spatial relations from contextual embeddings is vital for evaluating grounding in multi-modal language models. Regarding the argument for geometric meaning, in the third study, we inspect the spectrum of bounding box annotations on image descriptions. We show that less geometrically biased spatial relations are more likely to deviate from the norm of their bounding box features. In the fourth study, we evaluate the degree of grounding in language and vision with adaptive attention. In the fifth study, we use adaptive attention to understand if and how additional bounding box geometric information could improve the generation of relational image descriptions. In the sixth study, we ask whether the language model is capable of systematic generalisation, learning grounding for unseen compositions of representations. Then, in the seventh study, we show the potential of using uni-modal knowledge for detecting metaphors in adjective-noun compositions.
The primary argument of the thesis is built on the fact that spatial expressions in natural language are not always grounded in direct interpretations of locations. We argue that distributional knowledge from corpora of language use, together with its association with visual features, constitutes grounding with neural language models. Therefore, in a joint model of vision and language, the neural language model provides spatial knowledge that contextualises the visual representations of locations.
dc.language.iso  eng
dc.relation.haspart  Dobnik, S., Ghanimifard, M., & Kelleher, J. (2018). Exploring the Functional and Geometric Bias of Spatial Relations Using Neural Language Models. In Proceedings of the First International Workshop on Spatial Language Understanding (pp. 1-11). doi: 10.18653/v1/W18-1401
dc.relation.haspart  Ghanimifard, M., & Dobnik, S. (2019). What a neural language model tells us about spatial relations. In Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP) (pp. 71-81). doi: 10.18653/v1/W19-1608
dc.relation.haspart  Dobnik, S., & Ghanimifard, M. (2020). Spatial descriptions on a functional-geometric spectrum: the location of objects. Accepted in Spatial Cognition XII, Papers from the 12th International Conference, Spatial Cognition 2020/21, Riga, Latvia.
dc.relation.haspart  Ghanimifard, M., & Dobnik, S. (2018). Knowing When to Look for What and Where: Evaluating Generation of Spatial Descriptions with Adaptive Attention. In European Conference on Computer Vision (pp. 153-161). Springer, Cham. doi: 10.1007/978-3-030-11018-5_14
dc.relation.haspart  Ghanimifard, M., & Dobnik, S. (2019). What goes into a word: generating image descriptions with top-down spatial knowledge. In Proceedings of the 12th International Conference on Natural Language Generation (pp. 540-551). doi: 10.18653/v1/W19-8668
dc.relation.haspart  Ghanimifard, M., & Dobnik, S. (2017). Learning to Compose Spatial Relations with Grounded Neural Language Models. In IWCS 2017, 12th International Conference on Computational Semantics, Long Papers.
dc.relation.haspart  Bizzoni, Y., Chatzikyriakidis, S., & Ghanimifard, M. (2017, September). "Deep" Learning: Detecting Metaphoricity in Adjective-Noun Pairs. In Proceedings of the Workshop on Stylistic Variation (pp. 43-52). doi: 10.18653/v1/W17-4906
dc.subject  Computational linguistics
dc.subject  Language grounding
dc.subject  Spatial language
dc.subject  Distributional semantics
dc.subject  Computer vision
dc.subject  Language modelling
dc.subject  Vision and language
dc.subject  Neural language model
dc.subject  Grounded language model
dc.title  Why the pond is not outside the frog? Grounding in contextual representations by neural language models
dc.title.alternative  Why the pond is not outside the frog?
dc.title.alternative  Grounding in contextual representations by neural language models
dc.type  Text
dc.type.svep  Doctoral thesis
dc.gup.mail  mehdi.ghanimifard@gu.se
dc.gup.mail  mehdi.ghanimifard@gmail.com
dc.gup.mail  mmehdi.g@gmail.com
dc.type.degree  Doctor of Philosophy
dc.gup.origin  Göteborgs universitet. Humanistiska fakulteten
dc.gup.origin  University of Gothenburg. Faculty of Arts
dc.gup.department  Department of Philosophy, Linguistics and Theory of Science ; Institutionen för filosofi, lingvistik och vetenskapsteori
dc.gup.defenceplace  27 May 2020, at 15:15, Lilla Hörsalen, C350, Humanisten, Renströmsgatan 6. https://gu-se.zoom.us/j/63108152441?pwd=UDV1NytSM1RuNXE4ZWFieHlyOURxQT09
dc.gup.defencedate  2020-05-27
dc.gup.dissdb-fakultet  HF

