From Words to Sentences & Back: Characterizing Context-dependent Meaning Representations in the Brain (2017)
Nora Aguirre-Celis, Manuel Valenzuela, and Risto Miikkulainen
Recent Machine Learning systems in vision and language processing have drawn attention to single-word vector spaces, where concepts are represented by a set of basic features or attributes based on textual and perceptual input. However, such representations are still shallow and fall short of symbol grounding. In contrast, Grounded Cognition theories such as CAR (Concept Attribute Representation; Binder et al., 2009) provide an intrinsic analysis of word meaning in terms of sensory, motor, spatial, temporal, affective, and social features, as well as a mapping to corresponding brain networks. Building on this theory, this research aims to understand an intriguing effect of grounding, i.e., how word meaning changes depending on context. CAR representations of words are mapped to fMRI images of subjects reading different sentences, and the contributions of each word are determined through Multiple Linear Regression and the FGREP nonlinear neural network. As a result, the FGREP model in particular identifies significant changes in the CARs for the same word used in different sentences, thus supporting the hypothesis that context adapts the meaning of words in the brain. In future work, such context-modified word vectors could be used as representations for a natural language processing system, making it more effective and robust.
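The regression step described in the abstract can be illustrated with a minimal sketch: treat a sentence-level semantic vector (derived from fMRI) as a weighted combination of its words' CAR vectors, and solve for the per-word weights by least squares. All names, shapes, and data below (including the 66-dimensional attribute vector) are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

N_ATTRS = 66  # assumed dimensionality of a CAR attribute vector (illustrative)

def fit_word_weights(word_cars, sentence_vec):
    """Least-squares estimate of each word's contribution to a sentence.

    Solves  sentence_vec ~ word_cars.T @ weights  for `weights`,
    analogous to the Multiple Linear Regression step in the abstract.
    word_cars: (n_words, N_ATTRS) array of CAR vectors.
    sentence_vec: (N_ATTRS,) sentence-level semantic vector.
    """
    weights, *_ = np.linalg.lstsq(word_cars.T, sentence_vec, rcond=None)
    return weights

# Toy demonstration with synthetic data.
rng = np.random.default_rng(0)
cars = rng.random((3, N_ATTRS))          # CARs for a 3-word sentence
true_w = np.array([0.2, 0.5, 0.3])       # hidden per-word contributions
sent = cars.T @ true_w                   # synthetic "sentence" vector
print(fit_word_weights(cars, sent))      # recovers approx. [0.2, 0.5, 0.3]
```

On real data the recovered weights would vary with sentence context, which is the effect the paper probes; the nonlinear FGREP analysis would replace the linear solve above.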
View: PDF
Citation:
In Proceedings of the 39th Annual Meeting of the Cognitive Science Society, London, UK, July 2017.
Nora E. Aguirre-Celis, Ph.D. Alumni, naguirre [at] cs utexas edu
Risto Miikkulainen, Faculty, risto [at] cs utexas edu