Understanding the Semantic Space: How Word Meanings Dynamically Adapt in the Context of a Sentence (2021)
How do people understand the meaning of the word small when it is used to describe a mosquito, a church, or a planet? While humans have a remarkable ability to form new meanings by combining existing concepts, modeling this process is challenging. This paper addresses that challenge through the CEREBRA (Context-dEpendent meaning REpresentations in the BRAin) neural network model. CEREBRA characterizes how word meanings dynamically adapt in the context of a sentence by decomposing sentence fMRI into words, and words into embodied, brain-based semantic features. It demonstrates that words have different representations in different contexts, and that the resulting changes in word meaning are meaningful to human subjects. CEREBRA’s context-based representations could potentially be used to make NLP applications more human-like.
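To make the decomposition idea concrete, below is a minimal sketch of the general approach, not the authors’ implementation: a sentence’s fMRI pattern is modeled from the average of its words’ semantic feature vectors, and each word’s features are adjusted until the reconstruction matches the observed sentence fMRI, with the per-word change read out as the effect of sentence context. The synthetic data, the dimensions, and the fixed linear feature-to-fMRI mapping are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: adapt word-level semantic feature vectors so that
# their average reconstructs an observed sentence fMRI pattern, then read off
# the per-word change as the context effect. Data and mapping are synthetic.

rng = np.random.default_rng(0)
n_features = 66        # brain-based semantic features per word (Binder-style)
n_voxels = 200         # fMRI dimensionality (assumed for illustration)
n_words = 3            # words in the sentence

W = rng.normal(scale=0.1, size=(n_features, n_voxels))  # assumed linear feature->fMRI map
word_cars = rng.uniform(size=(n_words, n_features))     # context-free word features
sentence_fmri = rng.normal(size=n_voxels)               # observed sentence pattern (toy)

adapted = word_cars.copy()
lr = 0.05
for _ in range(500):
    pred = adapted.mean(axis=0) @ W                  # sentence fMRI predicted from word average
    err = pred - sentence_fmri                       # reconstruction error
    grad = np.tile(err @ W.T / n_words, (n_words, 1))  # gradient w.r.t. each word's features
    adapted -= lr * grad
    adapted = np.clip(adapted, 0.0, 1.0)             # keep features in a valid range

context_effect = adapted - word_cars                 # per-word change attributed to context
print("reconstruction error:", np.linalg.norm(adapted.mean(axis=0) @ W - sentence_fmri))
print("most-shifted feature per word:", np.argmax(np.abs(context_effect), axis=1))
```

In this toy setup the context effect is whatever shift minimizes the sentence-level reconstruction error; the paper’s contribution is showing that, with real fMRI and embodied semantic features, such shifts are systematic and interpretable to human subjects.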
View: PDF
Citation:
In Proceedings of the Workshop on Semantic Spaces at the Intersection of NLP, Physics, and Cognitive Science, Groningen, Netherlands, June 2021.
Nora E. Aguirre-Celis, Ph.D. Alumni, naguirre [at] cs utexas edu
Risto Miikkulainen, Faculty, risto [at] cs utexas edu