Natural Language Processing With Subsymbolic Neural Networks (1997)
Natural language processing appears on the surface to be a strongly symbolic activity. Words are symbols that stand for objects and concepts in the real world, and they are put together into sentences that obey well-specified grammar rules. It is no surprise, then, that for several decades natural language processing research has been dominated by the symbolic approach. Linguists have focused on describing language systems based on versions of the Universal Grammar. Artificial intelligence researchers have built large programs in which linguistic and world knowledge is expressed in symbolic structures, usually in LISP. Relatively little attention has been paid to cognitive effects in language processing. Human language users perform differently from their linguistic competence, that is, from their knowledge of how to communicate correctly using language. Some linguistic structures (such as deep embeddings) are harder to deal with than others. People make mistakes when they speak, yet it is not that hard to understand language that is ungrammatical or cluttered with errors. Linguistic and symbolic artificial intelligence theories have little to say about where such effects come from. Yet if one wants to build machines that communicate naturally with people, it is important to understand and model the cognitive effects in natural language processing. The subsymbolic neural network approach holds much promise for modeling the cognitive foundations of language processing. Instead of symbols, the approach is based on distributed representations that capture statistical regularities in language. Many cognitive effects arise naturally from such representations. In this chapter, the subsymbolic approach is first contrasted with the symbolic approach. Properties of distributed representations are illustrated, and examples of the cognitive effects that arise from them are given. The achievements of the approach, in terms of the subsymbolic systems that have been built so far, are reviewed, and some remaining research issues are outlined.
In Antony Browne, editor, Neural Network Perspectives on Cognition and Adaptive Robotics, pages 120-139. Institute of Physics Publishing, Bristol, UK; Philadelphia, PA, 1997.

Risto Miikkulainen, Faculty, risto [at] cs utexas edu