Representing And Learning Visual Schemas In Neural Networks For Scene Analysis (1994)
Using scene analysis as the task, this research focuses on three fundamental problems in neural network systems: (1) limited processing resources, (2) representing schemas, and (3) learning schemas. The first problem arises because no practical neural network can process all of the visual input simultaneously and efficiently. The solution is to process a small part of the input in parallel and successively shift focus to other parts of the input. This strategy requires that the system maintain structured knowledge for describing and interpreting the gathered information. The system should also learn to represent structured knowledge from examples of objects and scenes. VISOR, the system described in this paper, consists of three main components. The Low-Level Visual Module (simulated using procedural programs) extracts featural and positional information from the visual input. The Schema Module encodes structured knowledge about possible objects, and provides top-down information for the Low-Level Visual Module to focus attention on different parts of the scene. The Response Module learns to associate the schema activation patterns with external responses. It enables the external environment to provide reinforcement feedback for the learning of schematic structures.
In Proceedings of the Workshop on Neural Architectures and Distributed AI: From Schema Assemblages to Neural Networks, pp. 35-40, Los Angeles, 1994. Center for Neural Engineering, University of Southern California.

Wee Kheng Leow (Ph.D. Alumni) — leowwk [at] comp nus edu sg
Risto Miikkulainen (Faculty) — risto [at] cs utexas edu