Next: 3 Experiments Up: Tilt Aftereffects in a Previous: 1 Introduction

2 Architecture


 
Figure 2:  Architecture of the RF-LISSOM network.
A small RF-LISSOM network and retina are shown, along with connections to a single neuron (shown as a large circle). The input is an oriented Gaussian activity pattern on the retinal ganglion cells (shown by grayscale coding); the LGN is bypassed for simplicity. The afferent connections form a local anatomical receptive field (RF) on the simulated retina. Neighboring neurons have different but highly overlapping RFs. Each neuron computes an initial response as a scalar product of its receptive field and its afferent weight vector. The responses then repeatedly propagate within the cortex through the lateral connections and evolve into activity ``bubbles''. After the activity stabilizes, weights of the active neurons are adapted.
[Figure 2 graphic: ps/rf-lissom-architecture-bw.ps]

The cortical architecture for the model has been simplified and reduced to the minimum configuration necessary to account for the observed phenomena. Because the focus is on the two-dimensional organization of the cortex, each ``neuron'' in the model cortex corresponds to a vertical column of cells through the six layers of the primate cortex. The cortical network is modeled with a sheet of interconnected neurons and the retina with a sheet of retinal ganglion cells (figure 2). Neurons receive afferent connections from broad, overlapping circular patches on the retina. The N × N network is projected onto a central region of the R × R sheet of retinal ganglion cells, and each neuron is connected to the ganglion cells in an area of radius r around its projection. Thus, neurons at a particular cortical location receive afferents from the corresponding location on the retina. Since the LGN accurately reproduces the receptive fields of the retina, it has been bypassed for simplicity.

Each neuron also has reciprocal excitatory and inhibitory lateral connections with itself and other neurons. Lateral excitatory connections are short-range, connecting each neuron with itself and its close neighbors. Lateral inhibitory connections run for comparatively long distances, but also include connections to the neuron itself and to its neighbors.[*]
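The connectivity geometry described above can be sketched in a few lines of NumPy. The helper names below (`rf_center`, `afferent_mask`) are illustrative assumptions, not part of the model's published code; only the quantities N, R, and r come from the text, and the sample values are arbitrary.

```python
import numpy as np

N, R, r = 4, 8, 2.0          # cortex size, retina size, afferent RF radius (sample values)

def rf_center(i, j, N=N, R=R, r=r):
    """Project cortical neuron (i, j) onto the central region of the retina."""
    # Map [0, N-1] linearly into the retina, leaving a margin of r on each side
    scale = (R - 1 - 2 * r) / max(N - 1, 1)
    return r + i * scale, r + j * scale

def afferent_mask(i, j):
    """Boolean retina mask: ganglion cells within radius r of the projection."""
    cy, cx = rf_center(i, j)
    ys, xs = np.mgrid[0:R, 0:R]
    return (ys - cy) ** 2 + (xs - cx) ** 2 <= r ** 2
```

Because the projection scale is much smaller than r, the masks of neighboring neurons overlap heavily, giving the ``different but highly overlapping RFs'' described in the figure caption.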

The input to the model consists of 2-D ellipsoidal Gaussian patterns representing retinal ganglion cell activations (as in figures 2 and 3a). For training, the orientations of the Gaussians are chosen uniformly at random from the full range $[0,\pi)$, and the positions are chosen randomly within the retina. The elongated spots approximate natural visual stimuli after the edge detection and enhancement mechanisms in the retina. They can also be seen as a model of the intrinsic retinal activity waves that occur in late prenatal development in mammals (Bednar and Miikkulainen, 1998; Shatz, 1990). The RF-LISSOM network models the self-organization of the visual cortex based on these natural sources of elongated features.
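A minimal sketch of such an input pattern follows; the major/minor widths `sigma_a` and `sigma_b` are illustrative assumptions (the text does not specify them), while the uniform sampling of position and orientation matches the description above.

```python
import numpy as np

def oriented_gaussian(R, cx, cy, theta, sigma_a=3.0, sigma_b=1.0):
    """Elongated 2-D Gaussian activity pattern on an R x R retina.

    theta is the orientation in [0, pi); sigma_a/sigma_b are the
    major/minor axis widths (illustrative values).
    """
    ys, xs = np.mgrid[0:R, 0:R].astype(float)
    # Rotate retinal coordinates into the Gaussian's principal axes
    u = (xs - cx) * np.cos(theta) + (ys - cy) * np.sin(theta)
    v = -(xs - cx) * np.sin(theta) + (ys - cy) * np.cos(theta)
    return np.exp(-(u / sigma_a) ** 2 - (v / sigma_b) ** 2)

rng = np.random.default_rng(0)
R = 24
pattern = oriented_gaussian(R, cx=rng.uniform(0, R), cy=rng.uniform(0, R),
                            theta=rng.uniform(0, np.pi))
```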

The afferent weights are initially set to random values, and the lateral weights are preset to a smooth Gaussian profile. The connections are organized through an unsupervised learning process. At each training step, neurons start out with zero activity. The initial response $\eta_{ij}$ of neuron (i,j) is calculated as a weighted sum of the retinal activations:

 \begin{displaymath}\eta_{ij} = \sigma \left( \sum_{a,b} \xi_{ab} \mu_{ij,ab} \right),
\end{displaymath}

where $\xi_{ab}$ is the activation of retinal ganglion cell (a,b) within the anatomical receptive field (RF) of the neuron, $\mu_{ij,ab}$ is the corresponding afferent weight, and $\sigma$ is a piecewise linear approximation of the sigmoid activation function. The response evolves over a very short time scale through lateral interaction. At each time step, the neuron combines the above afferent activation $\sum \xi \mu$ with lateral excitation and inhibition:

 \begin{displaymath}\eta_{ij}(t) = \sigma \left( \sum \xi \mu +
\gamma_e \sum_{k,l} E_{ij,kl} \eta_{kl}(t-1) -
\gamma_i \sum_{k,l} I_{ij,kl} \eta_{kl}(t-1) \right) ,
\end{displaymath}

where $E_{ij,kl}$ is the excitatory lateral connection weight on the connection from neuron (k,l) to neuron (i,j), $I_{ij,kl}$ is the inhibitory connection weight, and $\eta_{kl}(t-1)$ is the activity of neuron (k,l) during the previous time step. The scaling factors $\gamma_e$ and $\gamma_i$ determine the relative strengths of excitatory and inhibitory lateral interactions.
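The settling process of the two equations above can be sketched as follows. The cortex is flattened to a vector for brevity, the weight matrices are random placeholders rather than organized weights, and the sigmoid thresholds and scaling factors are illustrative assumptions.

```python
import numpy as np

def sigmoid_pl(x, lo=0.1, hi=0.65):
    """Piecewise linear approximation of the sigmoid: 0 below lo, 1 above hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

rng = np.random.default_rng(0)
n = 16                                  # cortical neurons, flattened to a vector
afferent = rng.random(n)                # sum(xi * mu) per neuron, held fixed
E = rng.random((n, n)) * (rng.random((n, n)) < 0.2)   # sparse short-range excitation
I = rng.random((n, n))                  # dense long-range inhibition
gamma_e, gamma_i = 0.9, 0.9             # illustrative scaling factors

eta = sigmoid_pl(afferent)              # initial response (first equation)
for _ in range(10):                     # settle via lateral interaction (second equation)
    eta = sigmoid_pl(afferent + gamma_e * E @ eta - gamma_i * I @ eta)
```

Each pass of the loop recomputes every neuron's activity from the fixed afferent drive plus the previous step's lateral excitation and inhibition, which is exactly the recurrence the second equation describes.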

While the cortical response is settling, the retinal activity remains constant. The cortical activity pattern starts out diffuse and spread over a substantial part of the map (as in figure 3c), but within a few iterations of equation 2, converges into a small number of stable focused patches of activity, or activity bubbles (figure 3d). After the activity has settled, the connection weights of each neuron are modified. Both afferent and lateral weights adapt according to the same mechanism: the Hebb rule, normalized so that the sum of the weights is constant:

 \begin{displaymath}w_{ij,mn}(t+\delta t)=\frac{ w_{ij,mn}(t) + \alpha \eta_{ij} X_{mn} }
{\sum_{mn} \left[ w_{ij,mn}(t) + \alpha \eta_{ij} X_{mn} \right]},
\end{displaymath}

where $\eta_{ij}$ stands for the activity of neuron (i,j) in the final activity bubble, $w_{ij,mn}$ is the afferent or lateral connection weight ($\mu$, E or I), $\alpha$ is the learning rate for each type of connection ($\alpha_A$ for afferent weights, $\alpha_E$ for excitatory, and $\alpha_I$ for inhibitory), and $X_{mn}$ is the presynaptic activity ($\xi$ for afferent, $\eta$ for lateral). The larger the product of the pre- and post-synaptic activity $\eta_{ij} X_{mn}$, the larger the weight change. Therefore, when the pre- and post-synaptic neurons fire together frequently, the connection becomes stronger. Both excitatory and inhibitory connections strengthen by correlated activity; normalization then redistributes the changes so that the sum of each weight type for each neuron remains constant.
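The normalized Hebb rule above reduces to a one-line update per neuron; a minimal sketch follows, with arbitrary sample activities and learning rate.

```python
import numpy as np

def hebb_update(w, eta_post, X_pre, alpha):
    """Normalized Hebb rule: raise weights by correlated pre/post activity,
    then renormalize so the weight sum of this neuron stays constant (= 1 here).

    w: one neuron's weight vector (afferent or lateral);
    eta_post: that neuron's settled activity; X_pre: presynaptic activities.
    """
    raised = w + alpha * eta_post * X_pre
    return raised / raised.sum()

rng = np.random.default_rng(0)
w = rng.random(10); w /= w.sum()        # initial weights, normalized to sum 1
X = rng.random(10)                      # presynaptic activities (sample values)
w_new = hebb_update(w, eta_post=0.8, X_pre=X, alpha=0.1)
```

Note how the division implements the redistribution described above: connections with strongly correlated activity gain weight at the expense of the others, while the total stays fixed.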

At long distances, very few neurons have correlated activity, and therefore most long-range connections eventually become weak. The weak connections are eliminated periodically, resulting in patchy lateral connectivity similar to that observed in the visual cortex. The radius of the lateral excitatory interactions starts out large, but as self-organization progresses, it is decreased until it covers only the nearest neighbors. Such a decrease is one way of varying the balance between excitation and inhibition to ensure that global topographic order develops while the receptive fields become well-tuned at the same time (Miikkulainen et al., 1997; Sirosh, 1995). Similar effects can be achieved by changing the scaling factors $\gamma_e$ and $\gamma_i$ over time, or by using connections whose sign depends upon the activation level of the neuron (as in Stemmler et al., 1995).
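The periodic pruning step can be sketched as below. The threshold value is an assumption (the text does not give one), and renormalizing the survivors to preserve the total weight follows the normalization convention of the Hebb rule above.

```python
import numpy as np

def prune_weak(w, threshold=0.001):
    """Eliminate lateral connections whose weight fell below threshold
    (assumed cutoff), then renormalize the survivors so the neuron's
    total lateral weight is unchanged."""
    total = w.sum()
    w = np.where(w < threshold, 0.0, w)
    if w.sum() > 0:
        w *= total / w.sum()
    return w
```

Applied repeatedly during training, this removes the long-range connections whose weights decayed for lack of correlated activity, leaving the patchy connectivity described above.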



Footnotes

...neighbors.
For high-contrast inputs, long-range interactions must be inhibitory for proper self-organization to occur (Sirosh, 1995). Recent optical imaging and electrophysiological studies have indeed shown that long-range interactions in the cortex are inhibitory at high contrasts, even though individual lateral connections are primarily excitatory (Grinvald et al., 1994; Hata et al., 1993; Hirsch and Gilbert, 1991; Weliky et al., 1995). The model uses explicit inhibitory connections for simplicity since all inputs used are high-contrast, and since it is the high-contrast inputs that primarily drive adaptation in the Hebbian model (Bednar, 1997).


James A. Bednar
8/2/1999