Normalization model

Source: Wikipedia, the free encyclopedia.

The normalization model is a model of the responses of neurons in primary visual cortex. David Heeger developed the model in the early 1990s,[2] and later refined it together with Matteo Carandini and J. Anthony Movshon.[3] The model involves a divisive stage: the numerator is the output of the classical receptive field, and the denominator is a constant plus a measure of local stimulus contrast. Although the normalization model was initially developed to explain responses in the primary visual cortex, normalization is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions, including the representation of odors in the olfactory bulb,[4] the modulatory effects of visual attention, the encoding of value, and the integration of multisensory information. It has also been observed at subthreshold potentials in the hippocampus.[5] Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that normalization serves as a canonical neural computation.[1] Divisive normalization reduces the redundancy in natural stimulus statistics[6] and is sometimes viewed as an implementation of the efficient coding principle. Formally, divisive normalization is an information-maximizing code for stimuli following a multivariate Pareto distribution.[7]
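The divisive stage described above can be sketched numerically. The following is a minimal illustration, not the exact formulation from Heeger's papers: it assumes a common textbook form in which each neuron's driving input is raised to an exponent and divided by a semi-saturation constant plus the pooled activity of the local population (serving as the measure of local stimulus contrast). The parameter names (`sigma`, `n`, `gamma`) are illustrative choices.

```python
import numpy as np

def divisive_normalization(drives, sigma=0.1, n=2.0, gamma=1.0):
    """Sketch of divisive normalization.

    drives : driving inputs (classical receptive field outputs) of a
             local population of neurons.
    sigma  : semi-saturation constant (the constant in the denominator).
    n      : exponent applied to the drives.
    gamma  : overall response gain.

    Each neuron's exponentiated drive (numerator) is divided by the
    constant plus the summed exponentiated drives of the whole pool
    (denominator), so responses saturate as overall contrast grows.
    """
    drives = np.asarray(drives, dtype=float)
    pooled = np.sum(drives ** n)  # proxy for local stimulus contrast
    return gamma * drives ** n / (sigma ** n + pooled)

# Doubling all inputs (higher contrast) barely changes the normalized
# pattern once the pooled term dominates the constant: responses encode
# relative, not absolute, drive.
low = divisive_normalization([1.0, 2.0])
high = divisive_normalization([2.0, 4.0])
```

Because the denominator pools over the population, a strong stimulus anywhere in the pool suppresses every neuron's response, which is how the model accounts for phenomena such as cross-orientation suppression and contrast saturation.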

References

  1. .
  2. .
  3. .
  4. PMID 20435004.
  5. PMID 31021319.
  6. .
  7. .