V1 Saliency Hypothesis

Source: Wikipedia, the free encyclopedia.

The V1 Saliency Hypothesis, or V1SH (pronounced ‘vish’), is a theory[1][2] about the primary visual cortex (V1). It proposes that V1 in primates creates a saliency map of the visual field to guide visual attention or gaze shifts exogenously.

Importance

V1SH is the only theory so far not only to endow V1 with a very important cognitive function, but also to have provided multiple non-trivial theoretical predictions that have subsequently been confirmed experimentally.[2][3] According to V1SH, V1 creates a saliency map from retinal inputs to guide visual attention or gaze shifts.[1] Anatomically, V1 is the gate through which retinal visual inputs enter the neocortex, and it is also the largest cortical area devoted to vision. In the 1960s, David Hubel and Torsten Wiesel discovered that V1 neurons are activated by tiny image patches, large enough to depict a small bar[4] but not a discernible face. This work led to a Nobel Prize,[5] and V1 has since been seen as merely serving a back-office function (of image processing) for the subsequent cognitive processing in the brain beyond V1. However, progress in understanding that subsequent processing has been much slower and more difficult than expected (by, e.g., Hubel and Wiesel[6]). By stepping outside the traditional views, V1SH is catalyzing a change of framework[7] to enable fresh progress in understanding vision.

See Primary Visual Cortex for where the primary visual cortex is in the brain relative to the eyes.

V1SH states that V1 transforms the visual inputs into a saliency map of the visual field to guide visual attention or direction of gaze.[2][1] Humans are essentially blind to visual inputs outside their window of attention. Therefore, attention gates visual perception and awareness, and theories of visual attention are cornerstones of theories of visual functions in the brain.

A saliency map is by definition computed from, or caused by, the external visual input rather than by internal factors such as the animal's expectations or goals (e.g., to read a book). Therefore, a saliency map is said to guide attention exogenously rather than endogenously. Accordingly, this saliency map is also called the bottom-up saliency map, guiding reflexive or involuntary shifts of attention. For example, it guides our gaze towards an insect flying in our peripheral visual field while we are reading a book. Note that this saliency map, constructed by a biological or natural brain, is not the same as the sort of saliency map engineered in artificial or computer vision, partly because artificial saliency maps often include attentional guidance factors that are endogenous in nature.

In this (biological) saliency map of the visual field, each visual location has a saliency value. This value is defined as the strength with which this location attracts attention exogenously.[2] So if location A has a higher saliency value than location B, then location A is more likely than location B to attract visual attention or gaze shifts towards it. In V1, each neuron can be activated only by visual inputs in a small region of the visual field. This region is called the receptive field of the neuron, and typically covers no more than the size of a coin held at arm's length.[8] Neighbouring V1 neurons have neighbouring and overlapping receptive fields.[8] Hence, each visual location can simultaneously activate many V1 neurons. According to V1SH, the most activated of these neurons signals the saliency value at this location by its neural activity.[1][2] A V1 neuron's response to visual inputs within its receptive field is also influenced by visual inputs outside the receptive field.[9] Hence the saliency value at each location depends on the visual input context,[1][2] as it should, since saliency depends on context. For example, a vertical bar is salient in an image in which all the other visual items surrounding it are horizontal bars, but this same vertical bar is not salient if these other items are all vertical bars instead.
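The read-out rule just described (the saliency value at a location is the maximum response among the V1 neurons covering that location) can be sketched in a few lines of Python. The firing rates and location labels below are hypothetical, chosen purely for illustration:

```python
# Illustrative sketch of the V1SH read-out rule: at each visual location,
# saliency = the MAXIMUM response among all V1 neurons whose receptive
# fields cover that location, regardless of the neurons' preferred features.

# responses[location] = hypothetical firing rates (spikes/s) of the
# neurons covering that location
responses = {
    "A": [12.0, 30.0, 8.0],   # e.g. orientation-, colour-, motion-tuned cells
    "B": [25.0, 14.0, 10.0],
}

# one maximum response per visual location
saliency = {loc: max(rates) for loc, rates in responses.items()}

# Location A is more salient than B (30 > 25), so under V1SH it is more
# likely to attract gaze or attention first.
most_salient = max(saliency, key=saliency.get)
print(most_salient, saliency)
```

Note that this read-out takes the maximum, not the sum, of the responses at a location; this distinction is what V1SH's experimental predictions rest on.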

Neural mechanisms in V1 to generate the saliency map

Saliency map: represented by the map of maximum V1 neural responses to visual inputs, one maximum response per visual location

The figure above gives a schematic of the neural mechanisms in V1 that generate the saliency map. In this example, the retinal image has many purple bars, all uniformly oriented (right-tilted) except for one bar that is oriented uniquely (left-tilted). This orientation singleton is the most salient item in the image, so it attracts attention or gaze, as observed in psychological experiments.[10] In V1, many neurons have a preferred orientation for visual inputs.[8] For example, a neuron's response to a bar in its receptive field is higher when this bar is at its preferred orientation. Analogously, many V1 neurons have preferred colours.[8] In this schematic, each input bar to the retina activates two (groups of) V1 neurons, one preferring its orientation and the other preferring its colour. The responses from neurons activated by their preferred orientations in their receptive fields are visualized in the schematic by the black dots in the plane representing the V1 neural responses. Similarly, responses from neurons activated by their preferred colours in their receptive fields are visualized by the purple dots. The sizes of the dots visualize the strengths of the V1 neural responses. In this example, the largest response comes from the neurons preferring and responding to the uniquely oriented bar. This is because of iso-orientation suppression: when two V1 neurons are near each other and have the same or similar preferred orientations, they tend to suppress each other's activities.[9][11] Therefore, among the group of neurons that prefer and respond to the uniformly oriented background bars, each neuron receives iso-orientation suppression from the other neurons of this group.[1][9] Meanwhile, the neuron responding to the orientation singleton does not belong to this group and thus escapes this suppression,[1] hence its response is higher than the other neural responses.
Iso-colour suppression[12] is analogous to iso-orientation suppression, so all neurons preferring and responding to the purple colour of the input bars are under iso-colour suppression. According to V1SH, the maximum response at each bar's location represents the saliency value at that location.[1][2] This saliency value is thus highest at the location of the orientation singleton, and is represented by the response from the neurons preferring and responding to the orientation of this singleton. These saliency values are sent to the superior colliculus,[13] a midbrain area, to execute gaze shifts towards the receptive field location of the most activated V1 neuron.[13] Hence, for the input image in the figure above, the orientation singleton, which evokes the highest V1 response, attracts visual attention or gaze.
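A minimal numerical sketch of this mechanism (iso-orientation suppression plus the max rule) follows. The base response, the suppression strength, and the assumption that suppression grows linearly with the number of iso-orientation peers are all illustrative choices, not values from the source:

```python
# Hypothetical sketch: iso-orientation suppression makes an orientation
# singleton evoke the highest V1 response, hence the highest saliency.
import numpy as np

# 5x5 array of bars: all right-tilted (45 deg) except one left-tilted (135 deg)
orientations = np.full((5, 5), 45)
orientations[2, 2] = 135

base_response = 10.0        # response before contextual suppression (arbitrary units)
suppression_per_peer = 0.3  # assumed suppression from each iso-orientation peer

responses = np.empty_like(orientations, dtype=float)
for idx, ori in np.ndenumerate(orientations):
    # peers = other neurons preferring (and responding to) the same orientation
    n_peers = np.sum(orientations == ori) - 1
    responses[idx] = base_response - suppression_per_peer * n_peers

# The singleton has no iso-orientation peers, so it escapes suppression and
# yields the maximum response, i.e. the highest saliency on the map.
singleton_loc = np.unravel_index(np.argmax(responses), responses.shape)
print(singleton_loc, responses[2, 2], responses[0, 0])
```

With these assumed numbers, the singleton's neuron keeps its full response (10.0) while each background bar's neuron is suppressed by its 23 iso-orientation peers, so the max-response map peaks at the singleton's location.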

V1SH explains behavioral data on visual search/segmentation

V1SH can explain data on visual search, such as the short response times to find a uniquely red item among green items, a uniquely vertical bar among horizontal bars, or an item uniquely moving to the right among items moving to the left. These kinds of visual searches are called feature searches: the search target is unique in a basic feature value such as orientation, colour, or motion direction.[10][14] The short search response time reflects the high saliency value at the location of the search target, which attracts attention. V1SH also explains why it takes longer to find a unique red-vertical bar among red-horizontal bars and green-vertical bars. This is an example of a conjunction search, in which the search target is unique only in its conjunction of two features, each of which is present elsewhere in the visual scene.[10]
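Under the same kind of illustrative suppression model (linear suppression per iso-feature peer; all numbers hypothetical), the difference between feature search and conjunction search falls out of the max rule:

```python
# Hypothetical sketch: feature vs conjunction search under V1SH.

def response(n_iso_peers, base=10.0, k=0.2):
    # Assumed linear iso-feature suppression; illustrative only.
    return base - k * n_iso_peers

def item_saliency(peer_counts_per_feature):
    # V1SH: an item's saliency is the MAX response over the neurons
    # tuned to each of its features.
    return max(response(n) for n in peer_counts_per_feature)

# Feature search: one red target among 40 green distractors.
feature_target = item_saliency([0])        # no other red item: no suppression
feature_distractor = item_saliency([39])   # each green item has 39 green peers
# Target pops out -> fast search.
assert feature_target > feature_distractor

# Conjunction search: one red-vertical target among 20 red-horizontal
# and 20 green-vertical distractors.
conj_target = item_saliency([20, 20])      # 20 red peers AND 20 vertical peers
conj_distractor = item_saliency([20, 19])  # e.g. a red-horizontal distractor
# Neither of the target's features escapes suppression, so the target is
# no more salient than the distractors -> slow search.
assert not conj_target > conj_distractor
```

The key point is that both of the conjunction target's features are shared by many distractors, so both of its neural responses are suppressed, and the location-wise maximum gives it no advantage.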

Masking of a salient border between two textures by adding a uniform texture

Furthermore, V1SH explains data that are difficult to explain by alternative frameworks.[10][15] The figure above illustrates an example: the two neighboring textures in A, one made of uniformly left-tilted bars and the other of uniformly right-tilted bars, are easily segmented from each other by human vision. This is because the texture bars at the border between the two textures evoke the highest V1 neural responses (since they are least suppressed by iso-orientation suppression); therefore, the border bars are the most salient in the image and attract attention to the border. However, the segmentation becomes much more difficult if the texture in B is superposed on the original image in A (the result is depicted in C). This is because, at non-border texture locations, the V1 neural responses to the horizontal and vertical bars (from B) are higher than those to the oblique bars (from A); these higher responses dictate and raise the saliency values at the non-border locations, making the border no longer as competitive for saliency.[16]
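This masking effect can be sketched with the same illustrative suppression model. A one-dimensional ring of bars stands in for the texture rows (the ring avoids edge artifacts but introduces a second border at the wrap-around), and all parameter values are hypothetical:

```python
# Hypothetical 1-D sketch of texture-border saliency and its masking.
import numpy as np

n = 10
base = 10.0   # response before suppression (arbitrary units)
k = 1.0       # assumed suppression per iso-orientation neighbour
radius = 2    # neighbourhood over which iso-orientation suppression acts

def responses(orients):
    # Response of the neuron preferring each bar's orientation, reduced by
    # iso-orientation suppression from nearby same-orientation bars
    # (ring layout; illustrative only).
    out = np.empty(n)
    for i in range(n):
        peers = sum(
            orients[(i + d) % n] == orients[i]
            for d in range(-radius, radius + 1) if d != 0
        )
        out[i] = base - k * peers
    return out

texture_A = np.array([45] * 5 + [135] * 5)  # two texture regions, border at 4|5
texture_B = np.array([0, 90] * 5)           # superposed uniform check pattern

resp_A = responses(texture_A)
resp_B = responses(texture_B)

saliency_plain = resp_A                       # image A alone
saliency_masked = np.maximum(resp_A, resp_B)  # A + B: max response per location

# Alone, the border bars (least suppressed) stand out...
print(saliency_plain)   # border columns (4, 5, and the wrap-around) are highest
# ...but with B superposed, B's uniformly suppressed responses set the max at
# non-border locations, flattening the saliency map and masking the border.
print(saliency_masked)
```

In this toy version, the border bars of texture A have fewer iso-orientation neighbours and so respond more strongly than interior bars; superposing B raises every non-border location to the same level, exactly the loss of border salience the figure describes.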

Impact

Gaze capture by an ocular singleton

V1SH was proposed in the late 1990s[17][18] by Li Zhaoping. It was initially uninfluential, since for decades it had been believed that attentional guidance is essentially, or only, controlled by higher-level brain areas. These higher-level areas include the frontal eye field and parietal cortical areas[19] in the frontal and more anterior parts of the brain, and they are believed to provide intelligent attentional and executive control. In addition, the primary visual cortex, V1, located in the occipital lobe at the back (posterior part) of the brain, has traditionally been thought of as a low-level visual area that plays mainly a supporting role for the more important visual functions of other brain areas.[8] Opinions started to change with a surprising piece of behavioral data: an item uniquely shown to one eye (an ocular singleton) among similarly appearing items shown to the other eye (using, e.g., a pair of glasses for watching 3D movies) can attract gaze or attention automatically.[20][21] An example is illustrated in this figure. Here, an image containing a single letter 'X' is shown to the right eye, and another image containing an array of the same 'X's and a letter 'O' is shown to the left eye. In such a situation, human observers normally perceive an image resembling a superposition of the two monocular images, so that they see an array of all the 'X's and the single 'O'. The 'X' arising from the right-eye image does not appear distinctive. Nevertheless, even when observers are doing a task to search (in their perceived image) for the unique and perceptually distinctive 'O' as quickly as possible, their gaze automatically or involuntarily shifts to the 'X' arising from the right-eye image, often before their gaze shifts to the 'O'.
Attention capture by such an ocular singleton occurs even when observers cannot guess whether the singleton is present (if it were absent in this example figure, all the 'X's and the single 'O' would be shown to the left eye only).[20] This observation was counter-intuitive,[22] was easily reproduced by other vision researchers, and was uniquely predicted by V1SH. Since V1 is the only visual cortical area with neurons tuned to the eye of origin of visual inputs,[4] this observation strongly supports V1's role in guiding attention.

More experiments followed to further investigate V1SH,[2] and supporting data emerged from functional brain imaging,[23] visual psychophysics,[24][25] and monkey electrophysiology[3][26][27][28] (although see some conflicting data[29]). V1SH has since become more popular.[30][31] V1 is now seen as one of the cornerstones in the brain's network of attentional mechanisms,[32][33] and its functional role in guiding visual attention is appearing in handbooks[34][35] and textbooks.[36][37] Zhaoping argues that if V1SH is correct, the ideas[38][39] about how the visual system works, and consequently the questions to ask in future vision research, should fundamentally change.[7]

References

  1. S2CID 13411369.
  2. .
  3. .
  4. .
  5. "The Nobel Prize in Physiology or Medicine 1981". NobelPrize.org. Retrieved 2021-10-10.
  6. S2CID 12766897.
  7. .
  8. "The Primary Visual Cortex by Matthew Schmolesky – Webvision". Retrieved 2020-07-05.
  9. PMID 1588394.
  10. .
  11. .
  12. .
  13. Retrieved 2020-07-05.
  14. Wolfe, Jeremy. "Visual Search". psycnet.apa.org. Retrieved 2020-07-11.
  15. S2CID 2329233.
  16. .
  17. .
  18. Li, Zhaoping (1998). "Primary cortical dynamics for visual grouping", in Theoretical Aspects of Neural Computation, eds. K.M. Wong, I. King, and D.Y. Yeung. Springer-Verlag. pp. 155–164.
  19. PMID 7605061.
  20. .
  21. .
  22. Zhaoping, Li (2014-08-21). "Are we too "smart" to understand how we see?". OUPblog. Retrieved 2020-07-11.
  23. S2CID 9767861.
  24. .
  25. .
  26. .
  27. .
  28. .
  29. .
  30. "Visual Perception meets Computational Neuroscience | www.ecvp.uni-bremen.de". www.ecvp.uni-bremen.de. Retrieved 2020-07-11.
  31. "CNS 2020". www.cnsorg.org. Retrieved 2021-06-27.
  32. PMID 20192813.
  33. .
  34. .
  35. Retrieved 2021-06-24.
  36. "Visual Attention and Consciousness". Routledge & CRC Press. Retrieved 2021-06-24.
  37. .
  38. "Webvision – The Organization of the Retina and Visual System". Retrieved 2020-07-05.
  39. Retrieved 2020-07-05.