PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, NEUROSCIENCE (oxfordre.com/neuroscience). (c) Oxford University Press USA, 2019. All Rights Reserved. Personal use only; commercial use is strictly prohibited (for details see Privacy Policy and Legal Notice).

date: 19 October 2019

Stereopsis and Depth Perception

Summary and Keywords

Humans and some animals can use their two eyes in cooperation to detect and discriminate parts of the visual scene based on depth. Owing to the horizontal separation of the eyes, each eye obtains a slightly different view of the scene in front of the head. These small differences are processed by the nervous system to generate a sense of binocular depth. As humans, we experience an impression of solidity that is fully three-dimensional; this impression is called stereopsis and is what we appreciate when we watch a 3D movie or look into a stereoscopic viewer. While the basic perceptual phenomena of stereoscopic vision have been known for some time, it is mainly within the last 50 years that we have gained an understanding of how the nervous system delivers this sense of depth. This period of research began with the identification of neuronal signals for binocular depth in the primary visual cortex. Building on that finding, subsequent work has traced the signaling pathways for binocular stereoscopic depth forward into extrastriate cortex and further on into cortical areas concerned with sensorimotor integration. Within these pathways, neurons acquire sensitivity to more complex, higher order aspects of stereoscopic depth. Signals relating to the relative depth of visual features can be identified in the extrastriate cortex, which is a form of selectivity not found in the primary visual cortex. Over the same time period, knowledge of the organization of binocular vision in animals that inhabit a wide diversity of ecological niches has substantially increased. The implications of these findings for developmental and adult plasticity of the visual nervous system and onset of the clinical condition of amblyopia are explored in this article. Amblyopic vision is associated with a cluster of different visual and oculomotor symptoms, but the loss of high-quality stereoscopic depth performance is one of the consistent clinical features.
Understanding where and how those losses occur in the visual brain is an important goal of current research, for both scientific and clinical reasons.

Keywords: binocular vision, depth perception, stereopsis, primary visual cortex, extrastriate cortex, sensorimotor fusion, amblyopia

Fundamentals

Humans can use the two eyes in coordination to achieve perception of stereoscopic depth. Geometrically, this depends on the separation of the eyes in the head, which allows each eye to have a different view of the same objects in the visual scene in front of the person. The central nervous system is able to exploit the information available from comparing these two views to deliver a sense of depth and solid shape, a process that is called stereopsis (derived from Greek, meaning “seeing solid shape”).

The fact that each eye has a different view of a three-dimensional object in front of the person’s eyes has been known since the early days of scholarship. However, it was Wheatstone’s invention of the stereoscope in the 19th century that made it clear that images taken from these two viewpoints can be combined to give a sense of depth. The small differences in the exact positions of contours in the left and right eyes’ images are decoded by the brain, as if these differences had been generated by objects located at different depths from the head. Figure 1 shows a simple diagram of the two eyes regarded from a viewpoint above the person’s head as they look at two objects O and N located at different distances from the head. The angular differences between the projections of O and N at the left and right eyes are called binocular disparities.

Figure 1. Diagram showing how binocular disparities arise in viewing objects at different depths.

(A) The eyes are looking at point O. The angle between the line of fixation to O and the projection of point N is different as viewed with the left and right eyes, angles a and b respectively. N projects to a point in the right eye that is further from the fovea than its projection in the left eye. N therefore has a disparity that is the difference (b–a) between angle b and angle a, whereas O has a zero disparity.

(B) The eyes have shifted to a new fixation point. Now, both O and N have non-zero disparities. However, the relationship between O and N has not changed: angles a and b still describe the angular separation of O and N. It is useful to describe the relationship between O and N independently from where the eyes are directed for fixation. The quantity (b–a) is then termed the relative disparity between O and N.
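The geometry of Figure 1 can be made concrete with a short numerical sketch. The eye positions, the locations of O and N, and the alternative fixation point F are assumed values chosen only for illustration; the key facts are that the absolute disparity of N is the angular difference (b−a), and that the relative disparity between O and N is unchanged when fixation moves, because the fixation terms cancel.

```python
import math

def angle_between(eye, fixation, target):
    """Angle at `eye` between the directions to `fixation` and `target` (radians)."""
    def bearing(p):
        return math.atan2(p[0] - eye[0], p[1] - eye[1])
    return bearing(target) - bearing(fixation)

# Assumed top-down geometry in metres: eyes 6.5 cm apart, point O fixated
# at 1 m straight ahead, point N nearer at 0.8 m (compare Figure 1A).
left_eye, right_eye = (-0.0325, 0.0), (0.0325, 0.0)
O, N = (0.0, 1.0), (0.0, 0.8)

a = angle_between(left_eye, O, N)    # angle a in Figure 1
b = angle_between(right_eye, O, N)   # angle b in Figure 1
disparity_N = b - a                  # absolute disparity of N; fixated O has zero disparity

# Figure 1B: shift fixation to another point F. Both O and N now carry
# non-zero absolute disparities, but their difference (the relative
# disparity) is unchanged, because the terms involving F cancel.
F = (0.1, 0.9)
disp_O = angle_between(right_eye, F, O) - angle_between(left_eye, F, O)
disp_N = angle_between(right_eye, F, N) - angle_between(left_eye, F, N)
relative_disparity = disp_N - disp_O   # equals disparity_N computed above
```

The invariance of the relative disparity under changes of fixation is exactly the property exploited in the text that follows: it describes the depth relationship between O and N independently of where the eyes are pointed.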

Discoveries of the last 50 years have identified some of the neural mechanisms underlying this capability. This has required independent visual stimulation of the left and right eyes combined with the electrophysiological recording of single neurons in the primary visual cortex (also called area V1). Initially, in the anesthetized cat, and subsequently in awake monkeys trained to hold steady fixation on a target, it was shown that single neurons are selectively activated by particular binocular disparities (Barlow, Blakemore, & Pettigrew, 1967; Fischer, Poggio, & Lennie, 1979; Nikara, Bishop, & Pettigrew, 1968; Poggio, Gonzalez, & Krause, 1988; Poggio & Talbot, 1981). This corresponds to the neurons being selective for the presence of objects at different depths from the head.

Initial studies did not distinguish between the viewing conditions illustrated in Figure 1A and 1B. Subsequently, it has been shown that neurons in the extrastriate visual cortex are sensitive to the relative disparity between two visible contours, as in Figure 1B (Krug & Parker, 2011; Thomas, Cumming, & Parker, 2002), although this is not found in the primary visual cortex (Cumming & Parker, 1999).

This work provides a firm neural basis for the understanding of stereopsis and binocular depth perception. An important modern question is how the neural organization for binocular stereopsis relates to the 30 or more areas of the cerebral cortex that are now known to have direct involvement with vision. One classic view, based on ideas about localization of function, is that some of these cortical areas (or specific compartments within those areas) are specialized for binocular stereopsis and others are not (Hubel & Livingstone, 1987). The alternative view is that most cortical signaling after area V1 is binocular in some way or other and therefore there is little specialization.

Recent work has provided alternatives to this direct conflict. Firstly, in many of the areas beyond V1, the binocular signals conveyed by the neurons are intermingled with other signals about the visual scene. Motion (Nadler, Angelaki, & DeAngelis, 2008), shading (Taira, Nose, Inoue, & Tsutsui, 2001), and contrast envelope (Tanaka & Ohzawa, 2006) have all been explored. Secondly, some of the neurons in areas beyond V1 are sensitive to different aspects of stereo depth, not just absolute and relative disparities. There are neurons that signal the slope of a plane with respect to the observer’s vision (Sanada, Nguyenkim, & DeAngelis, 2012) and other higher-order neurons that respond to 3D shape defined by binocular depth (Janssen, Vogels, Liu, & Orban, 2001).

One reconciliation of the debate on localization of function is to propose that different visual areas exploit binocular depth information for different purposes (Parker, 2009, 2014). Thus, whether a particular cortical area appears to show a predominance for processing of binocular depth will depend a great deal on exactly how the stimulus and task have been arranged. This conclusion is also consistent with the general principle within neural processing systems that if there is useful information available to perform some specific functions, then that information will be exploited in terms of neural organization and connectivity.

The Challenge of Frontal Vision

The eyes of vertebrates are structured optically with a single image-forming system, comprising the cornea and lens, which act together to form an image that falls upon an array of photoreceptor cells. With this optical structure, a single eye is limited to sampling a field of view whose angular extent is at most 180 degrees and often less. In this regard, having both eyes pointing forward in the same direction is a considerable disadvantage. This structural arrangement leaves the animal with a large part of visual space that is not covered by either of the eyes.

Yet we know that having some degree of overlap between the visual fields of left and right eyes is absolutely required for the acquisition of depth by binocular vision and the vivid impression of stereopsis, which we take here to be the fully three-dimensional sense of depth and shape reported by human observers when using both eyes in coordination. The eyes of humans are of course almost completely frontal in their orientation and retracted into the eye sockets, with the result that almost half of the visual field is out of immediate sight. Like monkeys and apes that also have frontal binocular vision, humans have an elaborate motor system controlling the eyes and head, which can rapidly relocate binocular gaze to objects of interest. Nonetheless, the limitations are clear. Most of the other vertebrates (mammals, birds, reptiles, amphibia and fish) have laterally placed eyes, giving a much greater coverage of the entire visual field. There are some interesting exceptions to this generalization, which demonstrate important principles governing the evolutionary advantages of a binocular visual system with full stereopsis.

That there is a link between coordinated use of our two eyes and the sense of depth has been known since the early 19th century. Before that time, various scholars appreciated that the left and right eyes each acquire an image of the world from a slightly different viewpoint. The early history of this is covered in Howard’s and Rogers’ volumes (Howard 2012a, 2012b; Howard & Rogers, 2012). Wheatstone (1838, 1852) took the essential step of presenting each eye with a different image by arranging for independent optical channels for the left and right eyes in his stereoscope. Wheatstone presented accurate perspective pairs of line drawings of familiar objects in his apparatus, thereby demonstrating that the small differences in the images received by the left and right eyes are by themselves sufficient to provide a sense of depth. The technological development of the stereoscope underlies all the different imaging systems used nowadays in 3D cinema and television.

Before turning in more detail to the relationship between binocular viewing and the sense of depth, another potential benefit of frontal vision should be considered. This is simply that two eyes are better than one. This is true in one important respect, which is the possibility of summation of signals from two independent sensors that are looking at the same physical light source. Binocular summation of signals has been studied since the 19th century (Roelofs & Zeeman, 1914).

Pirenne (1967) grasped the fundamental issue that, even if the signals from each eye reach different parts of the brain without ever converging into a single binocular signal, there will still be some advantage in having two independent sensors available to detect a target. This just follows from the laws of probability combinations: if a correct detection is defined to mean detection by either eye, there will inevitably be some occasions where one eye successfully captures the signal when the other eye is failing to do so, with the result that use of both eyes boosts the overall detection rate. For example, if X and Y are independent random variables with a Gaussian distribution, then the sum of X and Y has a mean value that is equal to the sum of the mean of X and the mean of Y. The variances of X and Y also add, with the result that the standard deviation of X+Y is the square root of the sum of the two variances. In the case where the means and variances of X and Y are equal, then the advantage of the combination is √2.

This summation has been called “probability summation” to distinguish it from true “physiological summation,” the latter involving a true addition of signals from left and right eyes at some binocular site in the brain. Sherrington’s flicker summation experiment is the classic exemplar of this (Sherrington, 1904). For the simple detection case, where the individual monocular detection thresholds are equal in left and right eyes, the so-called probability summation model predicts an improvement by a factor of √2, while physiological summation predicts improvement for binocular detection by a factor of 2.
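The probability-combination argument and the Gaussian example above can be checked with simple arithmetic; the detection probability and signal values below are assumed for illustration only.

```python
import math

# Probability summation: suppose either eye alone detects a weak flash
# with probability p, and "detection" means detection by at least one eye.
p = 0.6
p_binocular = 1 - (1 - p) ** 2   # 1 - P(both eyes miss)
# Two eyes boost the detection rate with no neural combination at all.

# Signal-to-noise view of true physiological summation: adding two equal,
# independent Gaussian signals. Means add; variances add; so the
# signal-to-noise ratio improves by the square root of 2.
mu, sigma = 1.0, 1.0
snr_mono = mu / sigma
snr_sum = (2 * mu) / math.sqrt(2 * sigma ** 2)
gain = snr_sum / snr_mono        # = sqrt(2)
```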

Distinguishing these two possibilities has been difficult. The gap between these predictions is small and depends on the stimulus configuration. Nonetheless, there are clearly conditions in which full binocular summation takes place. As this represents only an improvement of √2 compared with detection offered by independent processing of signals from the eyes, for the most part this advantage “may be regarded as essentially an epiphenomenon of the more fundamental task of controlling the direction of gaze” (Blake & Fox, 1973). At very low light levels, the small advantage for light detection of true binocular summation may be worthwhile. As we shall describe below, in a crowded environment with many visible contours, the advantage of binocular vision for detection of stimuli is considerably greater than a √2 improvement, so it may be better to concentrate on that case rather than the simple case of detection of isolated features.

One provocative proposal is that nocturnal habitats and lifestyles are a driver for the development of binocular vision (Pettigrew, 1993). This is very hard to judge, as the advantages of panoramic vision for a prey animal are likely to be considerably fewer in a nocturnal environment. For example, one advantage of panoramic vision is to allow the animal to run for cover, an activity that could in itself be hazardous in deep darkness since it creates the opportunity for detection by auditory cues as well as the possibility of accidents. This means that the evolutionary pressures may not be symmetric. When an animal’s habitat changes such that species members with frontal vision are favored by selection pressures, this presumably means that frontal vision is being employed for depth perception and breaking of camouflage. If visual information is generally lacking, as it is in deep nocturnal environments, it is not clear that there is a strong selection pressure in favor of lateral placement of the eyes. There is perhaps little selection pressure either way. It could be for this reason, rather than any other, that it is possible to find examples of frontal-eyed animals that appear to have shifted habitat in recent evolutionary history from daytime to nighttime activity. The ancestors of these animals may well have possessed frontal vision for depth perception and breaking of camouflage, but there is weak or absent selection pressure toward reversion to lateral placement of the eyes when a frontal-eyed species moves to occupy a nighttime niche.

At the very least, it can be concluded that the use of binocular vision for depth perception is only part of the overall picture. Depth perception requires the accurate binocular alignment of high-acuity sensors onto single objects in the visual world. The fusion of left and right eye image information into a single binocular representation is part of this process. By comparison, some species exhibit behavior that suggests that maximizing binocular overlap without accurate binocular alignment for single vision is a distinct and separate control strategy (Rogers et al., 2010; Wallace et al., 2013).

Binocularity in the Visual Nervous System

To provide for binocular stereoscopic vision, the neuronal signals from the left and right eyes must be brought together at a single site. The anatomy and physiology of the mammalian visual system demonstrates that the first location where combination takes place is at the primary visual cortex (also referred to as cortical area V1). Early studies of neurological damage to V1 in humans showed that the cerebral cortex on the right side of the brain receives information about visual space to the left of the person’s head and the left cortex receives information about the right visual space. These studies also showed that information about visual space is organized in the form of topographic maps. Nearby points in visual space send information to nearby points in the cortex. The result is that, if one imagines a point C that is free to move continuously across the cortical surface, the point V in visual space that is served by that part of the cortex changes smoothly and continuously as the point C on the cortical surface moves around.

The organization differs between animals with laterally placed eyes and those with frontally placed eyes, although there are many aspects that are recognizably similar. A recent summary diagram highlights the major differences between rodents with lateral eyes and primates with frontal eyes.

Figure 2. Diagrams of the binocular visual pathways in two mammalian species with contrasting visual environments. In both cases, red shows right eye signals and blue shows left eye signals. For the rodent, the left eye dominates the left visual field because the eyes are laterally placed (and similarly the right eye dominates the right visual field). The binocular overlap region is shown in purple and is much greater in the primate compared with the rodent. The anatomical projections in the primate are much more precise, with the lamina of the lateral geniculate nucleus dominated by signals from a single eye and a pattern of ocular dominance stripes in the visual cortex, especially where axons arrive from the lateral geniculate nucleus.

Diagram from Priebe and McGee (2014).

Visual information leaves the left and right eyes through their optic nerves, which exit the globe of the eye at the optic nerve head and travel to the optic chiasm. The optic nerve axons are almost all myelinated; in humans, there are no unmyelinated fibers (Cohen, 1967). These axons reach a significant point for anatomical rearrangement at the optic chiasm, where the first difference between lateral- and frontal-eyed mammals emerges. For an animal with lateral eyes, a large portion of each eye’s visual field is not visible to the other eye, meaning that a binocular representation of that part of visual space is never possible. Optic nerve fibers serving the monocular portion of the visual space cross (decussate) at the optic chiasm and travel to the contralateral side of the brain (thus, fibers from the left eye’s monocular field travel to the right brain).

The binocular field is served by optic nerve fibers from both eyes. At the optic chiasm, fibers crossing from the contralateral eye are joined by fibers that travel on the same side from the ipsilateral eye (see Figure 2). This arrangement is called a partial decussation because only a fraction of the arriving fibers cross to the contralateral side. In animals with lateral-pointing eyes, more fibers cross at the chiasm to travel contralaterally due to the greater extent of their monocular visual field, while in primates about equal numbers travel ipsilaterally and contralaterally. The extent of decussation at the optic chiasm of mammals is therefore related to the relative numbers of nerve fibers serving the binocular and monocular visual fields.

In the primate, fibers of the optic tract reach subdivisions of the lateral geniculate nucleus (LGN) that are specific to the eye of origin, with separate layers of LGN neurons devoted to ipsilateral and contralateral eyes (see Figure 2). The axons of LGN neurons travel to the primary visual cortex, at which point there is a combination of neuronal signals from right and left eyes to form neurons with binocular visual responses. The overall consequence of this anatomical organization in primates is that the right visual cortex processes information arising from both the monocular and binocular portions of the left part of visual space and the left visual cortex deals with right visual space. The division between right and left visual space is the vertical plane passing through the fovea of each eye, which is parallel to the vertical plane passing through the midline of the head when both eyes are looking straight ahead into the far distance.

Each eye has therefore a characteristic division between optic nerve fibers that travel ipsilaterally at the optic chiasm and those that travel contralaterally. The division between ipsilateral and contralateral fibers differs between species in its precision. In primates, the division is sharp such that a given region of the retina is exclusively dedicated to contralateral or ipsilateral signaling. In lateral-eyed mammals, the division is often less sharp, with a band of the retina providing both contralateral and ipsilateral fibers. The organization in birds with stereoscopic vision appears to be different. In the owl, all the fibers from each eye pass contralaterally at the optic chiasm itself, with no ipsilateral fibers, but the necessary rearrangement takes place at a higher stage after the visual fibers leave the thalamus (Pettigrew & Konishi, 1976).

Nerve fibers leave the LGN toward the primary visual cortex within bundles called the optic radiation. The separation of signals from the left and right eyes seen in the LGN is continued as these fibers arrive in the cortex. The terminals of the thalamic axons are segregated spatially, forming the basis of a columnar structure for ocular dominance: each small zone of afferent fibers (some 0.5 mm across) is exclusive to one eye with a neighboring region serving the other eye. The preference of the neurons in the columns, particularly neurons in layer 4 of the cortex, alternates from the left eye to the right eye and then again to the left eye across the cortical surface. It is clear that a number of cortical neurons are exclusively excited by stimulation through one eye and not through the other. Estimates of the number of such neurons vary (compare Hubel & Wiesel (1968); Prince, Pointon, Cumming, & Parker (2002)). It is also increasingly understood that some of these apparently monocular neurons actually receive input from the other eye through an inhibitory signaling network, which can only be revealed if true binocular stimulation is used. Nonetheless, there is general agreement that the primary visual cortex (V1) differs from other visual cortical areas, in that higher cortical areas have very few monocular neurons.

Within the primary visual cortex, the first major synaptic convergence of signals from the left and right eyes onto a single neuron takes place. This has been studied directly by intracellular recording from single cortical neurons in the cat (Ferster, 1990). Many neurons in V1 do not just receive binocular inputs, but are specifically sensitive to stereoscopic depth. This initial stage of sensitivity to depth in the nervous system has been tested and analyzed to the extent that there are firm computational models of this stage (DeAngelis, Ohzawa, & Freeman, 1991; Ohzawa, 1998; Ohzawa, Kato, Baba, & Sasaki, 2016). The fundamental steps are now understood to be:

  1. An additive combination of signal inputs from the left and right eyes: these signal inputs are formed by passing the image data from each eye through a spatial filter that is selective for the orientation, spatial frequency, and spatial phase content of the image; the filter is normally modeled by a Gabor function (the product of a sinusoidal waveform with a Gaussian function); sensitivity to depth is achieved by changing the exact position of the Gabor function, or the phase of its component sinusoid, in visual coordinates for one eye with respect to the other.

  2. Inputs are combined across a set of binocular combinations to cover different phase characteristics of the monocular inputs, while maintaining the depth sensitivity of each member of the set. This generalizes the behavior of the filter across different possible image content but retains the selectivity to disparity.

  3. The signal from each binocular input within the set passes through a nonlinear output stage (a threshold followed by a positively accelerating output, sometimes called “half-squaring”) before the combination across the members of the set.
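The three steps above can be sketched as a minimal one-dimensional simulation of the binocular energy model. The filter parameters, stimulus size, and random-dot stimulus here are assumptions chosen for illustration, not values taken from the physiological literature.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256                       # width of the 1D "image" in pixels (assumed)
x = np.arange(N) - N // 2
sigma, freq = 8.0, 1.0 / 16   # assumed Gabor envelope width and spatial frequency

def gabor(phase, shift=0):
    """Gabor filter: a sinusoid under a Gaussian envelope (step 1)."""
    xs = x - shift
    return np.exp(-xs ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xs + phase)

def half_square(r):
    """Threshold at zero followed by squaring: 'half-squaring' (step 3)."""
    return np.maximum(r, 0.0) ** 2

def energy_response(img_left, img_right, preferred_disparity):
    """Binocular energy: sum of half-squared binocular subunits (steps 1-3).

    The right-eye filters are position-shifted copies of the left-eye
    filters; the quadrature (sine/cosine) pair appears twice, once for
    the positive-going and once for the negative-going part.
    """
    total = 0.0
    for phase in (0.0, np.pi / 2):                 # quadrature pair (step 2)
        gl = gabor(phase)
        gr = gabor(phase, shift=preferred_disparity)
        s = img_left @ gl + img_right @ gr         # additive binocular sum (step 1)
        total += half_square(s) + half_square(-s)  # +/- subunits; together equal s**2
    return total

# Random-dot stimulus with a true disparity of 4 pixels, averaged over trials
true_disparity = 4
disparities = range(-8, 9)
tuning = np.zeros(len(disparities))
for _ in range(300):
    left = rng.standard_normal(N)
    right = np.roll(left, true_disparity)
    tuning += [energy_response(left, right, d) for d in disparities]
tuning /= 300
```

Averaged over random-dot patterns, the tuning curve peaks where the position shift of the right-eye filter matches the disparity of the stimulus, which is the sense in which this mechanism computes a local binocular correlation within a specific range of disparities.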

The overall model has been termed the binocular energy model because the effect of these combinations is to deliver a neural mechanism that is sensitive to the local correlation between left and right eye images within a specific range of disparities. Since calculation of a correlation requires a multiplication and the model includes a set of neurons with accelerating input–output relationships close to squaring, the affinity with the squaring operation in the calculation of the energy of an electrical waveform led to the term “energy model.”

The original version of the energy model specified that the spatial filters would be formed in quadrature pairs, so that, using the Gabor model, one of these filters would be in sine phase and the other in cosine phase. Owing to the threshold on the output stage of each member of the set of filters, the quadrature filters are represented by two members each, one for the positive-going part and the other for the negative-going part. More recent work has revealed some limitations of this model. First, it fails to give a consistent explanation of how binocular neurons respond when the brightness of a visual feature is increased in one eye and simultaneously decreased in the other eye, in comparison with the more naturally occurring case where there is a concordant increase or decrease in both eyes (Cavonius, 1979; Read & Cumming, 2003; Read, Parker, & Cumming, 2002). Second, the energy model uses a limited number of subunits, while more recent evidence (Cumming, 2002) points to the presence of multiple subunits in V1 neurons, a proposal that is closer to the alternative models originally considered by the proponents of the energy model (see figure 11 in Ohzawa & Freeman (1986)).

Although the ocular dominance structure of the macaque cortex is a striking and much-studied anatomical and functional feature of V1, and an important structure for the organization of binocular vision, this columnar structure does not appear to have a direct relationship with the sensitivity of neurons to stereoscopic depth. There appears to be no correlation between ocular dominance and either the strength or pattern of tuning for stereoscopic depth (Prince et al., 2002), and the connectivity required to assemble the range of stereoscopic depths encoded by V1 neurons stretches beyond the domain of single ocular dominance columns (Parker, Smith, & Krug, 2016). The significance of the ocular dominance structure appears to be related more to ensuring that the binocular portions of the visual cortex receive an evenly distributed and weighted contribution of signals from each eye. This is clearly an important prerequisite for the construction of neuronal receptive fields that are specifically sensitive to stereoscopic depth, but it is nonetheless one stage removed from that goal.

Beyond the primary visual cortex, the receptive fields of neurons in the extrastriate cortical areas are for the most part binocular, meaning specifically that these neurons can be equally well stimulated visually through either the left or the right eye. Moreover, the preferences of extrastriate neurons for particular visual features, with the exception of signals relating to stereoscopic depth, are the same when tested in the left and right eyes. Interestingly, there are a number of neurons in the extrastriate cortex that have been classed as “obligatory binocular neurons,” meaning that they require conjoint visual stimulation of both eyes to evoke any response from the neuron (Zeki & Mackay, 1979). Such neurons are clearly of great interest in fitting together a complete picture of how the cortex processes stereoscopic depth information. At this stage there is little information about how these obligatory binocular neurons process depth, in comparison with the role of other neurons that respond equally well to stimulation through either eye. Binocularity in one form or another seems to be a pervasive property of the extrastriate cortex, within both the dorsal and the ventral streams (Parker, 2007). There is little evidence for a strong form of modular cortical organization for binocular vision taken as a whole.

Depth Perception

While the presence of binocularity is an interesting property of visual neurons, attention must now turn toward the functional role of binocular combination. As noted in the preceding section, the neurons of the primary visual cortex (V1) in the macaque monkey respond to changes in the stereoscopic depth of visual features. Nonetheless, there is an important and growing body of evidence that V1 is only a preliminary stage. A number of lines of evidence indicate that the perception of stereoscopic depth relies upon the neural systems of the extrastriate cortex.

Binocular vision is used for a number of different functions by vertebrate animals. For animals that inhabit an arboreal environment, it assists locomotion. For predator animals, it allows for accurate striking at prey. For animals with free forearms and hands, it supports the acquisition and manipulation of 3D objects in the near workspace, as it does for humans. In human and nonhuman primates, there is a sophisticated control system for ensuring that the foveas of both eyes are aligned binocularly on a single feature of interest in the environment (see chapter 9 in Leigh & Zee (2015)). This system controls the convergence angle of the eyes by increasing convergence for nearby targets and decreasing the angle for more distant targets. This system is linked to the control of the focus of the lenses (accommodation) of the eyes so that the image of the objects fixated at different distances can be brought into sharp focus. Both accommodation and convergence systems contribute to the perception of the distance of objects relative to the location of the head.

Accurate retrieval of stereoscopic depth contributes to all of the above functions. The convergence angle between the two eyes can be adjusted by means of disparity alone (Erkelens & Collewijn, 1984; Rashbass & Westheimer, 1961). In the natural world, stereoscopic depth occurs in combination with other information about the 3D shape and distance of objects in the visual environment. Information such as shading, texture, relative motion, and outline contour all contribute. The original meaning of the word stereopsis was intended to signify the sense of solid shape that is delivered when any or all of these different sources of information are associated with the perception of a 3D object. Currently, it is useful to distinguish the use of these different information sources in signaling the 3D structure of objects and in providing information about the spatial layout of different objects in the scene before the person’s eyes.

Stereoscopic depth is a strong contributor to the sense of depth and solidity of 3D objects. However, stereoscopic depth by itself does not give a unique sense of depth and solid shape. The same pattern of binocular disparities can result in different appearances of solid shape, including both changes of size and depth profile, depending on the availability of other contextual information about the distance of the object (Johnston, 1991; Wheatstone, 1838). Combining stereo with other cues serves to generate a stable percept. The fundamental explanation of this is that different sources of information about depth have different reliabilities. For example, stereo disparities become smaller as viewing distance increases, and as an object approaches an infinite distance away, stereo disparities become zero all over the object. By comparison, the recovery of depth information from variations of shading or texture across the object’s surface does not depend on viewing distance. Another issue is that some sources of information are inherently ambiguous. Information about depth and 3D shape can be obtained from the relative motion of visual features and contours, which may be created either by the translation or rotation of the viewed object or by the motion of the observer relative to a static object (Rogers & Graham, 1982; Ullman, 1979).
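The claim that stereo disparities shrink with viewing distance can be made concrete with a small worked example; the interocular separation and depth interval below are assumed values.

```python
# Horizontal disparity (radians) between two points separated in depth by
# `delta` at viewing distance d, for interocular separation `i`.
i = 0.065        # assumed interocular distance, metres
delta = 0.05     # assumed depth interval between the two points, metres

def disparity(d):
    # Exact difference of the two vergence angles, i/d - i/(d + delta),
    # which is approximately i * delta / d**2 for small delta.
    return i / d - i / (d + delta)

# Doubling the viewing distance cuts the disparity roughly fourfold,
# so the reliability of stereo falls steeply with distance, while cues
# such as shading and texture gradients are unaffected by distance.
near, far = disparity(1.0), disparity(2.0)
```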

Although relative motion is a strong source of information about 3D shape, motion has a peculiar limitation (Ullman, 1979): the same pattern of motion is compatible with two interpretations of depth, one in which depth protrudes out of the fixation plane and another in which depth recedes behind it. (Strictly speaking, this holds only for parallel projection of light rays from the object to the eye and not for perspective projection, but for moderate-sized objects the difference between the two is negligible, except when the object is very near to the eye.) When an observer actively initiates their own movements to create relative motion, the ambiguity can be resolved. Binocular disparities are, however, ideally suited to contributing this missing information.

This process of combining sources of information is often referred to as fusion. The case where two sources of information each provide something that is inherently unavailable from the other is called “strong fusion” and the case where information from one source substitutes interchangeably with another is termed “weak fusion” (Landy, Maloney, Johnston, & Young, 1995). Thus the case of combining stereo and texture mentioned earlier appears to be weak fusion, that is, a simple linear combination of the depth signaled by each source (Johnston, Cumming, & Parker, 1993), although the weighting of stereo in the combination is considerably stronger than that of texture.
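Weak fusion of this kind is commonly formalized as a reliability-weighted average, with each cue weighted in proportion to its inverse variance. The sketch below is one minimal way to write that down; the numbers are hypothetical and are not data from Johnston et al. (1993).

```python
def combine_weak_fusion(estimates, variances):
    """Reliability-weighted linear combination of depth estimates from
    different cues (weights proportional to inverse variance), a
    standard formalization of 'weak fusion'."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Stereo signals 10 cm of depth with low variance; texture signals
# 16 cm with high variance: the combined estimate sits close to stereo.
depth = combine_weak_fusion([10.0, 16.0], [1.0, 9.0])
print(depth)  # → 10.6
```

On this scheme, the finding that stereo is weighted more heavily than texture corresponds simply to stereo having the smaller variance.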

Recently the question of combining information from different sources has been studied at the level of single neurons. Working with neurons from cortical area V5/MT, Sanada et al. (2012) have shown that single neurons can linearly combine information from stereo and motion, with a modest contribution from texture. Neurons in the inferotemporal visual cortex show similar capacities (Liu, Vogels, & Orban, 2004) and neurons in nearby visual areas show selectivity for perception of 3D shape (Janssen, Vogels, & Orban, 2000). The neurons recorded from V5/MT are in the dorsal visual stream and appear to be part of a significant pathway for the perception of depth, because targeted causal interventions in the form of weak electrical stimulation of small groups of neurons can change the animal’s reports of the depth percept (see Cicmil & Krug (2015) for a review). Thus far, similar causal evidence for the role of ventral stream areas is lacking.

Pattern Matching

Binocular vision presents a specific instance of the general problem of pattern matching in vision. The purpose of the binocular vision system is of course to bring together the signals from the two eyes into a single coherent neural representation that contains information about all three dimensions: azimuth, elevation, and depth. In all cases, whenever an object is recognized or compared with another, the fundamental requirement is to detect a correspondence between two neural representations. We may sometimes regard these as sensory representations, present within the recent or current sensory input, or one of these may be a memory representation recalled for comparison with the current sensory representation. For binocular vision, the correspondence must be made between the left eye’s sensory input and the right eye’s input.

Binocular vision is therefore not just the route to the understanding of depth perception and stereopsis, but it also provides significant insight into the more general and fundamental problem of pattern matching. Regarded in this way, the computational problem facing the nervous system for stereopsis is not so very different from that for other pattern matching problems. In all cases, we are trying to understand how the nervous system compares distinct neural representations with each other to decide whether there is a complete or partial correspondence or no correspondence at all.

One outcome of achieving a match between signals from the left and right eyes is a single binocular representation of objects in the visual world. Julesz expressed this by referring to a “cyclopean eye.” This term derives from the mythical Greek giant Cyclops, who was imagined to have only one eye in the middle of his forehead, thereby forgoing the benefits of stereopsis in exchange for a simple solution to the problem of binocular single vision! Until Julesz’s insights, the combination of signals from left and right eyes was conceived as a high-level process, dependent upon the earlier stages of feature and object recognition. Julesz’s demonstrations with random dot stereogram figures changed the focus of this completely. An example of one of these is shown in Figure 3. Julesz showed that binocular combination is adaptive but automatic, driven by low-level features in the image and not reliant on other visual recognition for its effective completion.

Neurophysiological experiments have shown that single neurons respond to the depth content of these random dot figures. In the primary visual cortex, V1, the neurons effectively signal an approximation to a cross-correlation between a circumscribed region within the left and right eyes’ images. One of the lines of evidence supporting this view is the response of V1 neurons when the contrast of the dots in one eye’s image is reversed: this means that Figure 3 would be altered so that every black dot in one eye’s image is partnered by a white dot in the other eye’s image. The result is a stimulus that is binocularly anti-correlated. Many V1 neurons invert their disparity preference under stimulus anti-correlation: disparities that gave a strong response now give a minimal response, while the neuron may now respond strongly to disparities that previously gave no response (Cumming & Parker, 1997). Interestingly, these anti-correlated stimuli that excite V1 neurons in a disparity-specific manner may fail to deliver any sense of stereoscopic depth. This is one strong piece of evidence that the perception of stereoscopic depth depends upon neural processing beyond the stage of V1 (Janssen, Vogels, Liu, & Orban, 2003).
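The cross-correlation account, and the inversion under anti-correlation, can be sketched numerically. This is an illustrative one-dimensional model, not a simulation of real V1 responses: the "neuron" here is simply the mean product of the left image with a shifted copy of the right image.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_disparity = 512, 7

left = rng.choice([-1.0, 1.0], size=n)     # 1D "random-dot" pattern
right = np.roll(left, true_disparity)      # right eye's shifted copy
anti = -right                              # contrast-reversed right image

def correlation_profile(l, r, max_d=15):
    """Mean product of the left image and the shifted right image at
    each candidate disparity: a crude binocular cross-correlator."""
    return np.array([np.mean(l * np.roll(r, -d))
                     for d in range(-max_d, max_d + 1)])

corr = correlation_profile(left, right)
acorr = correlation_profile(left, anti)

best = np.argmax(corr) - 15
print(best)                        # peak at the true disparity, 7
print(np.allclose(acorr, -corr))  # True: anti-correlation inverts the profile
```

Because the anti-correlated image is just the negation of the right image, every term in the cross-correlation flips sign, which is exactly the inverted disparity tuning described for V1 neurons.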

Inversion of the tuning curve under stimulus anti-correlation is characteristic of a neural mechanism that detects binocular correlation and is therefore also a prediction of the disparity energy model mentioned above. There are, however, some limitations on this conclusion. In general, neurons do not modulate their firing as much to anti-correlated stimuli as they modulate to correlated stimuli; secondly, some neurons do not just invert their disparity preference, showing a more complex set of changes. Both of these can potentially be accommodated with a neural model that includes some nonlinearities in the summation of signals between the left and right eyes (Read et al., 2002; Read & Cumming, 2003).


Figure 3. Recovery of binocular depth is based on the small differences in visual images between the left and right retinas. The figure is arranged for viewing with red–green stereo glasses, which will allow the viewer to experience the perception of depth and figure–ground segregation. Stereopsis cannot proceed to the highest accuracy without precise point-to-point matching of local patterns in the left and right eyes, which delivers the ability of stereo to break camouflage with a figure–ground separation in depth.

Binocular Enhancements of Stimulus Detection

A different route to enhanced binocular performance has been identified by the experiments on binocular masking level differences (Henning & Hertz, 1973; Schneider, Moraglia, & Jepson, 1989). The experimental observation is that, if a signal must be detected against a noisy background, both viewed binocularly, then the detectability of the signal is enhanced if its binocular disparity is different from that of the noise pattern. The enhancement is sometimes substantial, often a factor of 3x or 4x more than the √2 improvement identified above, which arises when moving from probability summation to physiological summation. It is important to note that this enhancement is specific to the case of binocular inspection of a signal embedded within a sample of noise. The left and right eyes therefore receive a common signal, as well as a sample of noise that is binocularly correlated between the left and right eyes.

The improved detection of signals fundamentally exploits this correlation structure rather than disparity detection as such. If we take a sample of noise for the left eye’s input and linearly add to it a spatially offset version of the same sample for the right eye’s input, the resulting binocular pattern is a new sample that is “noise-like” but has some underlying spatial structure. If the shift between the left and right eyes is d, the disparity, then the spatial frequency content of the binocular pattern has phase cancellations at spatial frequencies of 1/(2d), 3/(2d), and similarly related values. At these critical frequencies the noise is attenuated, so the signal is transmitted substantially unhindered. The effects are complex and difficult to predict without complete specification of the signal and noise images, so they do not amount to a generalized mechanism for breaking camouflage, and the mechanism is not guaranteed to work every time. Nonetheless, the available measurements suggest that when the mechanism is effective it yields a substantial improvement in detectability, which for an animal, particularly a predator, would presumably affect the success rate in predation. The limitation is that there is no evidence about how this might affect recognition as opposed to detection.
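The cancellation frequencies can be checked directly with a discrete Fourier transform. The snippet below is an illustrative verification, using a circular shift so that the algebra is exact: the spectrum of noise plus its offset copy is the noise spectrum times |1 + e^(-i2πfd)| = 2|cos(πfd)|, which vanishes at f = 1/(2d), 3/(2d), and so on.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 256, 8                           # samples and disparity (in samples)

noise = rng.standard_normal(n)
binocular = noise + np.roll(noise, d)   # left sample plus offset copy

spectrum = np.abs(np.fft.rfft(binocular))
freqs = np.fft.rfftfreq(n)              # cycles per sample

# The FFT bin sitting at f = 1/(2d) should be (numerically) zero:
null = int(round(n / (2 * d)))
print(freqs[null])    # → 0.0625, i.e. 1/(2*d)
print(spectrum[null]) # ≈ 0: a phase cancellation in the binocular pattern
```

A signal whose energy falls at one of these nulled frequencies is therefore competing against almost no noise at that frequency, which is one way of seeing where the masking-level advantage comes from.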

Flexibility of Matching in Stereopsis

Figure 3 demonstrates the ability of stereoscopic vision to achieve close element-to-element matching of features between the left and right eyes. The nervous system can also cope with image pairs that have some perturbation of the exact geometric positioning of the individual elements. For example, one eye’s image may be up to 10% bigger or rotated in orientation by up to six degrees with respect to the other (Julesz, 1960); the exact local positioning of the features in one eye’s image compared with the other may be altered (Parker, Johnston, Mansfield, & Yang, 1991; Parker & Yang, 1989); one eye’s image may be blurred with respect to the other (Julesz, 1960). These abilities suggest a flexible and efficient pattern matching system.

Binocular viewing of patterns with repeating similar elements presents a different kind of challenge. If a human observer uses both eyes to inspect a set of identical vertical features all lying in a single depth plane (for example, a row of vertical rods or a set of vertical lines depicted on a flat electronic display), the person reports the simple interpretation of features all located in a single depth plane. Noting that the features are themselves identical, it is evident that there are multiple possibilities for pairing up these features. In fact, with N features in each eye, there are N² possible matches. Most of these possible matches do not form part of the stereoscopic percept when all N features are present in a single depth plane, although their capacity to reach perceptual levels of processing can easily be revealed by selective removal of neighboring features.
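The combinatorics can be made explicit with a toy enumeration. The feature positions below are hypothetical, chosen only to illustrate that N identical features per eye admit N² candidate pairings, of which the single-plane percept uses only N.

```python
from itertools import product

# N identical features per eye at (hypothetical) horizontal positions.
left_positions = [0.0, 1.0, 2.0, 3.0]
right_positions = [0.0, 1.0, 2.0, 3.0]

# Every left feature could in principle pair with every right feature:
candidates = list(product(left_positions, right_positions))
print(len(candidates))   # → 16, i.e. N**2 candidate matches for N = 4

# The percept of a single fronto-parallel plane corresponds to just the
# zero-disparity pairings: N of the N**2 candidates.
same_plane = [(l, r) for l, r in candidates if l == r]
print(len(same_plane))   # → 4
```

The remaining 12 pairings here are the "candidate matches" of the following paragraph: not part of the current percept, but available to it if neighboring features are removed.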

Binocular features of this kind that are not part of the current depth percept but have the capacity to become part of it are sometimes called “candidate matches.” The simplest version of multiple matching arises with Panum’s limiting case, in which a pair of closely spaced vertical lines is displayed to one eye and a single vertical line is displayed to the other eye. In binocular viewing, the observer reports that they see two lines, which are not only laterally separated but are also perceived at different depths. This suggests that a single line in one eye has the capacity to be stereoscopically matched with more than one line in the other eye’s image. This has been a contentious observation (Braddick, 1978; Westheimer, 1986), since it implies that the assumption of “uniqueness of matching” may be incorrect, something that was embedded in early computational models of stereo (Marr & Poggio, 1976). Westheimer (1986) developed this further to show that features visible to one eye only may alter the stereoscopic depth of binocular features, and conversely changes in the stereoscopic depth of a binocular feature may alter the apparent location of a monocular feature. This suggests a pattern recognition system with some interchangeability between the sense of location and the sense of stereoscopic depth.

From Disparity Detection to Binocular Depth Perception

The neural representation in the visual system that has the finest-grained representation of spatial detail is the primary visual cortex V1; this judgment is based on the high number and density of neurons available to analyze each portion of the visual field. In terms of establishing a single representation of binocular features, many neurons in V1 appear not to achieve this. Binocularly viewed patterns of regular features (sinusoidal gratings presented in a sharply defined circular window) result in activation of neurons to both the binocular pairing of features that form part of the percept and to binocular pairings that do not (Cumming & Parker, 2000). The activation of neurons to the so-called candidate matches is not substantially less than the activation to matches that form part of the stereoscopic percept.

In V2, there is some evidence for neurons that resolve the difference between actual and candidate matches (Bakin, Nakayama, & Gilbert, 2000), but ideally this needs further exploration in conjunction with measurements of perceptual reports of the animals from which the neurons have been recorded. It has also been found that V2 neurons are sensitive to both stereoscopic depth and the perceptual organization of surface structure (Qiu & von der Heydt, 2005). With a surface that contains repeating pattern features, this also implies a resolution of actual and candidate matches. On this basis, V2 represents a more advanced stage of neural processing, taking the neural signal away from just the detection of a binocular disparity at a point in the visual field and toward the perception of depth.

There are a number of findings that support the greater importance of V2 in the perception of binocular depth. An early example is the observation by Cowey and Wilkinson (1991) that surgical lesions of V2 were almost as detrimental for stereo vision as similar lesions applied to V1. More recently, it was found that V2 neurons are sensitive to the relative depth between spatially adjacent visual features (Thomas et al., 2002). This sensitivity of V2 neurons is significant because the perceptual systems of both monkeys and humans show this characteristic: psychophysical detection of small disparities is greatly improved by the presence of nearby features at a slightly different depth against which the depth of the primary target can be assessed (Chopin, Levi, Knill, & Bavelier, 2016; Thomas et al., 2002; Westheimer, 1979). A separate line of evidence concerns the correlation of psychophysical choices about binocular depth with simultaneously recorded signals from single neurons in V1 and V2: neuronal responses from V2 show correlations with perceptual decisions, but those in V1 do not (Nienborg & Cumming, 2006). Cortical area V2 also appears to have specific cortical architectures related to the processing of binocular depth. Livingstone and Hubel proposed that the thick stripe structures revealed by postmortem cytochrome oxidase staining are predominantly pathways for binocular depth signals (Hubel & Livingstone, 1987). Within this structure, Roe and her colleagues have used optical imaging of the cortical surface to demonstrate an ordered representation of binocular depths across the cortex.

Impressive as this array of evidence is, pointing to a distinct difference between V1 and V2 in binocular depth processing, the current view is that V2 is simply a gateway to further cortical structures, many of which are involved in binocular depth processing. Evidence from human MRI studies indicates a widespread processing network, not just within the occipital cortex but stretching out into parietal areas. Presumably in these deeper areas, the binocular depth signal is not just coupled with other visual signals but becomes integrated with information about head and body orientation, as a basis for motor planning. The search for signals about relative disparity and how those signals interact with other visual sources of information about depth has yielded many insights into the question of how the nervous system disassembles and reassembles incoming sensory information into neural representations suitable for action and perception.

Plasticity

The classical view of plasticity in the visual nervous system has been formed by the pioneering work of Hubel and Wiesel on the binocular visual systems of cats and monkeys (Hubel & Wiesel, 1962, 1963, 1970; LeVay, Hubel, & Wiesel, 1975; Wiesel, Hubel, & Lam, 1974). Hubel and Wiesel established the classification of primary cortical neurons by the property of ocular dominance, which is the extent to which a neuron responds to stimulation through the left or right eye. They employed this classification of neurons to identify the ocular dominance columns, regions of the cortical surface of V1 where the visual activity is dominated either by the left eye or by the right. In the middle layers of the cortex (layer 4), the segregation of activity by ocular dominance is more marked, largely owing to the fact that the terminals of thalamic axons arriving from the lateral geniculate nucleus are also segregated spatially by eye of origin. Although there are binocular neurons in layer 4 (Hawken & Parker, 1984), the upper layers of the cortex have more extensive binocular zones that lie in between the ocular dominance columns. There are specific anatomical markers of the ocular dominance columns in the upper layers of the cortex: the distribution of staining for cytochrome oxidase shows a repeat pattern that aligns with the left and right eye ocular dominance columns.

Hubel and Wiesel showed that these ocular dominance columns are sensitive to visual experience early in life. Removal or significant attenuation of the input from one eye results in a visual cortex that is dominated by input from the active eye, with both physiological and anatomical changes (Hubel & Wiesel, 1970; Hubel, Wiesel, & LeVay, 1975, 1977). There is limited capacity to reverse these changes because there appears to be a restricted window of time during development when the cortex is sufficiently plastic to allow reversal of the effects of unilateral visual deprivation (Blakemore & Van Sluyters, 1974; Movshon, 1976). Where reversal can be achieved within the critical time window, the presence of binocularly correlated signals appears to be crucial for the best achievable restoration (Kind et al., 2002). More recent studies that have examined plasticity in rodents reveal some possible mechanisms for plasticity after the critical period, which suggests possible routes for intervention in adulthood (Levelt & Hübener, 2012).

These experimental observations provided the first tier of understanding of the clinical condition of amblyopia. This is a developmental disorder of human vision, affecting some 3% of the adult population, in which disruption to normal binocular inputs results in one eye becoming weakly or incorrectly connected to the visual cortex. There is almost always a substantial deficit in stereoscopic vision and the weaker eye often deviates from normal binocular gaze. Therapeutically, there is often a significant period of orthoptic treatment, which may be accompanied by surgical intervention to realign the deviating eye by detaching the extraocular muscles from the globe of the eye and suturing them to a new location that corrects the deviation. Treatment for amblyopia normally takes place in late infancy and childhood. Consistent with the idea of a critical period and the particular stimulus requirements for restoration of functional binocularity, the clinical outcome is often limited in its success. Realignment of the eyes is often achieved, but rescue or restoration of stereoscopic vision with depth perception has been unusual (Hess, 2001; Levi, Knill, & Bavelier, 2015; Pugh, 1958).

The consequences for vision of the amblyopic eye are sometimes profound. The amblyopic eye may be neurally blind, with effectively minimal vision through that eye, even to the extent that accidental injury to the unaffected eye at a later stage in life can leave the person legally blind (Rahi, Logan, Timms, Russell-Eggitt, & Taylor, 2002). In contrast to this outcome, recent reports imply much greater plasticity in the visual system of at least some amblyopic individuals. Some of these reports are accompanied by vivid descriptions of the moment that stereoscopic vision was first experienced (Barry, 2009). Population-based intervention studies are now under way, most of them based upon the principle of creating circumstances that favor use of the two eyes in coordination (Levi et al., 2016). This is a marked alternative to earlier forms of intervention in amblyopia that favored patching of the stronger eye to allow the weaker eye an opportunity to acquire new neural connections into the visual cortex (Awan et al., 2010).

The other puzzle for treatment of amblyopia is the presence of perceptual distortions when viewing through the amblyopic eye (Barrett, Pacey, Bradley, Thibos, & Morrill, 2003; Pugh, 1958). The presumption is that disruption of binocular experience leads to more than just a proportionate weakening of input from the amblyopic eye, as if the contrast of visual features had been gradually reduced. The amblyopic eye seems to have neural connections into the visual cortex that are not just weaker but are somehow disordered or scrambled. Both human studies of amblyopia (Li, Dumoulin, Mansouri, & Hess, 2007) and animal models of amblyopia (Kiorpes, 2006) suggest that the dysfunctional connectivity is present not just in the primary visual cortex V1 but also in extrastriate visual areas.

It is difficult to give a completely consistent summary of all these experimental findings. The early work emphasized the importance of a critical period but focused almost entirely on the window of plasticity that exists in the primary visual cortex, V1. If there are processes beyond V1 that are important, we need to understand how these are affected by a critical period and whether these amount to adult plasticity of binocular vision. The counter-argument may be that there is no substantial adult plasticity for binocular vision as such: in this view, all the current demonstrations of acquisition of stereoscopic vision in adulthood are simply altering the weights assigned to binocular stereoscopic inputs when combining information from different sources about three-dimensional vision. These weights would presumably be the same as those proposed by Landy et al. (1995). New combined studies of brain imaging in humans and experimental analysis of animal models of amblyopia will be needed to investigate this further.

References

Awan, M., Proudlock, F. A., Grosvenor, D., Choudhuri, I. Sarvanananthan, N., & Gottlob, I. (2010). An audit of the outcome of amblyopia treatment: A retrospective analysis of 322 children. British Journal of Ophthalmology, 94(8), 1007–1011.Find this resource:

Bakin, J. S., Nakayama K., & Gilbert, C. D. (2000). Visual responses in monkey areas V1 and V2 to three-dimensional surface configurations. Journal of Neuroscience, 20(21), 8188–8198.Find this resource:

Barlow, H. B., Blakemore, C., & Pettigrew, J. D. (1967). The neural mechanism of binocular depth discrimination. Journal of Physiology (London), 193(2), 327–342.Find this resource:

Barrett, B. T., Pacey, I. E., Bradley, A., Thibos, L. N., & Morrill, P. (2003). Nonveridical visual perception in human amblyopia. Investigative Ophthalmology & Visual Science, 44(4), 1555–1567.Find this resource:

Barry, S. R. (2009). Fixing my gaze: A scientist’s journey into seeing in three dimensions. New York: Basic Books.Find this resource:

Blake, R., & Fox, R. (1973). The psychophysical inquiry into binocular summation. Perception & Psychophysics, 14(1), 161–185.Find this resource:

Blakemore, C., & Van Sluyters, R. C. (1974). Reversal of the physiological effects of monocular deprivation in kittens: Further evidence for a sensitive period. The Journal of Physiology, 237(1), 195–216.Find this resource:

Braddick, O. (1978). Multiple matching in stereopsis. MIT.Find this resource:

Cavonius, C. R. (1979). Binocular interactions in flicker. Quarterly Journal of Experimental Psychology, 31(2), 273–280.Find this resource:

Chopin, A., Levi, D., Knill, D., & Bavelier, D. (2016). The absolute disparity anomaly and the mechanism of relative disparities. Journal of Vision, 16(8), 2.Find this resource:

Cicmil, N., & Krug, K. (2015). Playing the electric light orchestra: How electrical stimulation of visual cortex elucidates the neural basis of perception. Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1677).Find this resource:

Cohen, A. I. (1967). Ultrastructural aspects of the human optic nerve. Investigative Ophthalmology & Visual Science 6(3), 294–308.Find this resource:

Cowey, A., & Wilkinson, F. (1991). The role of the corpus-callosum and extra striate visual areas in stereoacuity in macaque monkeys. Neuropsychologia, 29(6), 465–479.Find this resource:

Cumming, B. G. (2002). An unexpected specialization for horizontal disparity in primate primary visual cortex. Nature, 418(6898), 633–636.Find this resource:

Cumming, B. G., & Parker, A. J. (1997). Responses of primary visual cortical neurons to binocular disparity without depth perception. Nature, 389(6648), 280–283.Find this resource:

Cumming, B. G., & Parker, A. J. (1999). Binocular neurons in V1 of awake monkeys are selective for absolute, not relative, disparity. Journal of Neuroscience, 19(13), 5602–5618.Find this resource:

Cumming, B. G., & Parker, A. J. (2000). Local disparity not perceived depth is signaled by binocular neurons in cortical area V1 of the macaque. Journal of Neuroscience, 20(12), 4758–4767.Find this resource:

DeAngelis, G. C., Ohzawa, I., & Freeman, R. D. (1991). Depth is encoded in the visual-cortex by a specialized receptive-field structure. Nature, 352(6331), 156–159.Find this resource:

Erkelens, C. J., & Collewijn, H. (1984). Stereopsis, vergence and motion perception during dichoptic vision of moving random-dot stereograms. Experientia, 40(11), 1300–1301.Find this resource:

Ferster, D. (1990). Binocular convergence of excitatory and inhibitory synaptic pathways onto neurons of cat visual-cortex. Visual Neuroscience, 4(6), 625–629.Find this resource:

Fischer, B., Poggio, G. F., & Lennie, P. (1979). Depth sensitivity of binocular cortical neurons of behaving monkeys [and discussion].” Philosophical Transactions of the Royal Society B: Biological Sciences, 204(1157), 409–414.Find this resource:

Hawken, M. J., & Parker, A. J. (1984). Contrast sensitivity and orientation selectivity in lamina-iv of the striate cortex of old-world monkeys. Experimental Brain Research, 54(2), 367–372.Find this resource:

Henning, G. B., & Hertz, B. G. (1973). Binocular masking level differences in sinusoidal grating detection. Vision Research, 13(12), 2455–2463.Find this resource:

Hess, R. F. (2001). Amblyopia: Site unseen. Clinical and Experimental Optometry, 84(6), 321–336.Find this resource:

Howard, I. P. (2012a). Perceiving in depth, Vol. 1: Basic mechanisms. Oxford University Press.Find this resource:

Howard, I. P. (2012b). Perceiving in depth, Vol. 3: Other mechanisms of depth perception. Oxford University Press.Find this resource:

Howard, I. P., & Rogers, B. J. (2012). Perceiving in depth, Vol. 2: Stereoscopic vision. Oxford University Press.Find this resource:

Hubel, D. H., & Livingstone, M. S. (1987). Segregation of form, color, and stereopsis in primate area-18. Journal of Neuroscience, 7(11), 3378–3415.Find this resource:

Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. Journal of Physiology, 160(1), 106–154.Find this resource:

Hubel, D. H., & Wiesel, T. N. (1963). Shape and arrangement of columns in cat’s striate cortex. Journal of Physiology, 165(3), 559–568.Find this resource:

Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology, 195(1), 215–243.Find this resource:

Hubel, D. H., & Wiesel, T. N. (1970). The period of susceptibility to the physiological effects of unilateral eye closure in kittens. Journal of Physiology, 206(2), 419–436.Find this resource:

Hubel, D. H., Wiesel, T. N., & Levay, S. (1975). Functional architecture of area-17 in normal and monocularly deprived macaque monkeys. Cold Spring Harbor Symposia on Quantitative Biology, 40, 581–589.Find this resource:

Hubel, D. H., Wiesel, T. N., & LeVay, S. (1977). Plasticity of Ocular Dominance Columns in Monkey Striate Cortex. Philosophical Transactions of the Royal Society B: Biological Sciences, 278(961), 377–409.Find this resource:

Janssen, P., Vogels, R., & Orban, G. A. (2000). Selectivity for 3D shape that reveals distinct areas within macaque inferior temporal cortex. Science, 288(5473), 2054–2056.Find this resource:

Janssen, P., Vogels, R., Liu, Y., & Orban, G. A. (2001). Macaque inferior temporal neurons are selective for three-dimensional boundaries and surfaces. Journal of Neuroscience, 21(23), 9419–9429.Find this resource:

Janssen, P., Vogels, R., Liu, Y., & Orban, G. A. (2003). At least at the level of inferior temporal cortex, the stereo correspondence problem is solved. Neuron, 37(4), 693–701.Find this resource:

Johnston, E. B. (1991). Systematic distortions of shape from stereopsis. Vision Research, 31(7–8), 1351–1360.Find this resource:

Johnston, E. B., Cumming, B. G., & Parker, A. J. (1993). Integration of depth modules: Stereopsis and texture. Vision Research, 33(5–6), 813–826.Find this resource:

Julesz, B. (1960). Binocular depth perception of computer-generated patterns. Bell System Technical Journal, 39(5), 1125–1162.Find this resource:

Kind, P. C., Mitchell, D. E., Ahmed, B., Blakemore, C., Bonhoeffer, T., & Sengpiel, F. (2002). Correlated binocular activity guides recovery from monocular deprivation. Nature, 416(6879), 430–433.Find this resource:

Kiorpes, L. (2006). Visual processing in amblyopia: animal studies. Strabismus, 14(1), 3–10.Find this resource:

Krug, K., & Parker, A. J. (2011). Neurons in dorsal visual area v5/mt signal relative disparity. Journal of Neuroscience, 31(49), 17892–17904.Find this resource:

Landy, M. S., Maloney, L. T., Johnston, E. B., & Young, M. (1995). Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research, 35(3), 389–412.Find this resource:

Leigh, R. J., & Zee, D. (2015). The neurology of eye movements. New York: Oxford University Press.Find this resource:

LeVay, S., Hubel, D. H., & Wiesel, T. N. (1975). Pattern of ocular dominance columns in macaque visual-cortex revealed by a reduced silver stain. Journal of Comparative Neurology, 159(4), 559–575.Find this resource:

Levelt, C. N., & Hübener, M. (2012). Critical-period plasticity in the visual cortex. Annual Review of Neuroscience, 35(1), 309–330.

Levi, D. M., Knill, D. C., & Bavelier, D. (2015). Stereopsis and amblyopia: A mini-review. Vision Research, 114, 17–30.

Levi, D. M., Vedamurthy, I., Knill, D., Huang, S., Yung, A., Ding, J., Kwon, O.-S., & Bavelier, D. (2016). Recovering stereo vision by squashing virtual bugs in a virtual reality environment. Philosophical Transactions of the Royal Society B: Biological Sciences.

Li, X., Dumoulin, S. O., Mansouri, B., & Hess, R. F. (2007). Cortical deficits in human amblyopia: Their regional distribution and their relationship to the contrast detection deficit. Investigative Ophthalmology and Visual Science, 48(4), 1575–1591.

Liu, Y., Vogels, R., & Orban, G. A. (2004). Convergence of depth from texture and depth from disparity in macaque inferior temporal cortex. Journal of Neuroscience, 24(15), 3795–3800.

Marr, D., & Poggio, T. (1976). Cooperative computation of stereo disparity. Science, 194(4262), 283–287.

Movshon, J. A. (1976). Reversal of the physiological effects of monocular deprivation in the kitten’s visual cortex. Journal of Physiology, 261(1), 125–174.

Nadler, J. W., Angelaki, D. E., & DeAngelis, G. C. (2008). A neural representation of depth from motion parallax in macaque visual cortex. Nature, 452(7187), 642–645.

Nienborg, H., & Cumming, B. G. (2006). Macaque V2 neurons, but not V1 neurons, show choice-related activity. Journal of Neuroscience, 26(37), 9567–9578.

Nikara, T., Bishop, P. O., & Pettigrew, J. D. (1968). Analysis of retinal correspondence by studying receptive fields of binocular single units in cat striate cortex. Experimental Brain Research, 6(4), 353–372.

Ohzawa, I. (1998). Mechanisms of stereoscopic vision: The disparity energy model. Current Opinion in Neurobiology, 8(4), 509–515.

Ohzawa, I., & Freeman, R. D. (1986). The binocular organization of complex cells in the cat’s visual cortex. Journal of Neurophysiology, 56(1), 243–259.

Ohzawa, I., Kato, D., Baba, M., & Sasaki, K. (2016). Effects of generalized pooling on binocular disparity selectivity of neurons in the early visual cortex. Philosophical Transactions of the Royal Society B: Biological Sciences.

Otto Roelofs, C., & Zeeman, W. P. C. (1914). Zur Frage der binokularen Helligkeit und der binokularen Schwellenwerte. Albrecht von Graefes Archiv für Ophthalmologie, 88(1), 1–27.

Parker, A. J. (2007). Binocular depth perception and the cerebral cortex. Nature Reviews Neuroscience, 8(5), 379–391.

Parker, A. J. (2009). Stereoscopic vision. In L. R. Squire (Ed.), Encyclopedia of neuroscience (pp. 411–417). Oxford, U.K.: Academic Press.

Parker, A. J. (2014). Cortical pathways for binocular depth. In L. Chalupa & J. S. Werner (Eds.), The new visual neurosciences. Cambridge, MA: MIT Press.

Parker, A. J., Johnston, E. B., Mansfield, J. S., & Yang, Y. (1991). Stereo, surfaces and shape. In M. Landy & J. A. Movshon (Eds.), Computational models of visual processing (pp. 359–381). Cambridge, MA: MIT Press.

Parker, A. J., Smith, J. E. T., & Krug, K. (2016). Neural architectures for stereo vision. Philosophical Transactions of the Royal Society B: Biological Sciences.

Parker, A. J., & Yang, Y. (1989). Spatial properties of disparity pooling in human stereo vision. Vision Research, 29(11), 1525–1538.

Pettigrew, J. D. (1993). Is there a single, most efficient algorithm for stereopsis? In C. Blakemore (Ed.), Vision: Coding and efficiency. Cambridge, U.K.: Cambridge University Press.

Pettigrew, J. D., & Konishi, M. (1976). Neurons selective for orientation and binocular disparity in the visual wulst of the barn owl (Tyto alba). Science, 193(4254), 675–678.

Pirenne, M. H. (1967). Vision and the eye. London: Chapman and Hall.

Poggio, G. F., Gonzalez, F., & Krause, F. (1988). Stereoscopic mechanisms in monkey visual cortex: Binocular correlation and disparity selectivity. Journal of Neuroscience, 8(12), 4531–4550.

Poggio, G. F., & Talbot, W. H. (1981). Mechanisms of static and dynamic stereopsis in foveal cortex of the rhesus monkey. Journal of Physiology, 315, 469–492.

Priebe, N., & McGee, A. W. (2014). Mouse vision as a gateway for understanding how experience shapes neural circuits. Frontiers in Neural Circuits, 8.

Prince, S. J. D., Pointon, A. D., Cumming, B. G., & Parker, A. J. (2002). Quantitative analysis of the responses of V1 neurons to horizontal disparity in dynamic random-dot stereograms. Journal of Neurophysiology, 87(1), 191–208.

Pugh, M. (1958). Visual distortion in amblyopia. British Journal of Ophthalmology, 42(8), 449–460.

Qiu, F. T. T., & von der Heydt, R. (2005). Figure and ground in the visual cortex: V2 combines stereoscopic cues with Gestalt rules. Neuron, 47(1), 155–166.

Rahi, J. S., Logan, S., Timms, C., Russell-Eggitt, I., & Taylor, D. (2002). Risk, causes, and outcomes of visual impairment after loss of vision in the non-amblyopic eye: A population-based study. Lancet, 360(9333), 597–602.

Rashbass, C., & Westheimer, G. (1961). Disjunctive eye movements. Journal of Physiology, 159(2), 339–360.

Read, J. C. A., & Cumming, B. G. (2003). Testing quantitative models of binocular disparity selectivity in primary visual cortex. Journal of Neurophysiology, 90(5), 2795–2817.

Read, J. C. A., Parker, A. J., & Cumming, B. G. (2002). A simple model accounts for the response of disparity-tuned V1 neurons to anticorrelated images. Visual Neuroscience, 19(6), 735–753.

Rogers, B., & Graham, M. (1982). Similarities between motion parallax and stereopsis in human depth perception. Vision Research, 22(2), 261–270.

Rogers, S. M., Harston, G. W. J., Kilburn-Toppin, F., Matheson, T., Burrows, M., Gabbiani, F., & Krapp, H. G. (2010). Spatiotemporal receptive field properties of a looming-sensitive neuron in solitarious and gregarious phases of the desert locust. Journal of Neurophysiology, 103(2), 779–792.

Sanada, T. M., Nguyenkim, J. D., & DeAngelis, G. C. (2012). Representation of 3-D surface orientation by velocity and disparity gradient cues in area MT. Journal of Neurophysiology, 107(8), 2109–2122.

Schneider, B., Moraglia, G., & Jepson, A. (1989). Binocular unmasking: An analog to binaural unmasking? Science, 243(4897), 1479–1481.

Sherrington, C. S. (1904). On binocular flicker and the correlation of activity of “corresponding” retinal points. British Journal of Psychology, 1, 26–60.

Taira, M., Nose, I., Inoue, K., & Tsutsui, K. (2001). Cortical areas related to attention to 3D surface structures based on shading: An fMRI study. Neuroimage, 14(5), 959–966.

Tanaka, H., & Ohzawa, I. (2006). Neural basis for stereopsis from second-order contrast cues. Journal of Neuroscience, 26(16), 4370–4382.

Thomas, O. M., Cumming, B. G., & Parker, A. J. (2002). A specialization for relative disparity in V2. Nature Neuroscience, 5(5), 472–478.

Ullman, S. (1979). The interpretation of structure from motion. Proceedings of the Royal Society of London B: Biological Sciences, 203(1153), 405–426.

Wallace, D. J., Greenberg, D. S., Sawinski, J., Rulla, S., Notaro, G., & Kerr, J. N. D. (2013). Rats maintain an overhead binocular field at the expense of constant fusion. Nature, 498(7452), 65–69.

Westheimer, G. (1979). Cooperative neural processes involved in stereoscopic acuity. Experimental Brain Research, 36(3), 585–597.

Westheimer, G. (1986). Panum’s phenomenon and the confluence of signals from the two eyes in stereoscopy. Proceedings of the Royal Society of London B: Biological Sciences, 228(1252), 289–305.

Wheatstone, C. (1838). Contributions to the physiology of vision: I. On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philosophical Transactions of the Royal Society of London, 128, 371–394.

Wheatstone, C. (1852). The Bakerian lecture: Contributions to the physiology of vision. Part the second. On some remarkable, and hitherto unobserved, phenomena of binocular vision (continued). Philosophical Transactions of the Royal Society of London, 142, 1–17.

Wiesel, T. N., Hubel, D. H., & Lam, D. M. K. (1974). Autoradiographic demonstration of ocular-dominance columns in monkey striate cortex by means of transneuronal transport. Brain Research, 79(2), 273–279.

Zeki, S. M., & Mackay, D. M. (1979). Functional specialization and binocular interaction in the visual areas of rhesus monkey prestriate cortex [and discussion]. Proceedings of the Royal Society of London B: Biological Sciences, 204(1157), 379–513.