Spatial Vision for Action

  • Eli Brenner, Vrije Universiteit
  • Jeroen B. J. Smeets, Vrije Universiteit

Summary

The way we see the world seems perfect, but it is not. What we see at any moment is based on a very limited part of the information that is available to us, and even details of that part are not always judged correctly. Moreover, perception is often inconsistent. There are persistent idiosyncratic discrepancies between visual and haptic spatial judgments. Even within the visual modality, related attributes such as size and position can be judged in a manner that is inconsistent with the physical relationship between them. People deal with all these differences and inconsistencies by selecting the best attributes to rely on for the task at hand and updating the information whenever possible. Doing so is presumably responsible for people’s proficiency in interacting with their environment, even when faced with the constantly changing spatial relationships with objects in the environment that result from using tools or that arise from the observer or the object moving. The best information to use depends not only on the goal of the action but also on how quickly and how reliably information can be acquired. This makes it complicated to make general claims about spatial vision for action, but it also provides unique opportunities to determine which attributes are used to guide our actions and evaluate why. Such opportunities can be used to identify the attributes that are used to perform a task, for instance revealing that judgments of position rather than size are used to determine how far to open one’s grip when grasping an object. They can also be used to determine how information guides ongoing movements, showing that judgments of position are continuously updated rather than inferred from judged motion. It is evident that we still have a lot to learn about how spatial vision guides action.

Subjects

  • Biological Foundations of Psychology

This article describes the way in which visual information about spatial properties of objects themselves and of their arrangement in the world (spatial vision) guides human actions. Spatial vision is used for much more than guiding movements: It is crucial for recognizing which strawberry is largest or for judging whether a colleague is angry. However, this article focuses on its role in guiding interactions: grasping the strawberry and veering away from the angry colleague. The first step is to discuss how spatial vision might be involved in guiding ongoing movements. Next, two common, compelling, but incorrect assumptions about spatial vision in general are discussed. The first assumption is that people constantly see everything that is within their field of view. The second assumption is that there is consistency in what people see; that errors in judging a certain attribute (such as an object’s speed) are necessarily accompanied by corresponding errors in judging physically equivalent attributes (changes in the object’s position).

The works of James J. Gibson (1966, 1979) introduced many scientists to the idea that certain aspects of visual spatial information could directly guide actions. Attempts to identify the information that guides actions include studies on how the pattern of motion that arises across the visual field as an observer moves (the instantaneous optic flow) might guide balance control (e.g., Lishman & Lee, 1973) or guide the speed (e.g., Salinas et al., 2017) and direction (e.g., Warren et al., 2001) of locomotion. They also include studies on how specific regularities in the changes of more localized features could guide actions such as running to catch a ball (Chapman, 1968) or rotating one’s body to walk through an opening (Warren & Whang, 1987). The overarching idea behind Gibson’s approach was that people continuously rely on very specific visual information to guide their actions, rather than planning the whole movement in advance on the basis of all the visual information that is available at that time.

Although the idea underlying Gibson’s approach is simple, actually searching for the information that guides specific human actions is not. Comparing measured movements with predictions based on certain information can sometimes be quite straightforward (e.g., Yilmaz & Warren, 1995), but how does one decide which predictions to consider? Redundancy in the available sources of information may make it advantageous to combine information from various sources (Rushton & Wann, 1999). Moreover, some aspects of the way movements unfold may be planned rather than emerging from continuous guidance, such as the digits’ paths curving during a grasping movement to ensure that the digits are moving more or less orthogonally to the surfaces when they make contact (Smeets & Brenner, 1999). Predictions have also often ignored constraints imposed by the resolution of sensory judgments and inevitable neuromuscular delays (Brenner & Smeets, 2018a). However, although we may not yet know exactly what information guides our movements, the overall idea that actions are guided by a selection of the information that is available makes sense for many reasons. Not only are some attributes more relevant for guiding certain actions than others, but some attributes may not be used simply because other attributes can be judged faster or more reliably (see Box 1 for an explanation of the broad use of the term judgment).

Box 1. Terminology

Judgment. For lack of a better word, we refer to extracting information about an attribute from the sensory input as judging the value of that attribute, irrespective of whether or not the person in question is aware of making a judgment.

Attuning. We refer to any adjustment to a movement to fit the circumstances as attuning the movement. This includes quick adjustments in response to feedback or perturbations, the gradual adjustments to systematic mismatches that are often referred to as adaptation, and mechanisms for selecting an appropriate movement under the prevailing circumstances on the basis of prior experience.

Goodale and Milner (1992) proposed that the visual processing required for guiding actions might be very different from that required for recognizing objects. They combined this idea with the abundant evidence that object identity is processed in different visual areas than object location (the what and where pathways; Haxby et al., 1991) to argue that the dorsal visual pathway is specialized in localization because it is responsible for the visual processing that guides actions. They proposed a distinction between a ventral visual pathway that is responsible for the perception of objects’ enduring properties and a dorsal visual pathway that is responsible for guiding actions on the basis of instantaneous spatial information. This distinction is in line with many neurological and neurophysiological findings (Milner & Goodale, 2006). A strict separation between visual processing for perception and action is unlikely to be correct (Rossetti et al., 2017), but it is evident that there is some relationship between visual attributes being likely to guide ongoing movements and being processed in the dorsal visual pathway. This discussion focuses on what we know about the attributes that guide actions rather than on the pathways that are involved.

An evident step in determining what attributes an action is likely to rely on is to consider what is likely to be relevant for the action at each instant. For instance, in order to select the biggest strawberry, one has to identify the strawberries and judge their sizes. To pick up the selected strawberry, one must judge the positions of relevant parts of the biggest strawberry’s surface: the parts by which one intends to pick it up. These two consecutive stages of the action both require spatial judgments about the biggest strawberry, but they are judgments about different attributes: its size and the locations of suitable positions on its surface. These attributes are physically related: the distance between the locations on opposite sides of the strawberry depends on its size. However, it is logical to judge the strawberry’s size differently than the locations to which one will move one’s fingers to grasp it: the strawberry’s size can be judged on the basis of the size of its retinal image and some judgment of distance, whereas judging the location of positions on its surface requires a combination of retinal location and eye orientation.

In general, any attribute that can be perceived can contribute to the selection and planning of actions. Which attributes are actually used depends on the task’s demands. A result of people using different attributes to achieve different goals is that it is difficult to predict performance on one task based on performance on another. On the bright side, if a judgment about a certain attribute can be associated with a specific error, the extent to which that judgment contributes to achieving a goal can be estimated by looking for that error. The specific error may exist naturally, but it can also be introduced experimentally.

For selecting the largest strawberry, it is evident that one must judge the strawberries’ sizes. There may be questions about what cues to use to judge the depth that is needed to interpret retinal image size and how to consider occluded parts, but the relevant attribute is evident. For selecting the positions on the strawberry’s surface at which one can best place one’s fingers to grasp the strawberry, it is evident that one must judge the strawberry’s location (Paulignan et al., 1997; Schot et al., 2010), shape (Blake, 1992; Goodale et al., 1994b), and the locations of other objects (Tresilian, 1998; Vaughan et al., 2001; Voudouris et al., 2012b). These locations, together with the surface orientation at the corresponding positions, are the attributes that are used to guide the final part of the digits’ movement before contact (Kleinholdermann et al., 2007). What is used to determine the extent to which the thumb and fingers move apart during the earlier parts of the grasp? This could be based on the same attributes (Smeets & Brenner, 1999), but many authors have assumed that it is based on the judged size of the strawberry (following Jeannerod, 1988). When trying to infer which attributes guide an observed movement, it is important not to make unjustified assumptions. One must keep in mind that only part of the available information might be considered and that there can be inconsistencies between judgments about related attributes.

Current knowledge about how spatial vision guides everyday actions is still quite limited, despite much excellent research on a variety of details. By combining many approaches, some consensus may be achieved about whether and how certain attributes are used. The emphasis here is on determining which spatial attributes are used to guide common ongoing movements. Sometimes it may be clear that certain attributes are not suitable for performing a task because they are altogether irrelevant for that task or take too long to process to be of any use in a task in which the situation is rapidly changing. Thus, the way in which ongoing movements are controlled may depend on details such as the speed of the movement. Before presenting two examples of the complexity of studying how spatial vision guides our actions, reaching out to grasp an object and intercepting a moving object, the aspects of spatial vision that are likely to be important for guiding actions are briefly introduced, and the evidence against the two assumptions that were mentioned in the first paragraph of this section is presented.

Introduction to Spatial Vision for Action

Spatial vision refers to extracting information about the layout of the surroundings from the light entering the eyes. This information consists of a wide variety of spatial attributes, such as where surfaces are; their sizes, shapes, and orientations; and any changes in their positions, either relative to the observer or to other surfaces. It is concerned with spatial attributes as opposed to attributes such as the surfaces’ reflective properties, which are primarily relevant for identifying objects or judging the material of which they are made.

Most attributes cannot be extracted directly from the image on the retina. For instance, an object’s size cannot be determined from the retinal size of its image alone, because the retinal image size also depends on the egocentric distance to the object or its relative distance with respect to other objects of known size. Judging egocentric distance in different ways could lead to different judgments of object size. Another attribute that can be judged in various ways is surface slant. Slant can be judged from binocular disparities, but if the surface is textured (either because there is some structure in its reflectance or because it is not completely smooth), gradients in the pattern of its retinal image can help to extract information about the surface’s slant relative to the line of sight. In doing so, the implicit assumption is made that the true surface texture is uniform in some respect (see Figure 1A; Todd & Oomes, 2002). Similarly, a surface’s orientation at a given point can be extracted from the luminance at the corresponding point on the retina, but doing so involves making implicit assumptions about the surface’s reflectance and the illumination (Khang et al., 2007; Todd et al., 2014). Such assumptions are made automatically, so people rely on information based on such assumptions in their actions. For instance, they rely on texture isotropy when they interact with a slanted, textured surface (Knill, 2005).
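
As a minimal illustration of this distance dependence (using the small-angle approximation; the symbols below are ours, introduced only for this sketch), the judged size scales directly with the judged distance:

```latex
% theta = visual angle subtended by the object's retinal image,
% s = the object's actual size, d = its egocentric distance (illustrative symbols).
\theta \approx \frac{s}{d}
\qquad\Longrightarrow\qquad
\hat{s} \approx \theta \, \hat{d}
```

Any error in the judged distance therefore translates into a proportional error in the judged size, which is why judging the distance in different ways can lead to different judgments of the object’s size.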

Figure 1. How implicit assumptions influence spatial judgments. (A) The items are assumed to be identical and symmetrical, so the textured surface created by these gray disks is perceived as slanted. (B) The gray items that share a border with the black square are perceived as partially occluded disks. (C) Even if one of the disks is completely occluded one is likely to infer its presence, probably because it would be quite coincidental for precisely the disk that is completely hidden by the square to be missing. (D) This is even so when the square itself is inferred from occlusion (Kanizsa, 1976; van Lier, 1999). (E) The presence of partially occluded disks may not even be essential. (F) However, there must be some reason to believe that there might be an occluded disk.

The most direct spatial information about visually perceived structures is their egocentric direction. Each retinal image provides an ordered representation of directions with respect to the eye, and thereby direct information about structures’ directions relative to the observer’s eye. There is generally only one visible item in any direction (ignoring areas that are visible to one eye but not the other) because most surfaces occlude everything that is behind them. Therefore, people often have to infer properties of items that are partially or even completely hidden, and they do so quite readily (see Figure 1B–E). The capacity to infer properties of hidden surfaces is essential in the control of actions. When grasping objects, people do not hesitate to place their digits at positions on surfaces that are hidden from view (Voudouris et al., 2012a). They use memory of where things were to guide the movements of their eyes (Aivar et al., 2005) and hand (Brouwer & Knill, 2007, 2009) if doing so is opportune.

Irrespective of whether a target position is visible, inferred, or remembered, in order to guide one’s finger to that position the orientation of the eyes in the head and all the angles of the joints connecting the finger to the head must be considered. In some cases, it might seem advantageous to circumvent some of this complexity by not using egocentric positions, relying directly on the position of the finger relative to the target instead. This alternative attribute (relative position) is directly available from the retinal image. It might be especially advantageous to use the relative position when moving a cursor across a screen with a mouse, because there is no direct way to relate the position of the cursor on the screen to that of the hand holding the mouse through proprioception. However, studies in which a cursor had to quickly be moved to a target show that the cursor and target are localized independently rather than relative to each other (Brenner & Smeets, 2003; Crowe et al., 2021; Franklin et al., 2016).
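
To make concrete why the eyes’ orientation and the joint angles matter here, a deliberately simplified planar sketch follows. All names, angles, and segment lengths are hypothetical choices of ours for illustration; the point is only that the target’s egocentric direction combines its retinal direction with the eye’s orientation, whereas the fingertip’s egocentric position follows from the joint angles, so relating the two requires both kinds of information.

```python
import math

# Simplified planar (2D) sketch: everything is expressed in a head-centered frame.
# All numbers and names are hypothetical, chosen only for illustration.

def target_direction_headcentered(retinal_eccentricity_deg, eye_orientation_deg):
    """Egocentric (head-centered) direction of a target seen at some retinal
    eccentricity while the eye is rotated by eye_orientation_deg."""
    return retinal_eccentricity_deg + eye_orientation_deg

def fingertip_position_headcentered(shoulder_xy, shoulder_deg, elbow_deg,
                                    upper_arm=0.30, forearm=0.35):
    """Fingertip position from a two-joint planar arm (segment lengths in meters)."""
    sx, sy = shoulder_xy
    a1 = math.radians(shoulder_deg)
    a2 = math.radians(shoulder_deg + elbow_deg)
    ex, ey = sx + upper_arm * math.cos(a1), sy + upper_arm * math.sin(a1)
    return ex + forearm * math.cos(a2), ey + forearm * math.sin(a2)

# A target 10 deg to the right on the retina while the eye is rotated 5 deg to the
# left lies 5 deg to the right of straight ahead:
print(target_direction_headcentered(10, -5))          # -> 5
# The fingertip's egocentric position depends on all the joint angles:
print(fingertip_position_headcentered((-0.2, -0.3), 60, -30))
```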

Judging the distance to a surface is more complicated than judging directions. The most direct source of information about egocentric distance is the difference in direction with respect to the two eyes. However, the modest separation between the eyes limits how precisely distance can be judged in this manner, because both the difference between the orientations of the two eyes when fixating an object and the differences between the images in the two eyes are small. This is presumably why many other sources of information about egocentric distance are also considered (for reviews of depth cues and their resolutions at different distances, see Brenner & Smeets, 2018b; Cutting & Vishton, 1995). When one moves, the change in a static structure’s direction relative to oneself can be combined with knowledge about how one has moved to judge the structure’s distance (de la Malla et al., 2016). When an object is on a horizontal surface, the height in the visual field of the position at which the object touches the surface can be combined with knowledge about how far the surface is below one’s eyes to judge the object’s distance (Dixon et al., 2000). When one has some idea of an object’s size, its retinal image size can be combined with such knowledge to judge its distance (McIntosh & Lashley, 2008; Sousa et al., 2012). In line with the findings on direction in the previous paragraph, studies in which a cursor had to quickly be moved to a target in depth also suggest that the cursor and target are localized independently rather than relative to each other (Brenner & Smeets, 2006).
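
A rough geometric illustration of why the small separation between the eyes is limiting (small-angle approximation; the symbols are ours): for an interocular separation I and a fixation distance d, the vergence angle is roughly I/d, so a fixed uncertainty in sensing that angle corresponds to an uncertainty in distance that grows with the square of the distance.

```latex
% I = interocular separation, d = fixation distance, gamma = vergence angle
% (illustrative symbols, small-angle approximation).
\gamma \approx \frac{I}{d}
\qquad\Longrightarrow\qquad
\left|\frac{\partial d}{\partial \gamma}\right| \approx \frac{d^{2}}{I}
```

With an interocular separation of roughly 6.5 cm, the same small error in registering the vergence angle therefore corresponds to a much larger error in distance at 2 m than at 50 cm.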

If egocentric directions and distances guide human actions, is the position of a target relative to other items in the environment completely irrelevant for guiding actions? There is some evidence that changing the positions of relevant items in a scene can influence actions to a remembered target position, but only if the other items are relevant and one does not notice that they have been moved (Fiehler et al., 2014; Lu & Fiehler, 2020). One way to interpret this is that the movement is still directed toward a remembered egocentric position and that moving relevant reference items influences where one remembers this position to have been.

The distinction between positions relative to oneself and positions relative to other objects is critical for the interpretation of the anatomical segregation of the primate cortex into two visual streams. The idea that ongoing actions are normally not guided by relative positions is fundamental to the interpretation of the classical distinction between a what and a where pathway (Haxby et al., 1991; Mishkin et al., 1983) as a distinction between pathways dedicated to extracting information that is constant (i.e., object identity) and information that is constantly changing (such as positions relative to oneself). In addition to nonspatial properties, such as surface reflectance, object identity includes spatial properties, such as the object’s size and shape (including the relative positions of the object’s components). The idea of segregating different kinds of spatial information is consistent with evidence that the processing of the instantaneous egocentric position and motion of relevant objects relative to the observer takes place in the dorsal visual cortical pathway, while most processing of attributes that are relevant for judging the objects’ identities takes place in the ventral visual pathway (Goodale et al., 2004; Milner & Goodale, 2008; Schenk, 2006).

Some have been tempted to take this distinction one step further, proposing that there is a fundamental distinction between judgments that are used to guide actions and those that are used to recognize objects (Milner & Goodale, 2008; Pisella et al., 2000). The latter distinction is often referred to as a distinction between action and perception (Goodale, 2011; Goodale & Milner, 1992). This step is not just a change in terminology. Attributes such as size and shape do not constantly change. Neither do the materials of which an object is made (and therefore its likely weight distribution). Nevertheless, an object’s size and shape, the materials of which it is made, and even knowing how the object is used if one recognizes it can all influence how an object is grasped (Feix et al., 2014; Klein et al., 2020; Paulun et al., 2016). One might argue that deciding where to place one’s fingers on an object in order to pick it up is not part of the action (e.g., because such positions are likely to have been chosen before the movement started), but there is evidence that people can adjust their choice of grasping points during an ongoing movement (van de Kamp et al., 2009; Voudouris et al., 2013) just as they can adjust the choice of a target while already moving (Brenner & Smeets, 2015; Resulaj et al., 2009). Thus, although the dorsal visual pathway presumably processes the information that is most extensively used to guide movements, a fundamental segregation between visual processing for perception and for action, where at least some spatial attributes are processed differently for perception than for action, is not very likely. Moreover, contrary to the idea that the dorsal visual pathway is immune to contextual information, experimental evidence has shown that illusions do influence the dorsal stream (Walter & Dassonville, 2008; Weidner & Fink, 2007) and thereby influence actions (de Brouwer et al., 2015; de la Malla et al., 2019; Somers et al., 2000). Thus, the anatomical segregation between the two visual pathways probably reflects a distinction between attributes that are constantly changing because they rely on spatial positions relative to oneself, and must presumably be combined with extra-retinal information, and attributes that rely on more persistent properties and relationships.

Ongoing movements may be guided by egocentric positions, orientations, and motion, but how about the role of spatial vision in the initiation of actions? How do people decide whether to try to jump across a stream or to walk through a narrow gap? An individual probably knows from experience how far they can easily jump, and therefore whether the stream is easily jumpable, possibly jumpable, or impossible to jump across (such understanding of one’s ability with respect to items in the environment is often referred to as perceiving an affordance; Gibson, 1979). Similarly, if a doorway is clearly wide enough to pass through, a person will just walk through it. If it is just passable they will rotate their shoulders to walk through. If it is clearly too narrow to pass through, they will usually not try (Fath & Fajen, 2011; Warren & Whang, 1987).

Judgments about action capabilities have also been studied by asking participants to indicate whether they could step onto (Warren, 1984), jump or step over (Day et al., 2015), pass through (Warren & Whang, 1987), or extend their arm to reach (Carello et al., 1989) something. It is easy to imagine how the ability to make such judgments could arise from experience, and such judgments are flexible in the sense that they change if the situation is changed, for instance by giving people a tool with which to reach for the object (Bourgeois et al., 2014). Since it is rather trivial that one can reach positions that are farther away with a stick than without one, attempts have been made to show that the ability to reach something actually influences its judged distance, and not just judgments about whether it is reachable. Some studies have found that it does (Bourgeois et al., 2014; Witt et al., 2005), while others have not (de Grave et al., 2011).

Returning to visually guiding ongoing movements, people cannot avoid adjusting their movement to a target perturbation, even when they know they should stop moving if the target is perturbed (Pisella et al., 2000). Similarly, when an object that they are reaching out for is rotated, they cannot avoid first adjusting their grip to the changed positions of the original grasping points before switching to new grasping points (Voudouris et al., 2013). This is because it takes less time to adjust the ongoing movement to visual information about the original endpoint than to reconsider the endpoint or the movement altogether (Smeets et al., 2016). This is consistent with visual information about the egocentric positions of selected points on surfaces being processed in brain areas that are close to and well connected with motor areas (e.g., the dorsal cortical pathway, and possibly subcortical pathways involving areas such as the superior colliculus; Reynolds & Day, 2012).

Vision Is Very Selective

As mentioned previously, it is important to be aware of two common, compelling, but incorrect assumptions about spatial vision. The first assumption is that people constantly register everything that is within their field of view. Being aware that this is not true is an important step toward treating vision as an active process rather than as a passive accumulation of information. There is obviously some accumulation of registered information about some aspects of what we see (often referred to as memory), but this is only a tiny portion of what was visible at the time. One cannot simply recall a scene an hour later to recover details that one had initially missed. Contrary to our subjective experience, we do not register everything that is within our field of view; we only process what is directly relevant to the task at hand (Triesch et al., 2003), as will be explained in more detail in the rest of this section. This does not mean that we can always completely ignore everything else. It is evidently important not to ignore things that could warn us of dangerous situations. Consequently, it is almost impossible to ignore sudden changes of potential relevance (Mulckhuyse et al., 2008; Schreij et al., 2008). However, the visual processing that we perform is usually quite well tuned to the task.

Our ability to register precisely what we need is probably why so many things go by unnoticed. A very simple example is that we do not notice that our spatial resolution is quite poor except near where we are looking. The poor spatial resolution is simple to verify by trying to read this text while intentionally looking to the side. People normally do not notice how steeply their spatial resolution decreases with eccentricity; if they want to identify a structure they quickly direct their gaze toward it. A phenomenon known as change blindness, whereby people fail to detect obvious changes in a scene, also illustrates the selectivity of visual processing. If a single object suddenly changes, people immediately see the change (Movie 1); but if the change occurs while the image is moving across the retina (Schofield et al., 2006; Movie 2), as might happen during a saccade (Henderson & Hollingworth, 2003); is accompanied by other global (Blackmore et al., 1995; Movie 3) or even local (O’Regan et al., 1999; Movie 4) changes; or occurs very gradually rather than abruptly (Simons et al., 2000), it can take quite a long time to find the change. This is the case even though the task is to detect the change. Thus, the only reason that an isolated change is detected so easily is that the change itself reveals where the relevant information is to be found. Masking the change by also having changes occur elsewhere removes this cue, so one has to actually search for the changing item.

Movie 1. It is easy to detect which of the 64 items is regularly changing color.

Movie 2. It is difficult to detect the change in color if all the items shift together when the item’s color changes.

Movie 3. It is also difficult to detect the change in color if the whole image briefly disappears when the item’s color changes.

Movie 4. It is even difficult to detect the change in color if the item is briefly occluded when its color changes.

Even extremely conspicuous events, such as a gorilla walking through a scene, can go by unnoticed if the observer is performing an unrelated task at the time, such as counting the number of times a ball is thrown (Simons & Chabris, 1999). Moreover, even changing a task-relevant property of an object that one is manipulating can go by unnoticed if the change does not take place at a moment at which that property is critical. This was shown in an experiment in which virtual objects had to be sorted by size by placing them on one of two virtual conveyer belts (Triesch et al., 2003). Participants were instructed to first move the large objects to one belt and then the small objects to another. The experiment was designed in such a way that participants looked at the object while reaching out to pick it up and then shifted their gaze to the conveyor belt to guide the placing movement. Near the time of the shift in gaze the object sometimes changed size. Participants often did not notice that the size had changed because the change took place during the saccade from the object to the conveyor belt. Interestingly, they placed the object on the wrong belt for its final size despite the fact that they were looking at the object again when placing it on the conveyor belt. Presumably they did not notice the new size because size determined which object was to be picked up, so the size had already been judged when reaching for the object. As size is a persistent property of objects, there was no reason to judge it again when executing the placing movement a fraction of a second later.

This interpretation is supported by a similar study in which the information that was relevant for placing the object could differ from the information that was relevant for picking it up; each could depend on the object’s height, width, color or texture (Droll & Hayhoe, 2007). Again, the most important trials were occasional ones in which the object’s appearance changed. On more than half of such trials the participant did not notice the change. When participants knew that the same information was relevant for picking up and placing the object, as in the former study, they mostly placed the object on the conveyor belt that corresponded with the object’s original appearance. When information about where to place the object was only provided once they had picked up the object, they mostly placed the object on the conveyor belt that corresponded with the object’s final appearance. This combination of findings nicely illustrates how details of the task determine not only what attributes are processed but also when they are processed.

The reason people believe that they see everything within their field of view despite not assessing many attributes is presumably that at any moment they can gain access to information about anything they are interested in, so they never notice that any information is missing (O’Regan & Noë, 2001). Actually, the whole idea that one could notice missing information is strange, because no information is really missing; the information is simply not being processed.

Many of the limitations described here are about our conscious experience of what we see. Processing information to guide ongoing movements may not be limited in quite the same way. It has long been known that people adjust their goal-directed movements in response to changes in target locations that they do not notice (Goodale et al., 1986). In fact, people are generally not aware of continuously adjusting their ongoing movements on the basis of ever-changing judgments of egocentric positions (Brenner & Smeets, 2018a). It is therefore worthwhile to study the role of spatial vision in guiding ongoing movements and the conscious perception of space separately. Even if all the available information is always processed, and the reason that information is missed in change-blindness tasks is that only a selection reaches conscious awareness, it is likely that a similar selection from the available information guides each aspect of people’s actions.

Vision Is Not Always Consistent

The second common, compelling, but incorrect assumption about spatial vision is that related spatial attributes are judged in a manner that is consistent with the laws of physics. One generally has the impression of seeing a coherent world, in which an approaching car changes its position over time in accordance with its speed. However, the consistency between physically equivalent attributes is not always found for judgments of those attributes. This is because related attributes are judged separately, with each judgment weighting the components of the available information according to how relevant they are for correctly judging that attribute. For instance, the rate at which an approaching car’s retinal image is expanding provides useful information for assessing the car’s speed, but it is not very informative about its instantaneous position. And indeed, variations in the rate of retinal image expansion can influence judgments of a target’s velocity more than they do judgments of its position at corresponding moments (Brenner et al., 1996).
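
A simple way to see why the image’s expansion is informative about the approach but not about the instantaneous position (small-angle approximation; the symbols are ours):

```latex
% s = the car's width, d = its current distance, v = its approach speed,
% theta = the visual angle its image subtends (illustrative symbols).
\theta \approx \frac{s}{d}
\qquad\Longrightarrow\qquad
\frac{\dot{\theta}}{\theta} \approx \frac{v}{d} = \frac{1}{\tau}
```

The relative rate of expansion specifies the time remaining until contact if the speed remains constant, so given some idea of the distance it is informative about the speed, but it says nothing about where the car is at this moment.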

Systematic errors in judging the value of a specific attribute could arise from considering information that is specific to that attribute and is based on assumptions that are violated. For instance, using perspective cues to judge a surface’s slant is based on assuming that shapes are symmetrical or texture is isotropic, so one will misjudge the slant if these assumptions are violated. The extent to which the slant is misjudged will depend on how such incorrect information from perspective is combined with correct information from other cues. With no additional knowledge about the true value of the attribute in question, minimizing the uncertainty of the combined estimate is the best one can do. In general, estimates from conflicting cues are indeed combined in ways that maximize the overall precision (Ernst & Banks, 2002; Fetsch et al., 2010; Hillis et al., 2004; Knill & Saunders, 2003; Muller et al., 2009; Scarfe & Hibbard, 2011). However, if feedback reveals that combining the cues in this manner gives rise to judgments that are incorrect, despite being precise, the accuracy is also considered (i.e., cues providing information that is inconsistent with the feedback are subsequently given less weight; Cesanek et al., 2020; van Beers et al., 2011). With some cues only contributing to certain judgments or being given different weights for some judgments than for others, one should not be surprised to occasionally encounter situations in which judgments about related attributes are not consistent with each other (Brenner & van Damme, 1999).
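
The standard formulation of such precision-maximizing cue combination (e.g., Ernst & Banks, 2002) is a reliability-weighted average; the notation below is generic rather than taken from any particular study:

```latex
% \hat{s}_i = estimate of the attribute from cue i, \sigma_i^2 = its variance
% (generic notation).
\hat{s} = \sum_i w_i \,\hat{s}_i ,
\qquad
w_i = \frac{1/\sigma_i^{2}}{\sum_j 1/\sigma_j^{2}} ,
\qquad
\sigma_{\hat{s}}^{2} = \frac{1}{\sum_j 1/\sigma_j^{2}}
```

A cue that is precise but biased therefore pulls the combined estimate toward its biased value, which is why feedback that reveals inaccuracy can lead to such a cue subsequently being given less weight.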

Another reason errors might not always be consistent across attributes is that different attributes may make different use of information within different reference frames. Two identical parallel lines placed side by side (Figure 2A) appear to have about the same length and orientation, and the tops and bottoms of the two lines appear to be aligned vertically. This is a consistent set of judgments. When these same lines are superimposed on an image of a scene (Figure 2B), they no longer appear to have the same orientation or length, possibly because it is impossible to completely ignore the depth portrayed in the scene when judging their lengths and orientations (Gregory, 1963). However, if one decides to check whether the line on the left is really longer than the one on the right by checking the vertical alignment of the two lines’ endpoints, one will discover that the top and bottom of the two lines appear to be aligned vertically, which is inconsistent with the line on the left appearing to be both less slanted and longer. In accordance with a lack of consistency across judged attributes, attempts to define a single coherent visual space (e.g., Cuijpers et al., 2000) have often failed. If there is no guarantee of consistency across attributes, such a space cannot exist (Smeets et al., 2009).

Figure 2. How reference frames can influence spatial judgments. (A) Two identical slanted lines placed next to each other appear to have about the same length and orientation. That is not surprising because they are equally long and parallel. (B) Superimposing the same two lines on the image of a scene can change this. The line on the right now looks shorter and slanted further from vertical, probably because one cannot ignore the reference frame provided by the scene. When considered as part of the scene, the line on the left would really be longer and less slanted because it appears to be on the road so that it recedes in depth, whereas the line on the right appears to be parallel to the wall so that it does not recede in depth and is altogether nearer the observer.

Illusions provide a useful tool to study how spatial vision guides action because illusions sometimes influence certain attributes while leaving others unaffected, or at least not influenced to the same extent. For instance, variants of the Müller-Lyer illusion influence judged line length without having a similar influence on the judged positions of the lines’ endpoints (Gillam & Chambers, 1985; Smeets et al., 2002b). Another example arises after adapting to motion for some time: a static scene appears to move but does not appear to change position accordingly (Anstis et al., 1998). Specially designed stimuli can also give the impression of continuous motion without a corresponding change in position (Mather, 2006; Tse & Hsieh, 2006).

Despite the abundance of examples of inconsistencies between attributes in spatial vision, and the evidence that visual attributes are processed separately in distinct brain areas (e.g., Fattori et al., 2009; Grill-Spector & Malach, 2004), the myth of consistency persists. It remains surprising for the judged distance between two positions not to correspond with the distance between their judged positions (Smeets & Brenner, 2019) or for an object to appear to move faster than another but not to move farther in the same amount of time (Brenner et al., 1996; Smeets & Brenner, 1995; see Movie 5). A likely reason for the surprise is that we normally do not compare attributes in this manner (or try to combine them to construct a coherent global representation of the scene), but rather judge all the attributes that are relevant to the demands of the current task at the moment that we need them. Judging each attribute as precisely as possible will normally result in judgments that are consistent enough for whatever one is doing, simply because the physical basis is consistent.

Movie 5. Watch the two gray dots move counterclockwise along the circular path. The dot on the right clearly moves faster than the one on the left, and yet they reach the red line that bisects the circular path at the same time.

If one embraces the lack of consistency between judgments about related attributes, one can manipulate images to primarily influence a specific judgment and use the resulting lack of consistency to evaluate what visual information is used to guide our actions, and maybe even how it is used. Unfortunately, despite the abundant evidence that judgments need not be consistent, many researchers still implicitly assume consistency when interpreting their data, leading to incorrect conclusions and, more importantly, to missed opportunities. Examples involving assumed consistency between judged positions and the judged distance between them, and between judged motion and judged changes in position, are provided in the “Size and Grasping” and “Interacting with moving objects” sections, after discussing several other issues.

Differences Between Seen and Felt Space

One might worry that errors, biases, and inconsistencies in spatial vision not only make it difficult to characterize the geometrical properties of spatial vision (Cuijpers et al., 2000, 2002; Todd et al., 2001) but also make it impossible for spatial vision to reliably guide actions. However, to guide actions there is no need for geometrical consistency, as long as the brain can convert visual judgments about spatial attributes into appropriate motor commands (Braddick & Atkinson, 2013; Georgopoulos, 1994; Körding & Wolpert, 2006). Errors in judging a relevant attribute will influence the way a movement is made, but they need not prevent one from reaching one’s goal, because actions are normally continuously controlled using the latest visual information about both the target and the moving limb (Brenner & Smeets, 2018a). Thus, for instance, systematically misjudging the direction to a target by 5° will make one start to move in the wrong direction, but even if the direction continues to be misjudged throughout the movement, adjusting the trajectory will ensure that one reaches the target, although the trajectory will be curved (Smeets & Brenner, 2004).
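
The claim about the 5° misjudgment can be illustrated with a deliberately simple simulation (a sketch under assumptions of our own: discrete time steps, a constant movement speed, and a direction that is re-judged, with the same bias, at every step; none of these details are taken from the cited studies):

```python
import math

# Minimal sketch: the mover always heads toward where it currently *judges*
# the target to be, but every judgment of the target's direction is rotated
# by a constant 5 degrees. All parameter values are arbitrary illustration values.
BIAS_DEG = 5.0
STEP = 0.02            # distance covered per time step (arbitrary units)
target_x, target_y = 1.0, 0.0
x, y = 0.0, 0.0
path = [(x, y)]

while math.hypot(target_x - x, target_y - y) > STEP:
    true_direction = math.atan2(target_y - y, target_x - x)
    judged_direction = true_direction + math.radians(BIAS_DEG)  # persistent misjudgment
    x += STEP * math.cos(judged_direction)   # move toward the judged direction
    y += STEP * math.sin(judged_direction)
    path.append((x, y))

# The movement ends at the target, but the path is curved rather than straight:
print(f"steps taken: {len(path) - 1}, end point: ({x:.3f}, {y:.3f})")
print(f"largest deviation from the straight line to the target: "
      f"{max(abs(py) for _, py in path):.3f}")
```

Because the misjudged direction is updated continuously, the heading error never accumulates: each step still reduces the remaining distance, so the mover reaches the target along a curved path.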

Having decided in which direction to move, one has to activate the correct muscles. Which motor commands will do the job depends on the position of the limbs (Sober & Sabes, 2003). Proprioception is therefore essential for this decision (Gordon et al., 1995). Since the movements of the limbs are also continuously controlled on the basis of proprioceptive feedback (Manning et al., 2012; Sittig et al., 1987), as is evident from fast adjustments to sudden perturbations of the unseen hand (Kurtzer et al., 2014; Marsden et al., 1977), felt position must be important for the relationship between spatial vision and motor control.

One might expect that one’s extensive experience with one’s own limbs would keep spatial vision aligned with one’s notion about the locations of parts of one’s body, and in particular of one’s hand (Henriques & Cressman, 2012; Shadmehr et al., 2010). However, it has been shown that there are systematic idiosyncratic differences between seen and felt space (Kuling et al., 2016; Rincon-Gonzalez et al., 2011; Smeets et al., 2006; van Beers et al., 1998). The direction and extent of this visuo–proprioceptive mismatch depend on the specifics of the task (Kuling et al., 2017), which might give the impression that the mismatch is a random artifact. However, the idiosyncratic mismatch is systematic: It does not change when the arm is held still for many minutes (Rana et al., 2020) and is even reproduced when the same task is repeated a month later (Kuling et al., 2016).

Normally, the large and persistent idiosyncratic mismatches between vision and proprioception are not a problem because we see our hand approach objects that we want to manipulate. Under such circumstances, seen and felt space appear to be attuned (see Box 1 for an explanation of the way we use the term attuned). This is presumably not because we use the relative positions of the hand and the target to control our movements; even when guiding a cursor to a target on a screen, relative positions appear to make a very small contribution (see “Introduction to Spatial Vision for Action”). It is probably because people use visual feedback to attune their movements to compensate for idiosyncratic errors. This attuning is similar to the way in which people adapt to experimentally imposed errors (van der Kooij et al., 2013), which has been studied extensively using the prism adaptation paradigm (Redding & Wallace, 2002). Such attuning is discussed in more detail in the next section. Importantly, when the hand is hidden from view and no feedback is provided about the relationship between seen and felt space, the attuning gradually disappears and the idiosyncratic matching errors reappear (Smeets et al., 2006).

Maybe the ability to quickly and temporarily attune vision with proprioception is what makes people so proficient at using tools; it makes it possible to consider the tool as an extension of the body (Maravita & Iriki, 2004). One might have ways of feeling the position of the tip of a tool when moving a physical object like a fork (Turvey & Carello, 2011), but people can also proficiently control the movement of a cursor on a computer screen as if it were part of their body (Brenner & Smeets, 2003) as long as certain relationships between directions are maintained (Brenner et al., 2020). Being able to feel the positions of relevant parts of tools would explain why people generally look at the target (or at obstacles; Johansson et al., 2001) rather than at their hand or even a cursor that they are moving to the target by moving their hand. However, although they seldom direct their gaze at their hand or the cursor, they do use visual information to guide the hand or cursor to the target, relying on information from peripheral vision to do so (Cámara et al., 2018). The lower resolution of peripheral vision is apparently not a problem (Cámara et al., 2020), which is in line with other observations that goal-directed movements do not necessarily require a high spatial resolution to be controlled precisely (Mann et al., 2007).

Attuning Behavior to Sensory Inconsistencies

It has long been known that people can attune their behavior to experimentally imposed changes in the relationship between the images on their retinas and the felt position of their body, such as occurs when people are forced to look at the world through prisms (Harris, 1965; Hay & Pick, 1966). Attuning to some unusual relationships is easier than attuning to others (Lillicrap et al., 2013). This is probably because some have logical anatomical interpretations (van den Dobbelsteen et al., 2003), but many details are not yet fully understood. The previously mentioned lack of permanent recalibration (Smeets et al., 2006) does not mean that exposure to a new relationship never has a long-lasting effect because relearning an unusual relationship sometimes progresses faster than the initial learning (Criscimagna-Hemminger & Shadmehr, 2008; Kitago et al., 2013; van der Kooij et al., 2016; Vaswani et al., 2015). However, even people who regularly switch between glasses and contact lenses (that give rise to different degrees of image magnification) do not immediately attune their behavior when they switch between the two (Schot et al., 2012).

Attuning one’s behavior to the circumstances occurs in many different ways. It can be intentional or automatic (Maresch et al., 2021). It can be a response to perceived spatial errors with respect to the target (van Beers, 2009) or with respect to the planned movement (Tseng et al., 2007). It can even be based on success rather than on perceived spatial errors (Kuling et al., 2019; Leow et al., 2018; Mehler et al., 2017; van der Kooij & Smeets, 2019). Changes across subsequent movements are often smaller if one can see the whole movement than if one can only see the outcome (Bernier et al., 2005), presumably because such changes are based on errors in the outcome of the movement and continuous vision of the hand reduces such errors. The many ways to attune behavior suggest that multiple mechanisms are involved, probably each with its own learning rate and rate of decay, so as time passes the behavior may come to be governed by different mechanisms without any evident change in performance (McDougle et al., 2015; Smith et al., 2006).
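
One widely used way to formalize the idea of parallel mechanisms with different learning and decay rates is a two-state model in the spirit of Smith et al. (2006). The sketch below uses arbitrary parameter values of our own choosing, not values estimated in that or any other study:

```python
# Two-state adaptation sketch in the spirit of Smith et al. (2006): a "fast"
# process that learns quickly but retains poorly, and a "slow" process that
# learns slowly but retains well. All parameter values are arbitrary
# illustration values, not estimates from any experiment.
A_FAST, B_FAST = 0.60, 0.20   # retention and learning rate of the fast process
A_SLOW, B_SLOW = 0.99, 0.02   # retention and learning rate of the slow process

def simulate(perturbations):
    fast = slow = 0.0
    adaptation = []
    for p in perturbations:
        error = p - (fast + slow)   # the part of the perturbation not yet compensated
        fast = A_FAST * fast + B_FAST * error
        slow = A_SLOW * slow + B_SLOW * error
        adaptation.append(fast + slow)
    return adaptation

# 100 trials with a constant perturbation followed by 50 trials without it:
adaptation = simulate([1.0] * 100 + [0.0] * 50)
print(f"end of learning: {adaptation[99]:.2f}")
print(f"first trial after removal: {adaptation[100]:.2f}")
print(f"after 50 washout trials: {adaptation[-1]:.2f}")
# The compensation initially drops quickly (the fast process unlearns) but a
# residue carried by the slow process decays only gradually.
```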

Visual localization is unreliable near the time of saccades, partly because people cannot reliably synchronize retinal information with eye orientations (Dassonville et al., 1992; Maij et al., 2011). During saccades, the image shifts so quickly across the retina that details are blurred and masked by what is visible after the saccade (Duyck et al., 2018; Maij et al., 2012), and even the motion itself is too fast to see (Castet & Masson, 2000). There is a lot of interest in spatial vision near the moment of saccades, mostly driven by surprise that the world looks static despite its image shifting across the retina (Bisley et al., 2020; Burr & Morrone, 2012; Colby et al., 1995). However, if vision consists of actively selecting relevant information to process at each instant (see “Vision Is Very Selective”) and not of attempting to build a consistent internal representation of the outside world, changing gaze should simply be seen as one of the tools for selecting relevant information. It seems pointless to constantly shift representations of earlier retinal images to combine them into a single representation of the scene when information can be obtained from any position in the scene by simply looking there (O’Regan & Noë, 2001).

Saccades themselves, our most common movements, are often overlooked as actions because their goal is usually to help us to obtain information for other actions. Due to the poor localization around the moment of a saccade, a slight shift of a target during a saccade is not noticed. Despite not being noticed, such a shift obviously gives rise to an error in fixating the target. Repeatedly shifting the target to a position further (or closer) than its initial position during the saccade results in subsequent saccades being lengthened (or shortened) so that they end closer to the target (Pélisson et al., 2010). Such repeated shifts of target position during a saccade sometimes influence judgments of that position before the saccade, while the target is still in the periphery (Collins et al., 2007; Gremmler et al., 2014; Kröller et al., 1999), although they do not always do so (Hernandez et al., 2008; Schnier & Lappe, 2012). That precise information gathered after a saccade can calibrate judgments made before the saccade, when relying on peripheral vision, has also been found for judgments of shape (Herwig et al., 2015) and size. Increasing a target’s size during saccades toward the target, such that it was larger when viewed in central vision than when viewed in the periphery, led to its size being overestimated when viewed in the periphery (Bosco et al., 2015; Valsecchi & Gegenfurtner, 2016). In accordance with the notion that consistency between judgments is not imposed (see “Vision Is Not Always Consistent”), the relationship between changes in judged size and changes in judged position was not consistent across adapting procedures (Bosco et al., 2020; Pressigout et al., 2020; Valsecchi et al., 2020). Thus, spatial vision can be attuned through experience (Zimmermann & Lappe, 2016).

Movements can be attuned in various ways, and doing so can even influence perceptual judgments. There appear to be many mechanisms involved, and at least some mechanisms return to specific nonveridical values when all feedback is removed, so presumably the mapping between the retina (spatial vision) and the limbs (action) has some intrinsic structure and cannot be arbitrarily reassigned.

Selecting Information to Rely On

Which spatial attributes can one best rely on to guide an action? When two related attributes provide redundant information, it is not necessary to use them both. Which should one use? Can one also totally neglect an attribute? Which attributes one relies on might depend on how quickly and reliably various attributes can be used to guide the action. It might also depend on how and where information about the attribute can be obtained. People direct their gaze at relevant items (Hayhoe & Ballard, 2005; Land, 2006) and move their eyes in a way that optimizes the processing of the retinal image (Intoy & Rucci, 2020; Rucci & Victor, 2015), but gaze can only be directed at one place at a time, so one may need to choose what information to prioritize (a choice that presumably takes into account the extent to which judgments deteriorate with retinal eccentricity). Consequently, some attributes that might appear to be very useful may not be used to guide movements, or they may be judged so unreliably that the visual judgment has to be combined with expectations.

An example of a visual attribute that one might expect to be considered, but for which there is ample evidence that it is not, is acceleration (Lee et al., 1983). In the case of acceleration by gravity, people rely on experience rather than judging acceleration visually (Jörges & López-Moliner, 2017; McIntyre et al., 2001), presumably because prior experience provides a more reliable estimate than the instantaneous visual information. However, when faced with less familiar accelerations, people still neglect the visual information. It may simply not be worth considering visual judgments of acceleration because it takes so much time to obtain a reliable estimate that it is better to only consider the position and velocity (Brenner et al., 2016).

The interception of moving targets illustrates how visual information that is essential for guiding an action is combined with expectations based on the actor’s recent experience: performance relies not only on the target’s speed in the current trial but also on its speed in the previous trial (de Lussanet et al., 2001). A similar reliance on recent experience has also been found in grasping, where performance is sometimes influenced by the previously felt distance (Volcic & Domini, 2018) and shape (Cuijpers et al., 2008).

Knowing which attributes are used to guide various aspects of one’s actions may seem to be relevant for only a small selection of specialists in motor control. However, ignoring such selection can have far-reaching consequences for how studies are interpreted, especially if there are substantial inconsistencies between related attributes in those studies. Such substantial inconsistencies are likely to occur in studies involving visual illusions (see “Vision Is Not Always Consistent”). The next two sections provide examples of the importance of considering which attributes guide certain actions. The first combination of attribute and action is the use of judged size to guide reaching out to grasp a static object. The second is the use of judged motion when intercepting a moving target.

Size and Grasping

The reach-to-grasp movement provides a nice example of how misjudging the attributes that are used can lead to wrong conclusions (for a more extensive analysis of the literature on this issue, see Smeets et al., 2019). In an influential study, participants were presented with two objects, one of which was surrounded by small items and the other by large items (the Ebbinghaus illusion; Aglioti et al., 1995). When the two objects were matched in apparent size (rather than physical size), the maximal grip aperture when reaching out to grasp them was different. When the objects were matched in physical size, the effect of the surrounding items on the maximal grip aperture was about half the effect on the judged sizes (Aglioti et al., 1995, Figure 5). The authors considered these results to show that the attribute size is judged separately for perception and action. But is this what they show?

Various authors have discussed how experimental details such as the continuous visibility of hand and target may have influenced the result. Since the target object and the hand were continuously visible during the action, and maximum grip aperture occurs when the hand is close to the target, the effect of the illusion on the perceived size of the target might have decreased as the hand approached the target. This has been interpreted in terms of using different information for planning and control (Glover & Dixon, 2002; but see Smeets et al., 2003). Various authors have also discussed that one can expect an approximately 20% smaller effect of any illusion on maximum grip aperture than on perceptual judgments, because increasing object size generally only results in an increase in grip aperture of about 80% of the increase in size (Smeets & Brenner, 1999). Scaling the illusion effects to compensate for this can change the conclusion (Hesse et al., 2016).
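
A hypothetical worked example of this scaling argument (the numbers are ours, chosen only for illustration): if surrounding items change the judged size by 2 mm, and grip aperture normally increases by about 0.8 mm for every 1 mm increase in object size, then full reliance on the judged size predicts a grip-aperture effect of only about 1.6 mm.

```latex
% Hypothetical numbers, only to illustrate the scaling argument:
\Delta_{\text{perceived size}} = 2\ \text{mm},
\qquad
\frac{\Delta(\text{grip aperture})}{\Delta(\text{object size})} \approx 0.8
\qquad\Longrightarrow\qquad
\Delta_{\text{grip aperture}}^{\text{expected}} \approx 0.8 \times 2\ \text{mm} = 1.6\ \text{mm}
```

Comparing the raw 1.6 mm with the 2 mm perceptual effect, without applying this scaling, would wrongly suggest that the action is less affected by the illusion than the perceptual judgment is.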

The discussion about these issues has resulted in experiments being performed in which online control was prevented by blocking vision of both the hand and the target during the action. Comparing the influence of the Ebbinghaus illusion on size judgments with its influence on maximum grip aperture in such experiments and considering the change in grip aperture with target size in the analysis shows that the illusion can influence grip aperture in accordance with its effect on judged size (Kopiske et al., 2016). Thus, the idea that size is judged fundamentally differently for perceptual judgments than for planning grasping actions is wrong (also see e.g., Uccelli et al., 2019). This conclusion is of course only valid if the attribute that is being considered—size—is used to guide the opening of the hand to grasp the object.

A hint as to why it took so long to establish that illusions influence actions in the same way they influence perceptual judgments comes from the way early studies using the Ponzo illusion were received. These studies showed that the illusion does not influence peak grip aperture (Brenner & Smeets, 1996; Jackson & Shaw, 2000), and they are frequently cited in support of the idea that action is not susceptible to illusions, although the same studies clearly showed that the build-up of the grip and lifting forces is affected by the illusion, even if grip aperture is not. An alternative interpretation is that judged object size does not determine how wide the hand is opened in grasping, so an illusion that influences judged size influences lifting forces but not the grip aperture when reaching to grasp the object. It is generally assumed that grip opening is guided by judged size (Jeannerod, 1988), but Smeets and Brenner (1999) formulated an alternative view on grasping in which the relevant attribute is the position of the contact points. That grip aperture is guided by judged positions rather than judged size is consistent with the variability in grip aperture being independent of the object’s size (Ganel et al., 2008; Smeets & Brenner, 2008): if people do not rely on judgments of size, one would not expect the variability to scale with size following Weber’s law (but see Bruno et al., 2016, or Utz et al., 2015, for an alternative explanation).
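The logic of this argument can be illustrated with a minimal simulation (the noise levels below are assumed for illustration, not taken from any dataset): if the aperture were derived from a judged size with Weber-like noise, its variability would grow in proportion to object size, whereas if it were the distance between two independently judged contact positions with constant positional noise, its variability would be roughly the same for all sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = np.array([20.0, 40.0, 80.0])   # object sizes in mm (hypothetical)
weber_fraction = 0.05                  # assumed Weber fraction for judged size
position_sd = 2.0                      # assumed SD of each judged contact position (mm)
n = 100_000

for size in sizes:
    # Size-based control: aperture noise scales with object size (Weber's law).
    aperture_from_size = size + rng.normal(0.0, weber_fraction * size, n)
    # Position-based control: aperture is the distance between two noisy positions.
    thumb = rng.normal(0.0, position_sd, n)
    finger = rng.normal(size, position_sd, n)
    aperture_from_positions = finger - thumb
    # Variability grows with size in the first case but stays constant in the second.
    print(size, aperture_from_size.std().round(2), aperture_from_positions.std().round(2))
```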

That grip aperture is guided by judged positions rather than judged size is also the only way to explain why grip aperture when grasping an object changes in accordance with the anticipated aftereffects for the individual digits after adapting the digits independently to spatial offsets in opposite directions (prism adaptation) while making pointing movements (Schot et al., 2017). That reaching to grasp an object is determined by the movements of the individual digits, rather than by the movement of the hand and the closing of the grip, can also explain why the digits’ movements can depend on the orientation of an object’s surfaces even if the final grip is the same (see Figure 3A, B). The finding that individual digits move quite similarly when grasping and pushing if the constraints on their movements are similar (see Figure 3C, D) also suggests that grasping is controlled based on positions rather than size.
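If the grip is treated as the distance between two independently controlled digit positions, the prediction for the prism-adaptation experiment mentioned above follows directly. The sketch below (with arbitrary, made-up aftereffect sizes) contrasts this prediction with that of a controller that sets the aperture on the basis of judged object size.

```python
# Schematic prediction for opposite prism adaptation of thumb and index finger.
# Aftereffect sizes are arbitrary illustrative values, not measured data.

thumb_aftereffect = -5.0    # mm, thumb shifted one way after adaptation
finger_aftereffect = +5.0   # mm, index finger shifted the other way

# Position-based control: each digit carries its own aftereffect, so the aperture
# (finger position minus thumb position) changes by the difference between the shifts.
aperture_change_positions = finger_aftereffect - thumb_aftereffect   # +10 mm

# Size-based control: adapting the digits does not change judged object size,
# so no change in grip aperture is expected.
aperture_change_size = 0.0

print(aperture_change_positions, aperture_change_size)
```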

There is no clear neural evidence indicating whether size or positions are used in the control of grip aperture. Differences in brain activation have been found between grasping and similar actions that do not involve closing one’s grip (Di Bono et al., 2015; Fattori et al., 2010). Such differences are sometimes interpreted as evidence for separate neuronal processing of the grip, but those studies did not match the tasks that are compared in terms of the constraints on the movements of the digits.

Figure 3. Schematic representation of the paths of the index finger and thumb when grasping simple objects. When grasping trapezoidally shaped objects (A, B), the digits’ paths to the same positions depend on the object’s orientation, despite the fact that the final grip is identical (Kleinholdermann et al., 2007). This is presumably the result of it being advantageous to approach surfaces orthogonally (Smeets & Brenner, 1999). When grasping an object, each digit ultimately has to push onto the surface with which it makes contact (C). When asked to push the object away with one digit (D) that digit moves in much the same manner as it does during grasping (Smeets et al., 2010).

Guiding grip aperture by judged positions is not always possible. For instance, when one cannot see the object that is grasped, one might have to switch to a strategy in which grip aperture is based on judged size. Evidence that this happens has been found using the Müller-Lyer illusion: the illusion does not influence peak grip aperture when the hand and target are visible throughout the movement, but it does influence the aperture in open-loop grasping (Bruno & Franz, 2009; Foster et al., 2012). Presumably, real-time information about egocentric positions is preferable, but if such information is lacking because vision is blocked, one switches to remembered allocentric information. Another example of a task not being performed in the usual manner without direct visual feedback is forgetting to restore the original driving direction when asked to turn a steering wheel to change lanes when driving (Wallis et al., 2002). Similarly, the movements that are made in pantomimed grasping are very different from those of real grasping (Goodale et al., 1994a).

The fact that the Ebbinghaus illusion normally influences grasping to some extent (Aglioti et al., 1995; Pavani et al., 1999) might appear to suggest that judgments of size must be relevant for grasping. However, besides influencing judgments of size, the Ebbinghaus illusion also influences judged positions. It does so in accordance with the changes in judged size (Smeets & Brenner, 2019). For some reason, the Ebbinghaus illusion lacks the inconsistency between size and position that characterizes other illusions (Gillam & Chambers, 1985; Smeets et al., 2020). Importantly, the latter illusions do not influence the grip aperture when grasping normally (Smeets et al., 2020).

Interacting With Moving Objects

A second example of an inconsistency between attributes that has led to a lot of confusion is between judgments of motion and judgments of (changes in) position. An effective way to intentionally create such an inconsistency is with a moving Gabor stimulus. This stimulus influences judged motion to a much greater extent than it does judged displacement (de la Malla et al., 2018; Movie 6). It can therefore be used to study how judgments of motion guide people’s actions. This is obviously an artificial stimulus, but the idea of texture on an object moving differently than the object as a whole, and thus influencing its apparent speed, is less contrived than it may seem. It can happen when a textured ball rolls across a surface (de la Malla et al., 2017) or spins as it flies through the air (Casanova et al., 2015; Shapiro et al., 2010). Moreover, although the judgments about what is happening in the scene are inconsistent with a physical scene, the scene does not look “wrong” in any way.

Movie 6. Three Gabor patches are rotating together in a clockwise direction within a ring that is slowly contracting. The visible contraction of the ring does not prevent the Gabor patches from appearing to move away from each other.

Moving Gabor stimuli have been used to study how visual information guides the human eye or hand toward moving objects. One way in which this has been done is by taking a Gabor with a laterally moving vertical grating and moving it up and down along a path that is tilted so that the Gabor appears to move vertically when viewed while fixating off to the side (Figure 4; Lisi & Cavanagh, 2015; Shapiro et al., 2010). Despite appearing to move vertically, when the participants made saccades to the Gabor, the landing points followed the true position (Lisi & Cavanagh, 2015). This was interpreted as showing that actions are guided by instantaneous information while perceptual judgments integrate information across time. An alternative interpretation is that the direction of motion was misjudged, and that this judgment of direction was inconsistent with the judgments of position. Positions were also systematically misjudged, but only by a modest, constant amount (see De Valois & De Valois, 1991). In effect, the authors were comparing reports of a judged direction of motion with saccades to an anticipated position.

Figure 4. Schematic representation of the study by Lisi and Cavanagh (2015), with two possible interpretations of the data. (A) A Gabor stimulus is a grating of which the contrast is modulated by a two-dimensional Gaussian profile. If one is fixating to the side (+), and the grating is moving to the right within the Gaussian, the Gaussian has to move leftward as it moved downward for the Gabor as a whole to appear to move straight down. (B) Combining a bias in the perceived position of the Gabor in the direction of the motion of the grating (red dot) with the Gabor’s apparent downward motion (blue arrow) to anticipate where it will be a short duration in the future (when the saccade ends) results in a saccade endpoint that shifts consistently with the true future position of the Gabor (black arrow). (C) Thus, when describing what they see, participants describe a Gabor moving up and down, presumably at the average horizontal position, but when making saccades to the Gabor the saccade endpoints follow the instantaneous horizontal position.
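One way to make the alternative interpretation in Figure 4 concrete is with a small numerical sketch (all values are hypothetical and chosen only for illustration): the Gabor’s physical path is tilted, the grating adds a roughly constant bias to the judged position and a much larger bias to the judged direction of motion, so a report of the trajectory describes vertical motion while saccades that combine the slightly biased instantaneous position with the misjudged motion still track the true tilted path.

```python
import numpy as np

# Hypothetical numbers illustrating the alternative interpretation of Figure 4.
speed_down = 10.0      # deg/s, vertical speed of the Gabor envelope
path_tilt = 0.3        # horizontal envelope displacement per unit vertical displacement
position_bias = 0.5    # deg, constant shift of the judged position toward the grating motion
saccade_delay = 0.15   # s, interval between the position judgement and saccade landing

times = np.linspace(0.0, 0.4, 5)
true_x = path_tilt * speed_down * times   # horizontal position along the tilted physical path

# Judged instantaneous position: the true position plus a small constant horizontal bias.
judged_x = true_x + position_bias
# Judged motion: straight down, so extrapolating it adds no further horizontal displacement.
anticipated_x = judged_x + 0.0 * saccade_delay

# The anticipated (saccade) positions track the true tilted path up to a constant offset,
# whereas a report of the trajectory as a whole describes it as vertical (constant x).
print(np.round(true_x, 2))
print(np.round(anticipated_x, 2))
```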

The alternative interpretation is consistent with several other studies that have been conducted with similar stimuli. In one case (Lisi & Cavanagh, 2017), the same stimulus was used except that a brief flash was superimposed on the Gabor when it reached one of the reversal points. Participants were asked to saccade or move their finger to where the flash had been. Saccades that had a short latency were hardly influenced by the misjudged direction of motion; presumably they were directed to the instantaneous position of the flash immediately after the flash. Misjudging the direction of motion did influence the finger, presumably because the finger had to be guided to the remembered position of the flash. Misjudging the motion probably influenced the remembered position because the latter was inconsistent with a combination of the judged motion and the instantaneous position at later moments. In accordance with this interpretation, misjudging the motion also influenced saccades that had a long latency to some extent (as it did saccades that were intentionally delayed; Massendari et al., 2018).

That the distinction between judgments of position and of motion is critical, rather than that between perception and action, is evident from yet another study using the same stimulus. In this study, the Gabor disappeared once a saccade was made and participants had to make perceptual judgments about the place from which it had disappeared (Nakayama & Holcombe, 2020). In accordance with the Gabor not being present to influence the remembered position, not only did saccades land near the actual position at which the target had been, but that position was indicated correctly. In accordance with positions also being influenced to some extent in these Gabor stimuli, both judged positions and saccadic endpoints were slightly shifted in the direction in which the grating was moving (Ueda et al., 2018).

In another experiment involving a moving Gabor, participants were asked to tap on the Gabor when it reached a specified position that participants were fixating with their eyes (de la Malla et al., 2018). In this case, the motion of the grating was either in the same direction as or in the opposite direction from the motion of the Gabor as a whole, so the Gabor appeared to move faster or more slowly toward the fixation point rather than appearing to move in a different direction. Separate experiments were used to determine how the moving grating influenced the judged velocity of the Gabor, and its judged position 100 ms before it reached the fixation point (movements are adjusted to the latest visual information until about 100 ms before they end; Brenner & Smeets, 1997, 2018b). In these experiments the Gabor disappeared 100 ms before it reached the fixation point, and judgments were made about the speed and position at that time. Combining these judgments gives a quantitative prediction of the anticipated tapping error. The actual tapping errors were very similar to these predictions. This illustrates the importance of considering not only which attributes of spatial vision are used to guide actions but also when and how they are used.
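The logic of this prediction can be written out explicitly. The following minimal sketch (with made-up numbers rather than the values measured in the study) combines a misjudged position and a misjudged speed over the final 100 ms to obtain the anticipated tapping error.

```python
# Sketch of how a predicted tapping error can be constructed from perceptual judgments.
# Numerical values are illustrative only.

remaining_time = 0.1     # s, interval during which no further visual updating occurs
position_error = 0.2     # cm, misjudgement of the target's position 100 ms before the tap
speed_error = 3.0        # cm/s, misjudgement of the target's speed at that moment

# The tap is aimed at where the (misjudged) target is expected to be at contact time,
# so the miss is the position error plus the speed error accumulated over the last 100 ms.
predicted_tapping_error = position_error + speed_error * remaining_time   # 0.5 cm
print(predicted_tapping_error)
```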

Beyond Visual Illusions

Visual illusions are a useful tool for trying to identify the visual attributes that guide our actions, but they are obviously not the only tool for identifying the information that is used, and other tools are better suited to exploring how such information is used. Manipulating the available visual information at different times can help determine when information is used (Dessing et al., 2009; Droll & Hayhoe, 2007; López-Moliner et al., 2010; Whiting & Sharp, 1974) and how long it takes to use it (Veerman et al., 2008). Modeling how visual information guides movements, taking strategic choices and the relevant biomechanics into account, can prevent inadvertently attributing inevitable aspects of the way movements are executed to errors in spatial vision (Brenner & Smeets, 2018a; Dessing et al., 2002; de Lussanet et al., 2002; Smeets & Brenner, 1999). This is important because even systematic errors in performance do not necessarily reflect errors in the visual information or in the commands to the muscles that move the limbs. They may be tactical compromises (Harris, 1995; Liu & Todorov, 2007), such as taking the need to maintain balance into account when walking (Barton et al., 2019). All of this implies that when relying on models one must keep in mind that if the model is wrong, the interpretation in terms of spatial vision is likely to be wrong as well, because the model determines how the attribute contributes to the action (de Lussanet et al., 2004; Pinter et al., 2012; Smeets & Brenner, 2002; Smeets et al., 2002a, 2003). This may make relying on models less attractive, but it is probably impossible to even think about how spatial vision guides actions without having some model in mind.

When the question is not what attributes are used but how those attributes are judged, a popular method is to examine how people respond to cue conflicts. This technique has been used extensively in experiments involving depth. Contrary to suggestions that binocular depth cues have a special role in guiding movements (e.g., Bruggeman et al., 2007; Servos et al., 1992), studies using specially made stimuli containing cue conflicts have shown that the extent to which cues are used depends on their precision (Camponogara & Volcic, 2019; Knill, 2005; Louw et al., 2007), on whether the information that they provide is consistent with the haptic feedback (Cesanek & Domini, 2019; van Beers et al., 2011), and on how quickly they provide information (van Mierlo et al., 2009). Precision (Knill, 2005) and accuracy (Cesanek et al., 2020) also influence perceptual judgments. The influence of how quickly cues provide information is probably only important in guiding actions, where a small temporal advantage is likely to result in movements primarily being adjusted on the basis of the faster cue when there is no cue conflict.
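The dependence on precision is usually formalized as inverse-variance weighting. The following minimal sketch (with arbitrary numbers, not values from the cited studies) shows how the less variable cue comes to dominate the combined estimate.

```python
import numpy as np

def combine_cues(estimates, sds):
    """Combine single-cue estimates, weighting each by its inverse variance."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(sds, dtype=float) ** 2
    weights = (1.0 / variances) / (1.0 / variances).sum()
    combined = np.dot(weights, estimates)
    combined_sd = np.sqrt(1.0 / (1.0 / variances).sum())
    return combined, combined_sd

# Hypothetical slant estimates (deg) from binocular disparity and texture, with their SDs.
combined, combined_sd = combine_cues([30.0, 36.0], [2.0, 4.0])
print(round(combined, 1), round(combined_sd, 2))   # disparity dominates: 31.2 deg, SD ~1.79
```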

Epilogue

Ironically, the very studies that led to the realization that it is worthwhile to consider what visual information is used to guide human actions also held back research on this topic to some extent. By identifying systematic differences in the consequences of damage to dorsal and ventral regions of the brain, and in how illusions influence tasks that were chosen to represent perception or action, it has become widely accepted that perception and action rely on separate processing of visual information (reviewed in Goodale, 2011; Milner & Goodale, 2008; but see Schenk & McIntosh, 2010). This work has introduced important distinctions, such as the fact that action is primarily guided by egocentric, instantaneous information about positions and motion. However, the overarching notion of separate processing of visual information for perception and for action has led many researchers to explain all discrepancies that they observe as a result of this dichotomy, rather than considering what one could learn from them. Abandoning both this dichotomy and the notion of consistency between spatial attributes provides the opportunity to use situations in which different attributes are affected in different ways to determine which attribute is used for a given task and how, rather than just treating such differences as showing that not all tasks use the same information. This article described how doing so helped develop the idea that grip aperture is guided by positions rather than object size, and explained how motion information is used to guide the hand when intercepting a moving target. Irrespective of whether this interpretation of these particular examples is correct, the approach of trying to understand how specific actions are guided by various kinds of (visual) information could be very fruitful.

People are reluctant to accept the compelling evidence that humans do not perceive the world around them correctly and in fine detail. The reason that people are so confident about the consistency of what they see is probably the constant reinforcement by the experience that they can rely on spatial vision to guide their actions. This ability is indeed amazing. Unless you have some disease, or the circumstances are very unusual (for instance you are in the dark or intoxicated, or for some reason need to look elsewhere), you probably have no difficulty placing your fingers at suitable positions on an object that you want to manipulate or placing your foot at suitable places when walking or running. This gives the impression that you see the world as it is and act on it. However, it is clearly not that simple. Spatial vision is used in very clever ways to make this possible, and there is still much to discover about this exciting topic.

References

  • Aglioti, S., DeSouza, J. F., & Goodale, M. A. (1995). Size-contrast illusions deceive the eye but not the hand. Current Biology, 5, 679–685.
  • Aivar, M. P., Hayhoe, M. M., Chizk, C. L., & Mruczek, R. E. (2005). Spatial memory and saccadic targeting in a natural task. Journal of Vision, 5, 177–193.
  • Anstis, S., Verstraten, F. A., & Mather, G. (1998). The motion aftereffect. Trends in Cognitive Sciences, 2, 111–117.
  • Barton, S. L., Matthis, J. S., & Fajen, B. R. (2019). Control strategies for rapid, visually guided adjustments of the foot during continuous walking. Experimental Brain Research, 237, 1673–1690.
  • Bernier, P., Chua, R., & Franks, I. M. (2005). Is proprioception calibrated during visually guided movements? Experimental Brain Research, 167, 292–296.
  • Bisley, J. W., Mirpour, K., & Alkan, Y. (2020). The functional roles of neural remapping in cortex. Journal of Vision, 20(9), 1–15.
  • Blackmore, S. J., Brelstaff, G., Nelson, K., & Trościanko, T. (1995). Is the richness of our visual world an illusion? Transsaccadic memory for complex scenes. Perception, 24, 1075–1081.
  • Blake, A. (1992). Computational modelling of hand–eye coordination. Philosophical Transactions of the Royal Society B, 337, 351–360.
  • Bosco, A., Lappe, M., & Fattori, P. (2015). Adaptation of saccades and perceived size after trans-saccadic changes of object size. Journal of Neuroscience, 35, 14448–14456.
  • Bosco, A., Rifai, K., Wahl, S., Fattori, P., & Lappe, M. (2020). Trans-saccadic adaptation of perceived size independent of saccadic adaptation. Journal of Vision, 20(7), Article 19.
  • Bourgeois, J., Farnè, A., & Coello, Y. (2014). Costs and benefits of tool use on the perception of reachable space. Acta Psychologica, 148, 91–95.
  • Braddick, O., & Atkinson, J. (2013). Visual control of manual actions: Brain mechanisms in typical development and developmental disorders. Developmental Medicine and Child Neurology, 55 (Suppl. 4), 13–18.
  • Brenner, E., Abalo, I., Estal, V., Schootemeijer, S., Mahieu, Y., Veerkamp, K., Zandbergen, M., van der Zee, T., & Smeets, J. B. J. (2016). How can people be so good at intercepting accelerating objects if they are so poor at visually judging acceleration? i-Perception, 7(1), 1–13.
  • Brenner, E., de Graaf, M. L., Stam, M. J., Schonwetter, M., Smeets, J. B. J., & van Beers, R. J. (2020). When is moving a cursor with a computer mouse intuitive? Perception, 49, 484–487.
  • Brenner, E., & Smeets, J. B. J. (1996). Size illusion influences how we lift but not how we grasp an object. Experimental Brain Research, 111, 473–476.
  • Brenner, E., & Smeets, J. B. J. (1997). Fast responses of the human hand to changes in target position. Journal of Motor Behavior, 29, 297–310.
  • Brenner, E., & Smeets, J. B. J. (2003). Fast corrections of movements with a computer mouse. Spatial Vision, 16, 365–376.
  • Brenner, E., & Smeets, J. B. J. (2006). Two eyes in action. Experimental Brain Research, 170, 302–311.
  • Brenner, E., & Smeets, J. B. J. (2015). Quickly making the correct choice. Vision Research, 113, 198–210.
  • Brenner, E., & Smeets, J. B. J. (2018a). Continuously updating one’s predictions underlies successful interception. Journal of Neurophysiology, 120, 3257–3274.
  • Brenner, E., & Smeets, J. B. J. (2018b). Depth perception. In J. T. Wixted (Ed.), Stevens’ handbook of experimental psychology and cognitive neuroscience (4th ed., pp. 1–30). John Wiley & Sons, Inc.
  • Brenner, E., & van Damme, W. J. (1999). Perceived distance, shape and size. Vision Research, 39, 975–986.
  • Brenner, E., van den Berg, A. V., & van Damme, W. J. (1996). Perceived motion in depth. Vision Research, 36, 699–706.
  • Brouwer, A. M., & Knill, D. C. (2007). The role of memory in visually guided reaching. Journal of Vision, 7(5), Article 6.
  • Brouwer, A. M., & Knill, D. C. (2009). Humans use visual and remembered information about object location to plan pointing movements. Journal of Vision, 9(1), Article 24.
  • Bruggeman, H., Yonas, A., & Konczak, J. (2007). The processing of linear perspective and binocular information for action and perception. Neuropsychologia, 45, 1420–1426.
  • Bruno, N., & Franz, V. H. (2009). When is grasping affected by the Müller-Lyer illusion? A quantitative review. Neuropsychologia, 47, 1421–1433.
  • Bruno, N., Uccelli, S., Viviani, E., & de’Sperati, C. (2016). Both vision-for-perception and vision-for-action follow Weber’s law at small object sizes, but violate it at larger sizes. Neuropsychologia, 91, 327–334.
  • Burr, D. C., & Morrone, M. C. (2012). Constructing stable spatial maps of the world. Perception, 41, 1355–1372.
  • Cámara, C., de la Malla, C., López-Moliner, J., & Brenner, E. (2018). Eye movements in interception with delayed visual feedback. Experimental Brain Research, 236, 1837–1847.
  • Cámara, C., López-Moliner, J., Brenner, E., & de la Malla, C. (2020). Looking away from a moving target does not disrupt the way in which the movement toward the target is guided. Journal of Vision, 20(5), Article 5.
  • Camponogara, I., & Volcic, R. (2019). Grasping movements toward seen and handheld objects. Scientific Reports, 9, Article 3665.
  • Carello, C., Grosofsky, A., Reichel, F. D., Solomon, H. Y., & Turvey, M. T. (1989). Visually perceiving what is reachable. Ecological Psychology, 1, 27–54.
  • Casanova, R., Borg, O., & Bootsma, R. J. (2015). Perception of spin and the interception of curved football trajectories. Journal of Sports Science, 33, 1822–1830.
  • Castet, E., & Masson, G. S. (2000). Motion perception during saccadic eye movements. Nature Neuroscience, 3, 177–183.
  • Cesanek, E., & Domini, F. (2019). Depth cue reweighting requires altered correlations with haptic feedback. Journal of Vision, 19(14), Article 3.
  • Cesanek, E., Taylor, J. A., & Domini, F. (2020). Sensorimotor adaptation and cue reweighting compensate for distorted 3D shape information, accounting for paradoxical perception–action dissociations. Journal of Neurophysiology, 123, 1407–1419.
  • Chapman, S. (1968). Catching a baseball. American Journal of Physics, 36, 868–870.
  • Colby, C. L., Duhamel, J. R., & Goldberg, M. E. (1995). Oculocentric spatial representation in parietal cortex. Cerebral Cortex, 5, 470–481.
  • Collins, T., Doré-Mazars, K., & Lappe, M. (2007). Motor space structures perceptual space: Evidence from human saccadic adaptation. Brain Research, 1172, 32–39.
  • Criscimagna-Hemminger, S. E., & Shadmehr, R. (2008). Consolidation patterns of human motor memory. Journal of Neuroscience, 28, 9610–9618.
  • Crowe, E. M., Bossard, M., & Brenner, E. (2021). Can ongoing movements be guided by allocentric visual information when the target is visible? Journal of Vision, 21(1), Article 6.
  • Cuijpers, R. H., Brenner, E., & Smeets, J. B. J. (2008). Consistent haptic feedback is required but it is not enough for natural reaching to virtual cylinders. Human Movement Science, 27, 857–872.
  • Cuijpers, R. H., Kappers, A. M., & Koenderink, J. J. (2000). Large systematic deviations in visual parallelism. Perception, 29, 1467–1482.
  • Cuijpers, R. H., Kappers, A. M., & Koenderink, J. J. (2002). Visual perception of collinearity. Perception and Psychophysics, 64, 392–404.
  • Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Roger (Eds.), Perception of space and motion (pp. 69–117). Academic Press.
  • Dassonville, P., Schlag, J., & Schlag-Rey, M. (1992). Oculomotor localization relies on a damped representation of saccadic eye displacement in human and nonhuman primates. Visual Neuroscience, 9, 261–269.
  • Day, B. M., Wagman, J. B., & Smith, P. J. (2015). Perception of maximum stepping and leaping distance: Stepping affordances as a special case of leaping affordances. Acta Psychologica, 158, 26–35.
  • de Brouwer, A. J., Smeets, J. B. J., Gutteling, T. P., Toni, I., & Medendorp, W. P. (2015). The Müller-Lyer illusion affects visuomotor updating in the dorsal visual stream. Neuropsychologia, 77, 119–127.
  • de Grave, D. D., Brenner, E., & Smeets, J. B. J. (2011). Using a stick does not necessarily alter judged distances or reachability. PLOS ONE, 6, Article e16697.
  • de la Malla, C., Brenner, E., de Haan, E. H. F., & Smeets, J. B. J. (2019). A visual illusion that influences perception and action through the dorsal pathway. Communications Biology, 2, Article 38.
  • de la Malla, C., Buiteman, S., Otters, W., Smeets, J. B. J., & Brenner, E. (2016). How various aspects of motion parallax influence distance judgments, even when we think we are standing still. Journal of Vision, 16(9), Article 8.
  • de la Malla, C., Smeets, J. B. J., & Brenner, E. (2017). Potential systematic interception errors are avoided when tracking the target with one’s eyes. Scientific Reports, 7, Article 10793.
  • de la Malla, C., Smeets, J. B. J., & Brenner, E. (2018). Errors in interception can be predicted from errors in perception. Cortex, 98, 49–59.
  • de Lussanet, M. H., Smeets, J. B. J., & Brenner, E. (2001). The effect of expectations on hitting moving targets: Influence of the preceding target’s speed. Experimental Brain Research, 137, 246–248.
  • de Lussanet, M. H., Smeets, J. B. J., & Brenner, E. (2002). Relative damping improves linear mass-spring models of goal-directed movements. Human Movement Science, 21, 85–100.
  • de Lussanet, M. H. E., Smeets, J. B. J., & Brenner, E. (2004). The quantitative use of velocity information in fast interception. Experimental Brain Research, 157, 181–196.
  • Dessing, J. C., Bullock, D., Peper, C. L., & Beek, P. J. (2002). Prospective control of manual interceptive actions: Comparative simulations of extant and new model constructs. Neural Networks, 15(2), 163–179.
  • Dessing, J. C., Oostwoud Wijdenes, L., Peper, C. L., & Beek, P. J. (2009). Adaptations of lateral hand movements to early and late visual occlusion in catching. Experimental Brain Research, 192(4), 669–682.
  • De Valois, R. L., & De Valois, K. K. (1991). Vernier acuity with stationary moving Gabors. Vision Research, 31, 1619–1626.
  • Di Bono, M. G., Begliomini, C., Castiello, U., & Zorzi, M. (2015). Probing the reaching-grasping network in humans through multivoxel pattern decoding. Brain and Behavior, 5(11), Article e00412.
  • Dixon, M. W., Wraga, M., Proffitt, D. R., & Williams, G. C. (2000). Eye height scaling of absolute size in immersive and nonimmersive displays. Journal of Experimental Psychology: Human Perception and Performance, 26, 582–593.
  • Droll, J. A., & Hayhoe, M. M. (2007). Trade-offs between gaze and working memory use. Journal of Experimental Psychology: Human Perception and Performance, 33, 1352–1365.
  • Duyck, M., Wexler, M., Castet, E., & Collins, T. (2018). Motion masking by stationary objects: A study of simulated saccades. i-Perception, 9(3), 1–11.
  • Ernst, M. O., & Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429–433.
  • Fath, A. J., & Fajen, B. R. (2011). Static and dynamic visual information about the size and passability of an aperture. Perception, 40, 887–904.
  • Fattori, P., Pitzalis, S., & Galletti, C. (2009). The cortical visual area V6 in macaque and human brains. Journal of Physiology Paris, 103, 88–97.
  • Fattori, P., Raos, V., Breveglieri, R., Bosco, A., Marzocchi, N., & Galletti, C. (2010). The dorsomedial pathway is not just for reaching: Grasping neurons in the medial parieto-occipital cortex of the macaque monkey. Journal of Neuroscience, 30, 342–349.
  • Feix, T., Bullock, I. M., & Dollar, A. M. (2014). Analysis of human grasping behavior: Correlating tasks, objects and grasps. IEEE Transactions on Haptics, 7, 430–441.
  • Fetsch, C. R., Deangelis, G. C., & Angelaki, D. E. (2010). Visual–vestibular cue integration for heading perception: Applications of optimal cue integration theory. European Journal of Neuroscience, 31, 1721–1729.
  • Fiehler, K., Wolf, C., Klinghammer, M., & Blohm, G. (2014). Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment. Frontiers in Human Neuroscience, 8, Article 636.
  • Foster, R. M., Kleinholdermann, U., Leifheit, S., & Franz, V. H. (2012). Does bimanual grasping of the Müller-Lyer illusion provide evidence for a functional segregation of dorsal and ventral streams? Neuropsychologia, 50, 3392–3402.
  • Franklin, D. W., Reichenbach, A., Franklin, S., & Diedrichsen, J. (2016). Temporal evolution of spatial computations for visuomotor control. Journal of Neuroscience, 36, 2329–2341.
  • Ganel, T., Chajut, E., & Algom, D. (2008). Visual coding for action violates fundamental psychophysical principles. Current Biology, 18, R599–R601.
  • Georgopoulos, A. P. (1994). New concepts in generation of movement. Neuron, 13, 257–268.
  • Gibson, J. J. (1966). The senses considered as perceptual systems. Houghton Mifflin.
  • Gibson, J. J. (1979). The ecological approach to visual perception. Houghton Mifflin.
  • Gillam, B., & Chambers, D. (1985). Size and position are incongruous: Measurements on the Müller-Lyer figure. Perception and Psychophysics, 37, 549–556.
  • Glover, S., & Dixon, P. (2002). Dynamic effects of the Ebbinghaus illusion in grasping: Support for a planning/control model of action. Perception and Psychophysics, 64, 266–278.
  • Goodale, M. A. (2011). Transforming vision into action. Vision Research, 51, 1567–1587.
  • Goodale, M. A., Jakobson, L. S., & Keillor, J. M. (1994a). Differences in the visual control of pantomimed and natural grasping movements. Neuropsychologia, 32, 1159–1178.
  • Goodale, M. A., Meenan, J. P., Bülthoff, H. H., Nicolle, D. A., Murphy, K. J., & Racicot, C. I. (1994b). Separate neural pathways for the visual analysis of object shape in perception and prehension. Current Biology, 4, 604–610.
  • Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25.
  • Goodale, M. A., Pelisson, D., & Prablanc, C. (1986). Large adjustments in visually guided reaching do not depend on vision of the hand or perception of target displacement. Nature, 320, 748–750.
  • Goodale, M. A., Westwood, D. A., & Milner, A. D. (2004). Two distinct modes of control for object-directed action. Progress in Brain Research, 144, 131–144.
  • Gordon, J., Ghilardi, M.-F., & Ghez, C. (1995). Impairments of reaching movements in patients without proprioception. I. Spatial errors. Journal of Neurophysiology, 73, 347–360.
  • Gregory, R. L. (1963). Distortion of visual space as inappropriate constancy scaling. Nature, 199, 678–680.
  • Gremmler, S., Bosco, A., Fattori, P., & Lappe, M. (2014). Saccadic adaptation shapes visual space in macaques. Journal of Neurophysiology, 111, 1846–1851.
  • Grill-Spector, K., & Malach, R. (2004). The human visual cortex. Annual Review of Neuroscience, 27, 649–677.
  • Harris, C. M. (1995). Does saccadic undershoot minimize saccadic flight-time? A Monte-Carlo study. Vision Research, 35, 691–701.
  • Harris, C. S. (1965). Perceptual adaptation to inverted, reversed, and displaced vision. Psychological Review, 72, 419–444.
  • Haxby, J. V., Grady, C. L., Horwitz, B., Ungerleider, L. G., Mishkin, M., Carson, R. E., Herscovitch, P., Schapiro, M. B., & Rapoport, S. I. (1991). Dissociation of object and spatial visual processing pathways in human extrastriate cortex. Proceedings of the National Academy of Sciences, 88, 1621–1625.
  • Hay, J. C., & Pick, H. L. (1966). Visual and proprioceptive adaptation to optical displacement of the visual stimulus. Journal of Experimental Psychology, 71, 150–158.
  • Hayhoe, M., & Ballard, D. (2005). Eye movements in natural behavior. Trends in Cognitive Sciences, 9, 188–194.
  • Henderson, J. M., & Hollingworth, A. (2003). Global transsaccadic change blindness during scene perception. Psychological Science, 14, 493–497.
  • Henriques, D. Y., & Cressman, E. K. (2012). Visuomotor adaptation and proprioceptive recalibration. Journal of Motor Behavior, 44, 435–444.
  • Hernandez, T. D., Levitan, C. A., Banks, M. S., & Schor, C. M. (2008). How does saccade adaptation affect visual perception? Journal of Vision, 8, Article 3.
  • Herwig, A., Weiss, K., & Schneider, W. X. (2015). When circles become triangular: How transsaccadic predictions shape the perception of shape. Annals of the New York Academy of Science, 1339, 97–105.
  • Hesse, C., Franz, V. H., & Schenk, T. (2016). Pointing and antipointing in Muller-Lyer figures: Why illusion effects need to be scaled. Journal of Experimental Psychology: Human Perception and Performance, 42, 90–102.
  • Hillis, J. M., Watt, S. J., Landy, M. S., & Banks, M. S. (2004). Slant from texture and disparity cues: Optimal cue combination. Journal of Vision, 4(12), 967–992.
  • Intoy, J., & Rucci, M. (2020). Finely tuned eye movements enhance visual acuity. Nature Communications, 11(1), Article 795.
  • Jackson, S. R., & Shaw, A. (2000). The Ponzo illusion affects grip-force but not grip-aperture scaling during prehension movements. Journal of Experimental Psychology: Human Perception and Performance, 26, 418–423.
  • Jeannerod, M. (1988). The neural and behavioural organization of goal-directed movements. Clarendon Press.
  • Johansson, R. S., Westling, G., Backstrom, A., & Flanagan, J. (2001). Eye–hand coordination in object manipulation. Journal of Neuroscience, 21, 6917–6932.
  • Jörges, B., & López-Moliner, J. (2017). Gravity as a strong prior: Implications for perception and action. Frontiers in Human Neuroscience, 11, Article 203.
  • Kanizsa, G. (1976). Subjective contours. Scientific American, 234(4), 48–52.
  • Khang, B. G., Koenderink, J. J., & Kappers, A. M. (2007). Shape from shading from images rendered with various surface types and light fields. Perception, 36, 1191–1213.
  • Kitago, T., Ryan, S. L., Mazzoni, P., Krakauer, J. W., & Haith, A. M. (2013). Unlearning versus savings in visuomotor adaptation: Comparing effects of washout, passage of time, and removal of errors on motor memory. Frontiers in Human Neuroscience, 7, Article 307.
  • Klein, L. K., Maiello, G., Paulun, V. C., & Fleming, R. W. (2020). Predicting precision grip grasp locations on three-dimensional objects. PLOS Computational Biology, 16(8), Article e1008081.
  • Kleinholdermann, U., Brenner, E., Franz, V. H., & Smeets, J. B. J. (2007). Grasping trapezoidal objects. Experimental Brain Research, 180, 415–420.
  • Knill, D. C. (2005). Reaching for visual cues to depth: The brain combines depth cues differently for motor control and perception. Journal of Vision, 5(2), 103–115.
  • Knill, D. C., & Saunders, J. A. (2003). Do humans optimally integrate stereo and texture information for judgments of surface slant? Vision Research, 43, 2539–2558.
  • Kopiske, K. K., Bruno, N., Hesse, C., Schenk, T., & Franz, V. H. (2016). The functional subdivision of the visual brain: Is there a real illusion effect on action? A multi-lab replication study. Cortex, 79, 130–152.
  • Körding, K. P., & Wolpert, D. M. (2006). Bayesian decision theory in sensorimotor control. Trends in Cognitive Sciences, 10, 319–326.
  • Kröller, J., De Graaf, J. B., Prablanc, C., & Pélisson, D. (1999). Effects of short-term adaptation of saccadic gaze amplitude on hand-pointing movements. Experimental Brain Research, 124, 351–362.
  • Kuling, I. A., Brenner, E., & Smeets, J. B. J. (2016). Errors in visuo-haptic and haptic-haptic location matching are stable over long periods of time. Acta Psychologica, 166, 31–36.
  • Kuling, I. A., de Brouwer, A. J., Smeets, J. B. J., & Flanagan, J. R. (2019). Correcting for natural visuo-proprioceptive matching errors based on reward as opposed to error feedback does not lead to higher retention. Experimental Brain Research, 237, 735–741.
  • Kuling, I. A., van der Graaff, M. C. W., Brenner, E., & Smeets, J. B. J. (2017). Matching locations is not just matching sensory representations. Experimental Brain Research, 235, 533–545.
  • Kurtzer, I., Crevecoeur, F., & Scott, S. H. (2014). Fast feedback control involves two independent processes utilizing knowledge of limb dynamics. Journal of Neurophysiology, 111, 1631–1645.
  • Land, M. F. (2006). Eye movements and the control of actions in everyday life. Progress in Retinal and Eye Research, 25, 296–324.
  • Lee, D. N., Young, D. S., Reddish, P. E., Lough, S., & Clayton, T. M. H. (1983). Visual timing in hitting an accelerating ball. Quarterly Journal of Experimental Psychology, 35, 333–346.
  • Leow, L. A., Marinovic, W., de Rugy, A., & Carroll, T. J. (2018). Task errors contribute to implicit aftereffects in sensorimotor adaptation. European Journal of Neuroscience, 48, 3397–3409.
  • Lillicrap, T. P., Moreno-Briseño, P., Diaz, R., Tweed, D. B., Troje, N. F., & Fernandez-Ruiz, J. (2013). Adapting to inversion of the visual field: A new twist on an old problem. Experimental Brain Research, 228, 327–339.
  • Lishman, J. R., & Lee, D. N. (1973). The autonomy of visual kinaesthesis. Perception, 2, 287–294.
  • Lisi, M., & Cavanagh, P. (2015). Dissociation between the perceptual and saccadic localization of moving objects. Current Biology, 25, 2535–2540.
  • Lisi, M., & Cavanagh, P. (2017). Different spatial representations guide eye and hand movements. Journal of Vision, 17(2), Article 12.
  • Liu, D., & Todorov, E. (2007). Evidence for the flexible sensorimotor strategies predicted by optimal feedback control. Journal of Neuroscience, 27, 9354–9368.
  • López-Moliner, J., Brenner, E., Louw, S., & Smeets, J. B. J. (2010). Catching a gently thrown ball. Experimental Brain Research, 206, 409–417.
  • Louw, S., Smeets, J. B. J., & Brenner, E. (2007). Judging surface slant for placing objects: A role for motion parallax. Experimental Brain Research, 183, 149–158.
  • Lu, Z., & Fiehler, K. (2020). Spatial updating of allocentric landmark information in real-time and memory-guided reaching. Cortex, 125, 203–214.
  • Maij, F., Brenner, E., & Smeets, J. B. J. (2011). Temporal uncertainty separates flashes from their background during saccades. Journal of Neuroscience, 31, 3708–3711.
  • Maij, F., Matziridi, M., Smeets, J. B. J., & Brenner, E. (2012). Luminance contrast in the background makes flashes harder to detect during saccades. Vision Research, 60, 22–27.
  • Mann, D. L., Ho, N. Y., De Souza, N. J., Watson, D. R., & Taylor, S. J. (2007). Is optimal vision required for the successful execution of an interceptive task? Human Movement Science, 26, 343–356.
  • Manning, C. D., Tolhurst, S. A., & Bawa, P. (2012). Proprioceptive reaction times and long-latency reflexes in humans. Experimental Brain Research, 221, 155–166.
  • Maravita, A., & Iriki, A. (2004). Tools for the body (schema). Trends in Cognitive Sciences, 8, 79–86.
  • Maresch, J., Werner, S., & Donchin, O. (2021). Methods matter: Your measures of explicit and implicit processes in visuomotor adaptation affect your results. European Journal of Neuroscience, 53(2), 504–518.
  • Marsden, C. D., Merton, P. A., & Morton, H. B. (1977). The sensory mechanism of servo action in human muscle. Journal of Physiology, 265, 521–535.
  • Massendari, D., Lisi, M., Collins, T., & Cavanagh, P. (2018). Memory-guided saccades show effect of a perceptual illusion whereas visually guided saccades do not. Journal of Neurophysiology, 119, 62–72.
  • Mather, G. (2006). Two-stroke: A new illusion of visual motion based on the time course of neural responses in the human visual system. Vision Research, 46, 2015–2018.
  • McDougle, S. D., Bond, K. M., & Taylor, J. A. (2015). Explicit and implicit processes constitute the fast and slow processes of sensorimotor learning. Journal of Neuroscience, 35, 9568–9579.
  • McIntosh, R. D., & Lashley, G. (2008). Matching boxes: Familiar size influences action programming. Neuropsychologia, 46, 2441–2444.
  • McIntyre, J., Zago, M., Berthoz, A., & Lacquaniti, F. (2001). Does the brain model Newton’s laws? Nature Neuroscience, 4, 693–694.
  • Mehler, D. M. A., Reichenbach, A., Klein, J., & Diedrichsen, J. (2017). Minimizing endpoint variability through reinforcement learning during reaching movements involving shoulder, elbow and wrist. PLOS ONE, 12(7), Article e0180803.
  • Milner, A. D., & Goodale, M. A. (2006). The visual brain in action (2nd ed.). Oxford University Press.
  • Milner, A. D., & Goodale, M. A. (2008). Two visual systems re-viewed. Neuropsychologia, 46, 774–785.
  • Mishkin, M., Ungerleider, L. G., & Macko, K. A. (1983). Object vision and spatial vision: Two cortical pathways. Trends in Neurosciences, 6, 414–417.
  • Mulckhuyse, M., van Zoest, W., & Theeuwes, J. (2008). Capture of the eyes by relevant and irrelevant onsets. Experimental Brain Research, 186, 225–235.
  • Muller, C. M., Brenner, E., & Smeets, J. B. J. (2009). Testing a counter-intuitive prediction of optimal cue combination. Vision Research, 49, 134–139.
  • Nakayama, R., & Holcombe, A. O. (2020). Attention updates the perceived position of moving objects. Journal of Vision, 20(4), Article 21.
  • O’Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24, 939–973.
  • O’Regan, J. K., Rensink, R. A., & Clark, J. J. (1999). Change-blindness as a result of “mudsplashes.” Nature, 398(6722), 34.
  • Paulignan, Y., Frak, V. G., Toni, I., & Jeannerod, M. (1997). Influence of object position and size on human prehension movements. Experimental Brain Research, 114, 226–234.
  • Paulun, V. C., Gegenfurtner, K. R., Goodale, M. A., & Fleming, R. W. (2016). Effects of material properties and object orientation on precision grip kinematics. Experimental Brain Research, 234, 2253–2265.
  • Pavani, F., Boscagli, I., Benvenuti, F., Rabuffetti, M., & Farnè, A. (1999). Are perception and action affected differently by the Titchener circles illusion? Experimental Brain Research, 127, 95–101.
  • Pélisson, D., Alahyane, N., Panouillères, M., & Tilikete, C. (2010). Sensorimotor adaptation of saccadic eye movements. Neuroscience and Biobehavioral Reviews, 34, 1103–1120.
  • Pinter, I. J., van Soest, A. J., Bobbert, M. F., & Smeets, J. B. J. (2012). Conclusions on motor control depend on the type of model used to represent the periphery. Biological Cybernetics, 106, 441–451.
  • Pisella, L., Gréa, H., Tilikete, C., Vighetto, A., Desmurget, M., Rode, G., Boisson, D., & Rossetti, Y. (2000). An “automatic pilot” for the hand in human posterior parietal cortex: Toward reinterpreting optic ataxia. Nature Neuroscience, 3, 729–736.
  • Pressigout, A., Paeye, C., & Doré-Mazars, K. (2020). Saccadic adaptation shapes perceived size: Common codes for action and perception. Attention, Perception and Psychophysics, 82, 3676–3685.
  • Rana, A., Butler, A. A., Gandevia, S. C., & Héroux, M. E. (2020). Judgements of hand location and hand spacing show minimal proprioceptive drift. Experimental Brain Research, 238, 1759–1767.
  • Redding, G. M., & Wallace, B. (2002). Strategic calibration and spatial alignment: A model from prism adaptation. Journal of Motor Behavior, 34, 126–138.
  • Resulaj, A., Kiani, R., Wolpert, D. M., & Shadlen, M. N. (2009). Changes of mind in decision-making. Nature, 461, 263–266.
  • Reynolds, R. F., & Day, B. L. (2012). Direct visuomotor mapping for fast visually evoked arm movements. Neuropsychologia, 50, 3169–3173.
  • Rincon-Gonzalez, L., Buneo, C. A., & Helms Tillery, S. I. (2011). The proprioceptive map of the arm is systematic and stable, but idiosyncratic. PLOS ONE, 6, Article e25214.
  • Rossetti, Y., Pisella, L., & McIntosh, R. D. (2017). Rise and fall of the two visual systems theory. Annals of Physical and Rehabilitation Medicine, 60, 130–140.
  • Rucci, M., & Victor, J. D. (2015). The unsteady eye: An information-processing stage, not a bug. Trends in Neurosciences, 38, 195–206.
  • Rushton, S. K., & Wann, J. P. (1999). Weighted combination of size and disparity: A computational model for timing a ball catch. Nature Neuroscience, 2, 186–190.
  • Salinas, M. M., Wilken, J. M., & Dingwell, J. B. (2017). How humans use visual optic flow to regulate stepping during walking. Gait & Posture, 57, 15–20.
  • Scarfe, P., & Hibbard, P. B. (2011). Statistically optimal integration of biased sensory estimates. Journal of Vision, 11(7), Article 12.
  • Schenk, T. (2006). An allocentric rather than perceptual deficit in patient DF. Nature Neuroscience, 9, 1369–1370.
  • Schenk, T., & McIntosh, R. D. (2010). Do we have independent visual streams for perception and action? Cognitive Neuroscience, 1, 52–62.
  • Schnier, F., & Lappe, M. (2012). Mislocalization of stationary and flashed bars after saccadic inward and outward adaptation of reactive saccades. Journal of Neurophysiology, 107, 3062–3070.
  • Schofield, A. J., Bishop, N. J., & Allan, J. (2006). Oscillatory motion induces change blindness. Acta Psychologica, 121, 249–274.
  • Schot, W. D., Brenner, E., & Smeets, J. B. J. (2010). Posture of the arm when grasping spheres to place them elsewhere. Experimental Brain Research, 204, 163–171.
  • Schot, W. D., Brenner, E., & Smeets, J. B. J. (2017). Unusual prism adaptation reveals how grasping is controlled. eLife, 6, Article e21440.
  • Schot, W. D., Brenner, E., Sousa, R., & Smeets, J. B. J. (2012). Are people adapted to their own glasses? Perception, 41, 991–993.
  • Schreij, D., Owens, C., & Theeuwes, J. (2008). Abrupt onsets capture attention independent of top-down control settings. Perception and Psychophysics, 70, 208–218.
  • Servos, P., Goodale, M. A., & Jakobson, L. S. (1992). The role of binocular vision in prehension: A kinematic analysis. Vision Research, 32, 1513–1521.
  • Shadmehr, R., Smith, M. A., & Krakauer, J. W. (2010). Error correction, sensory prediction, and adaptation in motor control. Annual Review of Neuroscience, 33, 89–108.
  • Shapiro, A., Lu, Z. L., Huang, C. B., Knight, E., & Ennis, R. (2010). Transitions between central and peripheral vision create spatial/temporal distortions: A hypothesis concerning the perceived break of the curveball. PLOS ONE, 5(10), Article e13296.
  • Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28, 1059–1074.
  • Simons, D. J., Franconeri, S. L., & Reimer, R. L. (2000). Change blindness in the absence of a visual disruption. Perception, 29, 1143–1154.
  • Sittig, A. C., Denier van der Gon, J. J., & Gielen, C. C. A. M. (1987). The contribution of afferent information on position and velocity to the control of slow and fast human forearm movements. Experimental Brain Research, 67, 33–40.
  • Smeets, J. B. J., & Brenner, E. (1995). Perception and action are based on the same visual information: Distinction between position and velocity. Journal of Experimental Psychology: Human Perception and Performance, 21, 19–31.
  • Smeets, J. B. J., & Brenner, E. (1999). A new view on grasping. Motor Control, 3, 237–271.
  • Smeets, J. B. J., & Brenner, E. (2002). Does a complex model help to understand grasping? Experimental Brain Research, 144, 132–135.
  • Smeets, J. B. J., & Brenner, E. (2004). Curved movement paths and the Hering illusion: Positions or directions? Visual Cognition, 11, 255–274.
  • Smeets, J. B. J., & Brenner, E. (2008). Grasping Weber’s law. Current Biology, 18, R1089–R1090.
  • Smeets, J. B. J., & Brenner, E. (2019). Some illusions are more inconsistent than others. Perception, 48, 638–641.
  • Smeets, J. B. J., Brenner, E., & Biegstraaten, M. (2002a). Independent control of the digits predicts an apparent hierarchy of visuomotor channels in grasping. Behavioural Brain Research, 136, 427–432.
  • Smeets, J. B. J., Brenner, E., de Grave, D. D., & Cuijpers, R. H. (2002b). Illusions in action: Consequences of inconsistent processing of spatial attributes. Experimental Brain Research, 147, 135–144.
  • Smeets, J. B. J., Glover, S., & Brenner, E. (2003). Modeling the time-dependent effect of the Ebbinghaus illusion on grasping. Spatial Vision, 16, 311–324.
  • Smeets, J. B. J., Kleijn, E., van der Meijden, M., & Brenner, E. (2020). Why some size illusions affect grip aperture. Experimental Brain Research, 238, 969–979.
  • Smeets, J. B. J., Martin, J., & Brenner, E. (2010). Similarities between digits’ movements in grasping, touching and pushing. Experimental Brain Research, 203, 339–346.
  • Smeets, J. B. J., Oostwoud Wijdenes, L., & Brenner, E. (2016). Movement adjustments have short latencies because there is no need to detect anything. Motor Control, 20, 137–148.
  • Smeets, J. B. J., Sousa, R., & Brenner, E. (2009). Illusions can warp visual space. Perception, 38, 1467–1480.
  • Smeets, J. B. J., van den Dobbelsteen, J. J., de Grave, D. D., van Beers, R. J., & Brenner, E. (2006). Sensory integration does not lead to sensory calibration. Proceedings of the National Academy of Sciences, 103, 18781–18786.
  • Smeets, J. B. J., van der Kooij, K., & Brenner, E. (2019). A review of grasping as the movements of digits in space. Journal of Neurophysiology, 122, 1578–1597.
  • Smith, M. A., Ghazizadeh, A., & Shadmehr, R. (2006). Interacting adaptive processes with different timescales underlie short-term motor learning. PLOS Biology, 4(6), Article e179.
  • Sober, S. J., & Sabes, P. N. (2003). Multisensory integration during motor planning. Journal of Neuroscience, 23, 6982–6992.
  • Somers, J. T., Das, V. E., Dell’Osso, L. F., & Leigh, R. J. (2000). Saccades to sounds: Effects of tracking illusory visual stimuli. Journal of Neurophysiology, 84, 96–101.
  • Sousa, R., Smeets, J. B. J., & Brenner, E. (2012). Does size matter? Perception, 41, 1532–1534.
  • Todd, J. T., Egan, E. J., & Phillips, F. (2014). Is the perception of 3D shape from shading based on assumed reflectance and illumination? i-Perception, 5(6), 497–514.
  • Todd, J. T., & Oomes, A. H. (2002). Generic and non-generic conditions for the perception of surface shape from texture. Vision Research, 42, 837–850.
  • Todd, J. T., Oomes, A. H., Koenderink, J. J., & Kappers, A. M. (2001). On the affine structure of perceptual space. Psychological Science, 12, 191–196.
  • Tresilian, J. R. (1998). Attention in action or obstruction of movement? A kinematic analysis of avoidance behavior in prehension. Experimental Brain Research, 120, 352–368.
  • Triesch, J., Ballard, D. H., Hayhoe, M. M., & Sullivan, B. T. (2003). What you see is what you need. Journal of Vision, 3(1), 86–94.
  • Tse, P. U., & Hsieh, P. J. (2006). The infinite regress illusion reveals faulty integration of local and global motion signals. Vision Research, 46, 3881–3885.
  • Tseng, Y. W., Diedrichsen, J., Krakauer, J. W., Shadmehr, R., & Bastian, A. J. (2007). Sensory prediction errors drive cerebellum-dependent adaptation of reaching. Journal of Neurophysiology, 98, 54–62.
  • Turvey, M. T., & Carello, C. (2011). Obtaining information by dynamic (effortful) touching. Philosophical Transactions of the Royal Society B, 366, 3123–3132.
  • Uccelli, S., Pisu, V., Riggio, L., & Bruno, N. (2019). The Uznadze illusion reveals similar effects of relative size on perception and action. Experimental Brain Research, 237, 953–965.
  • Ueda, H., Abekawa, N., & Gomi, H. (2018). The faster you decide, the more accurate localization is possible: Position representation of “curveball illusion” in perception and eye movements. PLOS ONE, 13(8), Article e0201610.
  • Utz, K. S., Hesse, C., Aschenneller, N., & Schenk, T. (2015). Biomechanical factors may explain why grasping violates Weber’s law. Vision Research, 111, 22–30.
  • Valsecchi, M., Cassanello, C., Herwig, A., Rolfs, M., & Gegenfurtner, K. R. (2020). A comparison of the temporal and spatial properties of trans-saccadic perceptual recalibration and saccadic adaptation. Journal of Vision, 20(4), Article 2.
  • Valsecchi, M., & Gegenfurtner, K. R. (2016). Dynamic re-calibration of perceived size in fovea and periphery through predictable size changes. Current Biology, 26, 59–63.
  • van Beers, R. J. (2009). Motor learning is optimally tuned to the properties of motor noise. Neuron, 63, 406–417.
  • van Beers, R. J., Sittig, A. C., & Denier van der Gon, J. J. (1998). The precision of proprioceptive position sense. Experimental Brain Research, 122, 367–377.
  • van Beers, R. J., van Mierlo, C. M., Smeets, J. B. J., & Brenner, E. (2011). Reweighting visual cues by touch. Journal of Vision, 11(10), Article 20.
  • van de Kamp, C., Bongers, R. M., & Zaal, F. T. J. M. (2009). Effects of changing object size during prehension. Journal of Motor Behavior, 41, 427–435.
  • van den Dobbelsteen, J. J., Brenner, E., & Smeets, J. B. J. (2003). Adaptation of movement endpoints to perturbations of visual feedback. Experimental Brain Research, 148, 471–481.
  • van der Kooij, K., Brenner, E., van Beers, R. J., Schot, W. D., & Smeets, J. B. J. (2013). Alignment to natural and imposed mismatches between the senses. Journal of Neurophysiology, 109, 1890–1899.
  • van der Kooij, K., Overvliet, K. E., & Smeets, J. B. J. (2016). Temporally stable adaptation is robust, incomplete and specific. European Journal of Neuroscience, 44, 2708–2715.
  • van der Kooij, K., & Smeets, J. B. J. (2019). Reward-based motor adaptation can generalize across actions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45, 71–81.
  • van Lier, R. (1999). Investigating global effects in visual occlusion: From a partly occluded square to the back of a tree-trunk. Acta Psychologica, 102, 203–220.
  • van Mierlo, C. M., Louw, S., Smeets, J. B. J., & Brenner, E. (2009). Slant cues are processed with different latencies for the online control of movement. Journal of Vision, 9(3), Article 25.
  • Vaswani, P. A., Shmuelof, L., Haith, A. M., Delnicki, R. J., Huang, V. S., Mazzoni, P., Shadmehr, R., & Krakauer, J. W. (2015). Persistent residual errors in motor adaptation tasks: Reversion to baseline and exploratory escape. Journal of Neuroscience, 35(17), 6969–6977.
  • Vaughan, J., Rosenbaum, D. A., & Meulenbroek, R. G. J. (2001). Planning reaching and grasping movements: The problem of obstacle avoidance. Motor Control, 5, 116–135.
  • Veerman, M. M., Brenner, E., & Smeets, J. B. J. (2008). The latency for correcting a movement depends on the visual attribute that defines the target. Experimental Brain Research, 187, 219–228.
  • Volcic, R., & Domini, F. (2018). The endless visuomotor calibration of reach-to-grasp actions. Scientific Reports, 8(1), Article 14803.
  • Voudouris, D., Smeets, J. B. J., & Brenner, E. (2012a). Do humans prefer to see their grasping points? Journal of Motor Behavior, 44, 295–304.
  • Voudouris, D., Smeets, J. B. J., & Brenner, E. (2012b). Do obstacles affect the selection of grasping points? Human Movement Science, 31, 1090–1102.
  • Voudouris, D., Smeets, J. B. J., & Brenner, E. (2013). Ultra-fast selection of grasping points. Journal of Neurophysiology, 110, 1484–1489.
  • Wallis, G., Chatziastros, A., & Bülthoff, H. (2002). An unexpected role for visual feedback in vehicle steering control. Current Biology, 12, 295–299.
  • Walter, E., & Dassonville, P. (2008). Visuospatial contextual processing in the parietal cortex: An fMRI investigation of the induced Roelofs effect. Neuroimage, 42, 1686–1697.
  • Warren, W. H., Jr. (1984). Perceiving affordances: Visual guidance of stair climbing. Journal of Experimental Psychology: Human Perception and Performance, 10, 683–703.
  • Warren, W. H., Jr., Kay, B. A., Zosh, W. D., Duchon, A. P., & Sahuc, S. (2001). Optic flow is used to control human walking. Nature Neuroscience, 4, 213–216.
  • Warren, W. H. Jr., & Whang, S. (1987). Visual guidance of walking through apertures: Body-scaled information for affordances. Journal of Experimental Psychology: Human Perception and Performance, 13, 371–383.
  • Weidner, R., & Fink, G. R. (2007). The neural mechanisms underlying the Muller-Lyer illusion and its interaction with visuospatial judgments. Cerebral Cortex, 17, 878–884.
  • Whiting, H. T. A., & Sharp, R. H. (1974). Visual occlusion factors in a discrete ball-catching task. Journal of Motor Behavior, 6, 11–16.
  • Witt, J. K., Proffitt, D. R., & Epstein, W. (2005). Tool use affects perceived distance, but only when you intend to use it. Journal of Experimental Psychology: Human Perception and Performance, 31, 880–888.
  • Yilmaz, E. H., & Warren, W. H., Jr. (1995). Visual control of braking: A test of the tau hypothesis. Journal of Experimental Psychology: Human Perception and Performance, 21, 996–1014.
  • Zimmermann, E., & Lappe, M. (2016). Visual space constructed by saccade motor maps. Frontiers in Human Neuroscience, 10, Article 225.