Printed from Oxford Research Encyclopedias, Psychology. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 02 October 2023

Own-Body Perception

  • Dorothy Cowie, Psychology Department, Durham University


It has long been known that there are topographic maps of the body in primary sensory and motor cortices. While these maps have greater representation of sensitive body parts, the fact that we do not feel these distortions in everyday sensory experience indicates that there are also higher-level corrective processes involved in tactile perception. Beyond perceptions on the body, one’s own body is perceived as distinct from external objects, and this perception gives rise to a feeling of ownership over the body—that my body is mine or belongs to me. This arises from both bottom-up and top-down sensory signals. In the rubber-hand illusion, stroking on a fake hand induces the participant to feel that it is their own. Therefore, the sight of a body, and the synchrony of visual and tactile signals on it, are important cues to body ownership. Other forms of multisensory synchrony, including movement and interoceptive signals, also contribute. Prior expectations of the body’s posture and form constrain the extent to which these sensory signals produce feelings of ownership. Since body ownership arises from a multiplicity of signals, it is subject to significant individual differences. There is also plasticity in body representation. This is demonstrated by neural reorganization in individuals with congenital limb loss and by developmental effects. While very young infants are sensitive to the multisensory signals that drive body ownership (e.g., visuotactile synchrony), it takes substantial experience for the tactile sensations of the body to be flexibly coded in appropriate reference frames; likewise, children up to 10 years old tend to embody an appropriately oriented hand more than adults. Understanding own-body representation has important applications, including for tool use, prosthetic design, and virtual reality.


  • Neuropsychology

Body Maps

One of the oldest findings in neuroscience is the existence of topographic maps of the body in the somatosensory and motor cortices. Following from work on nonhuman primates (Leyton & Sherrington, 1917), Penfield discovered that stimulating a particular region of the human precentral gyrus—the primary motor cortex—would elicit movement of a particular body part (Penfield & Boldrey, 1937). Likewise, stimulating a region of postcentral gyrus (primary somatosensory cortex) would elicit a feeling of tingling or touch on a particular body part. Systematically uncovering the relationship between brain area and body parts revealed parallel motor and tactile maps of the body, known as sensorimotor homunculi (Kaas et al., 1979; Merzenich et al., 1984). In these maps, highly sensitive and highly used parts of the body—such as the hands and mouth—are overrepresented, resulting in a distorted map. Given that touch signals inherently carry no spatial information, such a spatial framework is necessary to relate touches to the body structure and enable goal-directed actions.

Maps or representations of the body exist not only in primary somatosensory and motor cortex, but also in higher-level areas that likely mediate our conscious sense of body size and shape (Sereno & Huang, 2006). This is demonstrated by several tasks that contrast the gross distortions that one would expect from the primary somatosensory maps with the subtle distortions that are actually found. For example, sensitivity to touch on various body parts can be measured by asking the participant to judge which of two tactile stimuli is larger. On highly sensitive body parts, such as hands, there is higher spatial resolution—but nowhere near as much as would be predicted from the primary maps (Longo & Haggard, 2011). Likewise, if a participant points to salient locations on their own covered hand, the resulting map is distorted so that the hand (Longo & Haggard, 2010)—or face (Mora et al., 2018)—is perceived as wider and shorter than it really is. Crucially, however, these distortions are far smaller than those found in the primary sensory maps (Taylor-Clarke et al., 2004). The existence of higher-level maps is inferred: these are believed to dampen the gross distortions arising from primary areas (the “reverse distortion” hypothesis; Linkenauger et al., 2015).

Many proposals have been put forward to describe such high-level body representations. A classic distinction is between a “body schema”—an online, unconscious representation of the body used for action planning—and a “body image”—a longer-term, conscious representation of the body (de Vignemont, 2018). Further proposed decompositions include body maps that serve somatosensory perception (Longo, 2015), those that describe the layout and structure of the body (Schwoebel & Coslett, 2005), and those that serve action (de Vignemont, 2018). Understanding the nature and interaction of these multiple body maps is a significant remaining challenge in the field.

Sensory Building Blocks of Body Ownership

The body is a special perceptual object. One perceives the relevant sensory signals (Figure 1) and makes judgments (“This is the touch of a soft object on my hand”). In addition, one feels a sense of ownership over the body (“This is my hand”). The existence of a distinct neural substrate for ownership is underlined by the phenomenon of somatoparaphrenia. This delusional state sometimes arises after right-hemisphere damage resulting in hemiplegia (Vallar & Ronchi, 2009). In this condition, the patient can see and feel their left arm, yet refuses to accept that it belongs to them, believing instead that it belongs to another person. Somatoparaphrenia is often used to illustrate that the sensory perception of a body part is distinct from the feeling that it belongs to one’s own body.

Figure 1. Sensory contributions to body ownership. In determining whether a body part is one’s own, the observer uses information from multiple senses. Here, an observer uses the sight of the hand, proprioceptive information from muscles and joints, and tactile signals on the hand. Additionally, temporal synchrony between the cues is a strong cue to body ownership.

How then does this feeling of body ownership arise, and how can we study it? Recent years have seen a burgeoning experimental approach to studying body ownership. The field grew following a classic paper on the rubber-hand illusion (Botvinick & Cohen, 1998), in which sensory inputs were used to make participants feel like a fake hand was their own hand. The assumption is that by exploring the circumstances under which participants can be made to accept illusory bodies as their own, we learn something about the sensory and cognitive factors that mediate the sense of body ownership in everyday life.

In the rubber-hand illusion, a participant has their own hand hidden from view while a fake hand is visible on the table in front of them. The hidden real hand and the visible fake hand are stroked in synchrony—that is, at the same times, and on equivalent fingers. When asked, around 75% of participants (Makin et al., 2008) feel that the fake hand is actually part of their own body. Two further effects are reported. First, the participant perceives the touch of the brush at the location of the fake hand. Second, when asked to point underneath their own finger, the participant instead points further toward the fake hand than at baseline. It is important to take into account each participant’s baseline pointing accuracy, which varies considerably across individuals. The change from baseline to post-illusion pointing is known as “proprioceptive drift.” When stroking on fake and real hands is not synchronous (visuotactile discrepancy exceeds 300 ms; Shimada et al., 2014), the illusory effects are not present. From this, it is argued that visuotactile correlations contribute strongly to a sense of body ownership (Tsakiris, 2010). Comprehensive analyses of the illusion (Gonzalez-Franco & Peck, 2018; Longo et al., 2008) have consistently revealed these changes in ownership, touch referral, and perceived hand location as the core effects of the rubber-hand illusion.

A parsimonious model of the illusion was put forward by Makin and colleagues (2008). In their model, the sight of a hand in peripersonal space (the space immediately surrounding the body) activates a population of neurons in the posterior intraparietal sulcus, known from animal work (Rizzolatti et al., 1981) to encode stimuli in body-based coordinate frames of reference (the neurons fire when a stimulus is viewed near the hand, irrespective of the hand’s position in external space). The position of one’s own hand is consequently perceptually pulled toward the location of the fake hand. Further, the neurons begin to code incoming sensory stimuli in relation to the fake hand. Accordingly, the visually perceived brushstrokes near the fake hand become perceptually bound together with tactile sensations felt on one’s own hand and one feels the touch of the brush at the location of the fake hand. The touch of the brush on the fake hand triggers a feeling that it is the participant’s own hand, and this sense of ownership over the hand in turn reinforces and strengthens the visually perceived position of the hand, pulling it further toward the location of the fake hand. It is believed that the sense of ownership arises in premotor cortex, in which subjective ratings of ownership during the illusion correlate with the Blood-Oxygenation-Level-Dependent (BOLD) signal, an indirect index of neural activity (Ehrsson et al., 2004).

The involvement of perihand mechanisms (those processing peripersonal space around the hand) is supported by the finding that for the illusion to occur, the fake hand must be viewed in perihand space—ownership drops off exponentially as the distance between real and fake hands increases (Lloyd, 2007). Further, drift and ownership are separable. In the original paper on the rubber-hand illusion, which used a small sample of participants, proprioceptive drift and ownership ratings were correlated. As a result, proprioceptive drift has been used as a stand-alone index of illusory body ownership. However, large samples have found no correlation between the measures (Rohde et al., 2011). Drift and ownership also proceed on different time courses. Drift begins early, by around 5 s of exposure time (Lloyd, 2007). In fact, drift produced by merely viewing a hand is often comparable in magnitude to that produced following synchronous visuotactile stroking (Carey et al., 2018; Filippetti & Crucianelli, 2019). Ownership is reported later, at 30 to 60 s (Kalckert & Ehrsson, 2017; Keenaghan et al., 2020); drift also continues to build for up to 2 minutes (Rohde et al., 2011; Tsakiris & Haggard, 2005). Therefore, the classical difference between synchronous and asynchronous conditions in the rubber-hand illusion may derive largely from the fact that asynchronous stroking breaks the initial illusion, which is primarily elicited by the simple visual experience of seeing a handlike object (Hohwy & Paton, 2010).

Following this line of argument, classic rubber-hand illusion studies can be said to show three things: that own-body perception is strongly driven by the mere sight of a body part; that conflict between multisensory signals, which are normally synchronous, weakens the sense of ownership; and that own-body perception is not a single, unitary experience, since its components (such as ownership and proprioceptive drift) can dissociate.

Since this early work, it has also become clear that several other sensory inputs can contribute to bodily perception. First, the body is of course typically experienced during movement. Vision of a moving body part is often coupled with kinesthetic signals regarding its movement. These visuomotor correlations can in fact be used to produce a moving version of the rubber-hand illusion. Here, the movement of a virtual hand—or indeed the full body—is tracked by sensitive motion-capture systems. With minimal delay, the real hand’s movement drives the movement of a virtual hand or body. This in turn evokes a feeling of ownership over the virtual hand (Kilteni et al., 2012; Kokkinara & Slater, 2014; Sanchez-Vives et al., 2010). The powerful social applications of such dynamic, embodied environments are discussed in the section “Applications of Understanding Own-Body Perception.”

Second, it has become clear that affective and interoceptive signals of internal bodily states contribute to body ownership. These signals from within the body may enable the special, pre-reflexive perception of one’s own body not merely as a visual object set against a background, but as a global, embodied self (Seth & Tsakiris, 2018). Indeed, de Vignemont (2018) proposed that it is the affective (or emotional) response to a body and its internal physiological state that signals its acceptance as one’s own. Thus, once an illusion of body ownership has been induced, a threat to the real or fake body can produce heightened physiological arousal, such as skin conductance responses (Braithwaite et al., 2017; Ehrsson, 2007).

The evidence that interoceptive signals are used in body ownership comes from several sources. Affective components of bodily signals can affect ownership in the rubber-hand illusion—slow touches, which activate C-tactile afferents, are more effective than punctate touches (Crucianelli et al., 2018). The heartbeat—registered in the brain as the heart-evoked response (Schandry & Montoya, 1996)—affects perceptual processing of the body. When the outline of a viewed virtual body flashes in time with the participant’s heartbeat, it enhances the feeling that the virtual body is one’s own (Aspell et al., 2013; Suzuki et al., 2013). Further, conscious perception of one’s own heartbeat can also be measured by asking participants to count the number of their own heartbeats felt in a fixed period of time (Schandry, 1981). Those with lower performance on this task are more susceptible to the rubber-hand illusion, showing more drift and ownership than those with high interoceptive accuracy (Tsakiris et al., 2011). Thus, individuals with high interoceptive awareness have lower sensitivity to external signals for bodily self-consciousness.

This work also underlines that the weighting of the multisensory signals used to perceive one’s own body may vary markedly across individuals, with some trade-off between internal and external signals. Understanding individual differences in own-body perception is crucial for future work, since, for example, only 75% of people experience the rubber-hand illusion (Makin et al., 2008).

Constraints on Body Ownership

A wealth of evidence therefore indicates that both sensory and multisensory information are crucial for providing a sense of bodily self. Early models suggested that sensory information was sufficient for embodiment: that any object might be embodied, provided that it is presented in the appropriate sensory context. Indeed, an early study showed that ownership ratings were higher for a table stroked synchronously with one’s own hidden hand than a table stroked with the hand visible (Armel & Ramachandran, 2003). However, many subsequent studies have shown that the perception of an object as one’s own body is weak—if at all present—for objects that do not resemble a human body part. Further, the particular view of the body, its posture, and the perspective of the participant are important. Here, the constraints of posture, perspective, and form are discussed in turn.

In very early studies of the rubber-hand illusion, it was shown that viewing a fake hand oriented at 90° or 180° to one’s own produced negligible illusory experiences (Ehrsson et al., 2004; Tsakiris & Haggard, 2005). This suggests that one experiences ownership over a body part only if it is viewed in a typical posture, and that anatomically impossible postures break the sense of ownership. More detailed work has shown that there is careful comparison of the current proprioceptively perceived posture of one’s own body and the viewed posture of the fake hand. If the hands are misaligned by 10°, the sense of ownership drops significantly. Further, for a strong illusion to occur, the viewed and felt orientation of the brushstrokes must match, and the comparison is made within a hand-centered frame of reference (Costantini & Haggard, 2007). The posture of the body is therefore a crucial component of body ownership.

Relatedly, the body must be observed from a plausible perspective: incorrect posture may in fact break the illusion precisely because it breaks the usual perspective on the body. Comparison of a first-person perspective on the body (looking down on it as one normally would) and a third-person perspective (looking at it from a different angle) consistently shows higher feelings of body ownership for the first-person view (Maselli & Slater, 2013). Such studies have typically used a full-body illusion, where the participant views a full body (comprising trunk and limbs) through a headset (Petkova et al., 2011). If the body is viewed with first-person perspective, the participant feels a sense of ownership over it, to which additional visuotactile or visuomotor signals add little (Carey et al., 2018; Maselli & Slater, 2013). Thus, first-person perspective alone is sufficient for a bodily illusion to occur. This mirrors the rubber-hand illusion, in which visual capture alone is important. A first-person perspective on a body can also induce changes in self-location. When participants view a trunk and legs in the same manner that one would normally experience (i.e., looking down), but located at a different spot in the room from the real body’s location, hippocampal and related mechanisms support a recalibration of the felt spatial location within the environment so that the participant feels they are viewing their own body in a different place within the room (Guterstam et al., 2015). Interestingly, showing somatoparaphrenic patients their “disowned” hand in a mirror view enables them to feel ownership over it, while returning to a first-person view of the body elicits feelings of “disownership” once again (Fotopoulou et al., 2011). Thus, third- and first-person views of the body are dissociable.

While a first-person perspective on a body may be sufficient for eliciting a feeling of ownership, there is some debate over whether it is necessary in healthy participants (Serino et al., 2013). To address this question, several paradigms have examined cases where there is a third-person perspective on the fake body, with congruent visuotactile stimulation between the real and fake body. Participants viewing a body from a slightly misoriented perspective give neutral ratings of ownership; this is difficult to interpret (Maselli & Slater, 2013). In another paradigm, participants see brushstrokes applied to the back of a virtual body, while temporally congruent strokes are felt on their real body (Lenggenhager et al., 2007). There is some debate about whether this elicits a sense of ownership over the viewed body (Maselli & Slater, 2014). It clearly affects perceived self-location: when the participant is moved a little and is asked to return to their place, they in fact walk toward the fake body (Lenggenhager et al., 2007, 2009). Further, touch is remapped into the spatial reference frame of the fake body (Aspell et al., 2009; Maselli & Slater, 2014). Such out-of-body spatial experiences are thought to arise in the temporoparietal junction, an area that integrates vestibular, visual, proprioceptive, and tactile signals, in which neurological damage can result in bodily hallucinations (Blanke & Mohr, 2005; Ionta et al., 2011) and direct stimulation can induce out-of-body experiences (Blanke et al., 2002). Therefore, first-person perspective may not be necessary for at least some aspects of own-body perception.

In addition to constraints of posture and perspective, many studies have shown that the form of the body is key to driving a sense of ownership. Specifically, there seems to be a minimum of corporeality that is needed for something to feel part of one’s own body. Early work showed reduced ownership in the rubber-hand illusion when the viewed object was a stick (Tsakiris & Haggard, 2005), a block (Tsakiris et al., 2010), a wire hand (Bertamini & O’Sullivan, 2014), a hand-shaped block with wooden texture (Tsakiris et al., 2010), or a sheet with skin-like texture (Haans et al., 2008). Likewise, a full body illusion was reduced for a block (Lenggenhager et al., 2007; Petkova & Ehrsson, 2008). The form and texture of a body are often confounded into a single factor termed “corporeality.” A study by Haans et al. (2008) examined the roles of shape and texture in a factorial design and found that for subjective ratings (although not drift), shape was the most crucial aspect: texture contributed only when the object was already hand-shaped. One promising recent theory suggests that the minimum corporeality needed for embodiment might be determined by functionality, rather than shape or texture (Aymerich-Franch & Ganesh, 2016; Aymerich-Franch et al., 2017). Testing this theory fully requires decoupling these factors further (Schwind et al., 2017). We must also bear in mind that effects may not be linear. In the “uncanny valley” effect, a hand that is almost humanlike, but not quite, is judged as significantly more eerie than a hand that is either perfectly human or very mechanical (Poliakoff et al., 2013).

While there is clearly a minimum threshold of form needed for embodiment, one can conversely ask whether a “superhuman” body can be embodied. Can one feel a sense of ownership—or learn to control—a body with more functionality and/or parts than one’s own? This is the focus of some pioneering computer science work on “homuncular flexibility” (Won et al., 2015)—the idea that we can learn to control nonhuman body forms. Results so far show that humans can learn to control a hand with six fingers (Hoyet et al., 2016), an elongated arm (Kilteni et al., 2012), a body with three arms (Schaefer et al., 2009; Won et al., 2015), or a tail (Steptoe et al., 2013). However, competently controlling such a body is not necessarily coupled to feeling ownership over it (Rohde et al., 2011), and this is important to bear in mind for understanding the “upper bounds” of embodiment.

Finally, there is mixed evidence regarding embodiment of differently sized bodies. While some studies suggest that adults will accept a larger, but not a smaller, hand than their own (Marino et al., 2010; Pavani & Zampini, 2007), others show that adults will accept as their own a smaller hand (Bruno & Bertamini, 2010) or a doll-sized body (van der Hoort et al., 2011).

How can we understand this network of constraints in bodily illusions, and what does it tell us about own-body perception? In a classic model (Tsakiris, 2010), incoming sensory information must pass through a series of “gates.” First, the form of the viewed body part is compared with one’s own form; next, their postures are compared; finally, the temporal and spatial alignments of viewed and felt brushstrokes are compared. Only after all comparisons are successfully completed is a body accepted as one’s own. If at any stage in this process the viewed body part does not match the top-down information about one’s own body form, posture, or felt touch, then the body part will not be accepted as one’s own. Another formulation (Apps & Tsakiris, 2014) takes a predictive coding approach, which specifies further the kind of top-down information that might inform such comparisons. This model proposes that prior expectations or knowledge about one’s own body are probabilistically encoded and are dynamically updated to minimize the discrepancy with incoming sensory information. This allows for plasticity in embodiment (see section “Plasticity and Development of Own-Body Perception”). Similarly, in Bayesian causal inference models (Kilteni et al., 2015), the brain uses multisensory information as well as prior knowledge to infer the likelihood that incoming sensory information arises from two separate locations (the fake and real hands) or merely one (in which case, the real hand and the fake hand are perceived to be one).
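The causal-inference account lends itself to a concrete computation. The sketch below is illustrative only: it adopts the standard Gaussian scheme of Körding et al. (2007), which Kilteni et al. (2015) adapt to body ownership, with a 1-D position space and hand-picked noise parameters (all assumptions, not values from the cited studies). It returns the posterior probability that the seen (fake) and felt (real) hand positions share a single common cause—the quantity proposed to underlie accepting the fake hand as one’s own.

```python
import math


def common_cause_posterior(x_vis, x_prop, sigma_vis, sigma_prop,
                           sigma_prior, p_common):
    """Posterior probability that visual and proprioceptive hand-position
    estimates arise from one common source (C = 1) rather than two
    independent sources (C = 2). Positions are 1-D; the prior over source
    position is a zero-mean Gaussian with SD sigma_prior (illustrative)."""

    def gauss(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    # Likelihood under a common cause: integrate the unknown source
    # position out analytically (closed form for Gaussians).
    var_c = (sigma_vis ** 2 * sigma_prop ** 2
             + sigma_vis ** 2 * sigma_prior ** 2
             + sigma_prop ** 2 * sigma_prior ** 2)
    like_common = math.exp(-((x_vis - x_prop) ** 2 * sigma_prior ** 2
                             + x_vis ** 2 * sigma_prop ** 2
                             + x_prop ** 2 * sigma_vis ** 2)
                           / (2 * var_c)) / (2 * math.pi * math.sqrt(var_c))

    # Likelihood under two independent causes: each signal is explained
    # by its own source drawn from the prior.
    like_indep = (gauss(x_vis, 0.0, sigma_vis ** 2 + sigma_prior ** 2)
                  * gauss(x_prop, 0.0, sigma_prop ** 2 + sigma_prior ** 2))

    # Bayes' rule over the two causal structures.
    num = like_common * p_common
    return num / (num + like_indep * (1 - p_common))
```

On this account, coincident fake and real hand positions yield a high common-cause posterior (the fake hand is "accepted"), while a large spatial discrepancy—as when the fake hand is placed outside perihand space—drives the posterior toward zero.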

Plasticity and Development of Own-Body Perception

Inherent to many of the models already discussed is that body-part perception changes over time (Kalckert & Ehrsson, 2017; Keenaghan et al., 2020; Tsakiris & Haggard, 2005). This is consistent with predictive coding models of body ownership where one might gradually adapt to a nonhuman body part. Curiously, after exposure to a regular rubber-hand illusion, participants could embody a simple piece of card (Hohwy & Paton, 2010). Similarly, congruent visuotactile stimulation with another’s face can result in changes in the perception of one’s own face (Tajadura-Jiménez et al., 2012), and prolonged experience can allow control over nonhuman bodies (Won et al., 2015).

Longer timescales and more natural changes to body perception occur during childhood development. As the body grows, the child must necessarily learn to accept it as their own and update their expectations accordingly. Newborn monkeys have detailed somatotopic maps, with clear differentiation of function, as seen in the adult brain (Krubitzer & Kaas, 1988). Similarly, recent neuroimaging studies reveal basic somatosensory maps in the human infant brain (Marshall & Meltzoff, 2015). Yet a substantial body of work comparing acquired and congenital limb loss also reveals plasticity in body representation. (Readers interested in the potential maladaptive consequences of this, such as phantom limb pain, are referred to Flor et al., 1995, and Makin et al., 2015.) Notably, plasticity appears to be highest in childhood. Consider the area of somatosensory cortex that would usually process inputs from a missing hand. In individuals with congenital limb loss, this area responds to touches to a variety of other effectors, such as the foot or mouth (Hahamy et al., 2017), which may be used to compensate for the loss of the hand. In contrast, after acquired limb loss, the area persists in representing the hand: 15 years after amputation, an individual can imagine moving phantom fingers, with each topographically mapped in the cortex as they were before amputation (Kikkert et al., 2016). This strongly suggests that own-body representation is to some extent malleable in the early years but becomes set later in life.

In support of this, investigations into own-body perception in infancy and childhood reveal early competencies followed by lengthy processes of development. The basic ability to localize a touch to the body is present in early infancy (DiMercurio et al., 2018; Leed et al., 2019), but touches are initially localized only to the body part in isolation, without a reference to the external visual world. The ability rapidly develops, and by 4 months, infants show a crossed-feet deficit (Figure 2; Begum Ali et al., 2015). That is, when a touch is applied to the foot, the infant is slower to respond (e.g., move the touched foot) when the feet are crossed at the midline compared to when they are in the usual position. The crossed-feet deficit indicates that infants automatically expect touch to occur in the usual visual reference frame and incur a cost if it must be located with reference to the body only. It is only by 10 months of age that infants are more flexibly able to code touches relative to the body (Bremner et al., 2008; Rigato et al., 2014). Patterns of response in blind participants indicate that visual experience in childhood is necessary for this tactile remapping to occur (Röder et al., 2007).

Figure 2. A touch can always be localized in a body-centered frame of reference, but its relation to an external frame of reference may change with posture. With the feet in the usual uncrossed posture (left panel), the baby is quick to localize a touch to the right foot (blue circle), which rests in the usual right side of space. In the crossed posture, however, the right foot rests in the left side of space, an atypical position with regard to external space. In this case, the baby sometimes responds with the wrong foot.

Another early competency is that infants can make the kinds of perceptual discriminations between synchronous and asynchronous multisensory information that would facilitate own-body perception. Thus, 3-month-old infants preferentially look to a screen showing legs that move out of temporal synchrony with their own, compared to synchronous movement (Rochat & Morgan, 1995), while 4-month-olds preferentially look to colocated visual and tactile targets (Begum Ali et al., 2021), and 10-month-olds look to legs stroked in temporal and spatial synchrony with their own rather than an asynchronous comparison (Zmyj et al., 2011). In addition to their sensitivity to these visuotactile and visuomotor cues, infants are sensitive to cardiovisual synchrony: a display moving asynchronously with the infant’s heartbeat elicits increased looking time in comparison to a synchronous stimulus (Maister et al., 2017). There is, therefore, an early sensitivity to multisensory synchrony. The resolution of these perceptual systems improves with age, as measured by the “temporal binding window” within which multisensory events are perceptually bound together. Such a window generally narrows with age (Hillock-Dunn & Wallace, 2012). For visual and tactile modalities together, 7-year-olds perceive visuotactile asynchronies of almost 400 ms to be simultaneous on 50% of trials, while for 11-year-olds or adults, the window is closer to 220 ms (Chen et al., 2018).
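As a toy illustration of how such a window is operationalized: simultaneity judgments are typically modeled as a smooth function of asynchrony, and the window is read off as the asynchrony at which "simultaneous" is reported on 50% of trials. The Gaussian shape and width parameter below are assumptions for illustration, not fits to the data of Chen et al. (2018).

```python
import math


def p_simultaneous(asynchrony_ms, width_ms):
    """Probability of reporting 'simultaneous' as a Gaussian-shaped
    function of visuotactile asynchrony. width_ms sets the SD of the
    curve (illustrative model, not a fit to published data)."""
    return math.exp(-(asynchrony_ms ** 2) / (2 * width_ms ** 2))


def binding_window(width_ms):
    """Asynchrony judged simultaneous on 50% of trials: one common
    operational definition of the temporal binding window."""
    # Solve exp(-a^2 / (2 w^2)) = 0.5  =>  a = w * sqrt(2 ln 2)
    return width_ms * math.sqrt(2 * math.log(2))
```

Under this toy model, a wider curve directly yields a wider 50% window, so the developmental narrowing reported above (roughly 400 ms at age 7 versus 220 ms in adults) corresponds to a shrinking width parameter.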

These perceptual capacities undoubtedly serve as an early basis of own-body perception. Mirroring the results found in studies of adults, infants’ processing of multisensory information also appears to be gated by prior constraints. Thus, newborns look to a face stroked in synchrony with their own only if it is upright; older infants look to visuotactile or visuomotor events on legs that resemble their own, but not those that are wooden (Rochat & Morgan, 1995; Zmyj et al., 2011) or mirror-reversed (Rochat & Morgan, 1995). Yet these perceptual processes do not speak to what is felt as a part of one’s own body. Indeed, in the preferential looking paradigms above, the legs displayed on multiple distant, flat screens are unlikely to be felt as part of one’s body (Bahrick, 2013; Bremner & Cowie, 2013). Own-body perception may be studied only when it can be reported verbally or when it is possible to take proxy measures, such as proprioceptive drift or neuroimaging signals.

In this vein, work using the traditional rubber-hand illusion with children reveals a longer period of development following the early competencies of infancy. At age 4 years, visuotactile (Cowie et al., 2013) or visuomotor (Dewe et al., 2021) synchrony drives a sense of touch on the fake hand and a sense of ownership over it. Yet there is an additional, parallel contribution of visual capture (Filippetti & Crucianelli, 2019), which is higher in 4- to 9-year-olds than in older children or adults (Cowie et al., 2016) and specifically feeds into the sense of hand location as indexed by proprioceptive drift. As in adults, children’s embodiment of a hand is constrained by comparisons with current body posture (Gottwald et al., 2021). In the case of the whole body, children’s ownership ratings become more sensitive to visuotactile synchrony with age (Cowie et al., 2018). Understanding the development of these processes may further our understanding of neurodevelopmental disorders, such as autism spectrum disorder, where a larger temporal binding window may contribute to difficulties in body representation (Ropar et al., 2018).

Applications of Understanding Own-Body Perception

An understanding of own-body representation may facilitate work in several applied areas, including tool and prosthetic use, as well as embodied virtual environments. Each is briefly considered in turn here.

One can feel an intimate degree of control over a tool without feeling that it is in fact part of one's own body. During tool use, attention and sensory processing may become focused toward the end of the tool for effective use or navigation around the environment. For example, visual flashes presented at the end of a long tool are processed more quickly than those presented in external space or along the length of the tool (Holmes, 2012; although see Miller et al., 2018). Even after tool use, movements are made as if the tool were still in hand: for example, after using a rake, free-hand reaches show a lower peak velocity that is reached later (Cardinali et al., 2009). Thus, experience with a tool may temporarily alter one's actions and sensory processing without engendering a feeling of body ownership or altering body representation. This issue is pertinent when considering prosthetic limbs. New generations of myoelectric prostheses offer individuals with congenital or acquired limb loss the chance to have two arms again. Yet even in heavy users, the prostheses occupy a representational space of their own and are represented neither as body parts nor as tools (Maimon-Mor & Makin, 2020). Tools and prostheses therefore demonstrate the complex nature of own-body perception and the need to consider the many ways in which one may feel a strong sense of attentional pull or control over an object without fully perceiving it as part of one's own body.

Much recent work has studied own-body perception using immersive, dynamic virtual environments in which users are able to embody a variety of human bodies, varying in socially relevant dimensions. In this way, body ownership of an illusory body can be used as a foundation for social psychological experiments. Work has shown that users tend to behave like their avatar ("the Proteus effect"; Yee & Bailenson, 2007), and that such effects may persist after the virtual reality experience. Thus, embodying an avatar with a different skin tone can change racial biases (Maister et al., 2015; Peck et al., 2013); embodying a small, child-sized body can shift age-related attitudes (Banakou et al., 2013); and embodying a counsellor can change users' attitudes toward their own psychological state (Maister et al., 2015). Likewise, users embodying a cow or coral (Ahn et al., 2016) have reported a change in attitudes toward animal rights or environmental issues. Altering one's own body representation in these ways can have potentially profound influences on one's social and psychological state. Understanding the time course over which these effects persist or fade is again crucial for grasping the broad-reaching consequences of own-body perception.

While this article focuses on whether one accepts a body as one’s own, there are of course important affective aspects of body perception. A degree of dissatisfaction with one’s own body is common, and in extreme cases it can lead to eating disorders (Stice & Shaw, 2002). There is growing evidence that these issues may be worsened by media exposure (Boothroyd et al., 2020). Interested readers are referred to several excellent reviews on this topic (Alleva & Tylka, 2021; Guest et al., 2019; Holland & Tiggemann, 2016).


Own-body perception is a complex system, underpinned by interactions between multisensory information and prior knowledge regarding the body. The neural and behavioral characteristics of some aspects are well known, but higher-level properties remain to be understood.