Printed from Oxford Research Encyclopedias, Psychology. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 08 February 2023

The Psychology of Hearing Loss

  • Christopher J. Plack, University of Manchester
  • Hannah H. Guest, University of Manchester


The psychology of hearing loss brings together many different subdisciplines of psychology, including neurophysiology, perception, cognition, and mental health. Hearing loss is defined clinically in terms of pure-tone audiometric thresholds: the lowest sound pressure levels that an individual can detect when listening for pure tones at various frequencies. Audiometric thresholds can be elevated by damage to the sensitive hair cells of the cochlea (the hearing part of the inner ear) caused by aging, ototoxic drugs, noise exposure, or disease. This damage can also cause reductions in frequency selectivity (the ability of the ear to separate out the different frequency components of sounds) and abnormally rapid growth of loudness with sound level. However, hearing loss is a heterogeneous condition and audiometric thresholds are relatively insensitive to many of the disorders that affect real-world listening ability. Hair cell loss and damage to the auditory nerve can occur before audiometric thresholds are affected. Dysfunction of neurons in the auditory brainstem as a consequence of aging is associated with deficits in processing the rapid temporal fluctuations in sounds, causing difficulties in sound localization and in speech and music perception. The impact of hearing loss on an individual can be profound and includes problems in communication (particularly in noisy environments), social isolation, and depression. Hearing loss may also be an important contributor to age-related cognitive decline and dementia.


  • Cognitive Psychology/Neuroscience


It is estimated that hearing loss affects more than 1.5 billion people globally, particularly older adults (World Health Organization [WHO], 2021). This statistic refers to clinical hearing loss, reflecting reduced sensitivity to pure tones. However, as shall be seen, clinical hearing tests tell only part of the story. Many people with clinically normal hearing have listening difficulties, sometimes due to auditory dysfunction that is undetected by standard clinical tests. Even for those with a diagnosed hearing loss, the clinical test results are often a poor predictor of listening ability in real-world situations. This article will consider the physiological bases, causes, and often far-reaching consequences of hearing loss across its different manifestations.

What is Hearing Loss?

Hearing loss can be defined broadly as a reduction in the ability to perceive sounds. However, the clinical definition of hearing loss refers to a more specific reduction in the ability to detect very simple sounds—pure tones—presented at low sound levels. If the sound level required to just hear a soft pure tone, in a silent environment, is higher than normal by a certain criterion amount, the individual is said to have a hearing loss. On examination, this may appear a little strange. Clinical diagnosis of visual disorders does not often involve measuring the least intense light flash that can be detected in a completely dark room. Instead, it is common to test visual acuity, the ability to discriminate small features in visual objects that are presented using luminosities well above the absolute threshold for detection. This is a sensible test of “real-world” vision: people are often required to detect or discriminate visual objects based on small differences in features (obvious example: reading). Similarly, real-world hearing often involves detecting small differences in sounds (for example, speech sounds) presented well above the threshold level for detection. Yet the standard clinical test of hearing, pure-tone audiometry, does not measure this directly. For the purposes of this article, hearing loss is defined in the broad sense, to include both clinical hearing loss and hearing loss that is not detected by standard clinical tests.

Several different types of hearing loss have been defined, and these will be discussed throughout this article. Some of these definitions are precise, some less so. It is common to divide hearing loss broadly into conductive hearing loss, caused by a reduction in the efficiency of transmission of sound to the cochlea in the inner ear, and sensorineural hearing loss, caused by dysfunction of the cochlea (for example, due to damage to the sensory hair cells) or the auditory neural pathways. Sensorineural hearing loss, the most prevalent type in adults, can affect the ability to detect quiet sounds, as reflected in elevated audiometric thresholds. Importantly, damage to the cochlea or the auditory nervous system also affects the processing of sounds. Hence, sensorineural hearing loss is not equivalent to a simple attenuation and cannot be fully corrected by simple amplification using a hearing aid. Sensorineural hearing loss is the main focus of this article. A mixed hearing loss is simply a combination of conductive and sensorineural loss.

Pure-Tone Audiometry

Most reporting in this area—for example, with respect to prevalence, risk factors, or relation to other disorders—has defined hearing loss with reference to the standard clinical measure, so it is worth spending some time explaining what this is. The standard clinical hearing test, pure-tone audiometry, involves presenting pure tones (single sinusoidal sound waves) at various frequencies over headphones in a soundproof booth. The audiologist varies the sound level of each pure tone until the patient can only just hear it. The audiogram is a plot of the lowest detectable sound level as a function of the frequency of the pure tone, measured separately for each ear. Normal hearing is defined as 0 dB HL (hearing level), and thresholds are usually plotted with increasing hearing loss in the downward direction. Hearing loss is often categorized into different ranges of threshold elevation; for example, mild (21–40 dB HL), moderate (41–70 dB HL), severe (71–95 dB HL), and profound (above 95 dB HL) (British Society of Audiology, 2018). These categories can refer to hearing loss at individual frequencies but can also be applied to averages across a range of frequencies (for example, 0.5, 1, 2, and 4 kHz). Figure 1 shows example audiograms for an ear with hearing in the normal range and for ears with different degrees of hearing loss.

Figure 1. Example audiograms for ears with different degrees of hearing loss. The descriptions by each audiogram reflect the spread of hearing thresholds across frequency according to the British Society of Audiology categories of hearing loss.

Source: BSA, 2018.
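These threshold categories can be expressed as a short sketch. The four-frequency average (0.5, 1, 2, and 4 kHz) and the category boundaries follow the text above; the function name and the exact handling of boundary values are illustrative assumptions.

```python
def classify_hearing_loss(thresholds_db_hl):
    """Categorize hearing loss from the average threshold over
    0.5, 1, 2, and 4 kHz, using the British Society of Audiology
    (2018) ranges quoted in the text.

    thresholds_db_hl: dict mapping frequency in Hz to threshold in dB HL.
    """
    pta = sum(thresholds_db_hl[f] for f in (500, 1000, 2000, 4000)) / 4
    if pta <= 20:
        return "normal"
    if pta <= 40:
        return "mild"
    if pta <= 70:
        return "moderate"
    if pta <= 95:
        return "severe"
    return "profound"
```

For example, thresholds of 30, 35, 45, and 50 dB HL at the four frequencies average to 40 dB HL, placing the loss at the top of the "mild" range.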

Despite its widespread use, the clinical audiogram is a blunt instrument and is insensitive to many deficits in auditory function. Subclinical hearing loss is a sensorineural hearing loss that is not revealed by the clinical audiogram. This may include suprathreshold deficits that affect sound processing above hearing threshold, such as the ability to identify sounds or to separate sounds of interest from background sounds.

Subclinical hearing loss also includes raised audiometric thresholds at frequencies that are not currently included in the standard clinical audiogram. The standard clinical range is considered to be 0.25 kHz to 8 kHz. However, human hearing sensitivity, at least for children and younger adults, extends to 20 kHz. Audiometry in the “extended high-frequency” range, above 8 kHz, is not currently routine, but may well be in the future. In particular, it is recognized that the first signs of hearing loss are often observed in the extended high-frequency range.

Conductive Hearing Loss

Figure 2 shows the anatomy of the peripheral auditory system. Sound waves enter the ear canal (part of the outer ear) and cause the eardrum (or tympanic membrane) to vibrate. These vibrations are carried to the cochlea (part of the inner ear) via three tiny bones in the middle ear: the malleus, incus, and stapes (collectively called the ossicles). This system ensures that the vibrations are transmitted efficiently into the fluid-filled cochlea, by increasing the pressure at the oval window in the cochlea to which the stapes attaches. Small muscles are attached to the malleus and stapes, and these muscles contract reflexively at high sound levels (above about 75 dB SPL), increasing the stiffness of the chain of ossicles and reducing the magnitude of the vibrations transmitted to the cochlea (particularly for frequencies below 1 kHz).

Figure 2. The peripheral auditory system.

Conductive hearing loss can be caused by obstructions in the ear canal (commonly earwax), damage to the eardrum, or disruption of the transmission from the eardrum to the cochlea via the ossicles. Conductive hearing loss is associated, unsurprisingly, with an elevation in audiometric thresholds. Children frequently suffer from otitis media (inflammation of the middle ear), which is associated with a build-up of fluid in the middle ear, reducing transmission. Although temporary, hearing loss due to otitis media may affect language development (Brennan-Jones et al., 2020). More common in adults than children is otosclerosis: an abnormal bone growth around, or onto, the stapes. This can reduce the movement of the stapes and affect sound transmission. Most people affected first notice hearing problems when they are in their 20s or 30s and clinical otosclerosis occurs twice as often in females as in males (Schrauwen & van Camp, 2010).

Cochlear Hearing Loss

The cochlea is where acoustic vibrations are transduced into electrical neural impulses. It is essentially a thin fluid-filled tube curled up into a spiral, divided into three compartments by two membranes that run along its length: the basilar membrane and Reissner’s membrane (Figure 3). The scala media, between Reissner’s membrane and the basilar membrane, contains a special fluid (endolymph) that has an electric potential (the endocochlear potential) of about 80 mV relative to blood plasma. A structure on the lateral wall of the cochlea, the stria vascularis, pumps positively charged potassium ions into the scala media to maintain the high concentration of potassium and the positive electric potential, both of which are necessary for the functioning of the sensory hair cells (located in the organ of Corti).

Figure 3. A cross section of the cochlea.

The basilar membrane is “tuned,” such that each place on the membrane vibrates most strongly in response to a particular frequency of sound. The basilar membrane near the base of the spiral is thin and stiff and is tuned to high sound frequencies. Toward the apex of the spiral the membrane becomes progressively heavier and less stiff, so that the best frequency of each place on the membrane decreases progressively from base to apex. In this way, the basilar membrane separates out the different frequencies of sounds entering the ear to different places on the membrane (Figure 4). This process is crucial to hearing, allowing us to analyze a sound in terms of its spectrum (the distribution of sound components as a function of frequency), which is important for identifying sounds and for separating out sounds that occur together.

Figure 4. An illustration of how the basilar membrane separates out the different frequency components of complex sounds. A sound wave (left) entering the ear is broken down into its three constituent frequency components, with the lowest frequency component (top right) causing activity near the apex of the cochlea and the highest frequency component (bottom right) causing activity near the base of the cochlea.
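The base-to-apex frequency mapping described above is often summarized by Greenwood's frequency-position function. The sketch below uses the commonly quoted constants for the human cochlea, which are an assumption here rather than values from this article.

```python
def greenwood_frequency(x):
    """Best frequency (Hz) at fractional distance x along the human
    basilar membrane, from apex (x = 0) to base (x = 1), using
    Greenwood's frequency-position function. The constants are a
    commonly quoted human fit, assumed for illustration."""
    return 165.4 * (10 ** (2.1 * x) - 1)
```

The function yields the lowest best frequencies at the apex and roughly 20 kHz at the base, consistent with the upper limit of human hearing mentioned earlier.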

Sitting on top of the basilar membrane are the hair cells, with one row of inner hair cells and about three rows of outer hair cells running along the length of the cochlea (Figure 5). In total, there are about 3,500 inner hair cells and 12,000 outer hair cells in each human cochlea (Møller, 2013). These numbers are tiny compared to the ~120 million photoreceptor cells in each retina. Each hair cell has rows of stereocilia, which look a little like tiny hairs. When the basilar membrane vibrates up and down, the stereocilia move from side to side. When they are bent in one direction, ion channels open in the stereocilia, causing potassium ions to flow into the hair cell from the scala media. This depolarizes the cell, causing the inner hair cells to release neurotransmitter, which in turn produces action potentials in the attached auditory nerve fibers. Because each inner hair cell is located at a particular place on the basilar membrane, activity in each hair cell, and in each auditory nerve fiber, is also tuned to a particular frequency, so that the frequency decomposition performed by the basilar membrane is represented in the auditory nervous system as a place code (different neurons carry information about different frequencies).

Figure 5. A cross section of the organ of Corti.

In contrast, the outer hair cells are not thought to be primarily responsible for transmitting signals to the brain. Instead, depolarization of the outer hair cells causes them to change their length in synchrony with the stimulating sound, mechanically amplifying the motion of the basilar membrane. This greatly increases the sensitivity of the ear (by about 50 dB), and also greatly enhances the frequency tuning of the basilar membrane, so that it is much better at separating components with similar frequencies (better frequency selectivity). The outer hair cells receive descending neural input from the brainstem via efferent fibers in the olivocochlear bundle. Activity in the olivocochlear bundle suppresses the motion of the basilar membrane.

Cochlear hearing loss refers to hearing loss due to damage to the cochlea, including the hair cells and the auditory nerve. Cochlear hearing loss is a type of sensorineural hearing loss.

Damage to Hair Cells

The main underlying physiological cause of clinical hearing loss (threshold elevation) is damage to, or dysfunction of, the sensitive hair cells in the cochlea. The outer hair cells are especially vulnerable. These cells amplify the motion of the basilar membrane and enhance its tuning. Hence, damage to the outer hair cells not only reduces sensitivity to soft sounds but also reduces our ability to separate out the different frequency components of sounds (Figure 6). Hearing aids can restore the loss of sensitivity to some extent, but cannot easily correct for the loss of frequency selectivity. One reason why hearing aids can work well in quiet environments, but are not very useful in environments with a lot of background noise, is that they don’t provide much help to the ear in separating sounds from different sound sources. One of the main difficulties experienced by people with sensorineural hearing loss is hearing in such environments.

Figure 6. A simulation of the response of the basilar membrane to the utterance “baa,” for an ear with normal hearing and for an ear with outer hair cell loss. Center frequency refers to the best frequency of each place on the basilar membrane. Notice that, for the damaged ear, the bump corresponding to the third formant (F3) is both reduced in level, and less prominent, corresponding to a reduction in audibility and a loss of frequency selectivity respectively.

Additionally, because outer hair cells selectively amplify low-level sounds, they are responsible for the large “dynamic range” of normal human hearing. The dynamic range, spanning the levels between a sound at threshold and a sound that is uncomfortably loud, is about 100 dB (a factor of ten thousand million in terms of sound power). People who have normal hearing can hear soft sounds because they are amplified by the outer hair cells, but high-level sounds are not amplified and are not uncomfortably loud. Loss of outer hair cells reduces sensitivity to soft sounds, but doesn’t affect high-level sounds, thereby increasing the growth of loudness (the perceptual correlate of sound level) with level. This abnormally rapid growth of loudness with level for people with hearing loss is termed loudness recruitment (Figure 7). Loudness recruitment is compensated to some extent in modern hearing aids through the use of automatic gain control, which applies the greatest amplification to lower-level sounds, mimicking the effects of the outer hair cells.

Figure 7. A schematic illustration of loudness recruitment, showing how loudness (the perceived magnitude of a sound) increases more rapidly with sound level for an ear with a cochlear hearing loss affecting the outer hair cells.

Source: Based on Moore et al. (1992).
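The automatic gain control described above can be sketched as a simple input-output rule: full gain below a "knee" level, with gain shrinking above it so that output grows more slowly with input. All parameter values here (knee, maximum gain, compression ratio) are hypothetical, chosen only to show how lower-level sounds receive more amplification, mimicking the outer hair cells.

```python
def wdrc_gain_db(input_level_db, knee_db=45.0, max_gain_db=30.0, ratio=3.0):
    """Wide-dynamic-range compression sketch: full gain below the knee;
    above it, output grows by only 1/ratio dB per input dB, so gain
    falls as level rises (hypothetical parameter values)."""
    if input_level_db <= knee_db:
        return max_gain_db
    return max_gain_db - (input_level_db - knee_db) * (1.0 - 1.0 / ratio)
```

With these illustrative settings, a 30 dB input receives the full 30 dB of gain, while a 90 dB input receives essentially none, compressing a wide range of input levels into the listener's reduced dynamic range.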

Substantial damage to inner hair cells can elevate hearing thresholds, and loss of inner cells in an entire region of the cochlea (which respond to a particular range of sound frequencies) means that the region becomes insensitive to sounds. These “dead regions” are reflected by elevated hearing thresholds over a particular frequency range in the audiogram, but specialized tests are required for accurate diagnosis (Moore, 2004).

In mammals, hair cells do not regenerate, so once lost, function is permanently affected. Outer hair cells seem to be particularly susceptible to damage, with the high-frequency regions of the cochlea usually affected first.

Causes of Cochlear Damage

Aging


Hair cell loss and dysfunction progress with age. Presbycusis refers to sensorineural hearing loss due to aging. The global prevalence of moderate or higher grades of hearing loss is 2.3% at 30–34 years, 12.7% at 60–64 years, and 58% at 90–94 years (WHO, 2021). In the audiogram, aging tends to affect the extended high-frequency range first (see Figure 8), although substantial loss of outer hair cells may also occur in apical (low-frequency) cochlear regions (Wu et al., 2021). The effects of aging per se are difficult to disentangle from the combined effects of lifetime exposure to noise, disease, and ototoxic drugs. It is possible that cumulative noise exposure is responsible, at least in part, for the greater loss observed at high frequencies (Wu et al., 2021). The stria vascularis, which provides the electric potential in the scala media of the cochlea needed to drive transduction in the hair cells, becomes less effective with increasing age (Schmiedt, 2010), although this may be related at least in part to potentially modifiable factors such as disease rather than age per se. Degeneration of the stria vascularis reduces the sensitivity of the hair cells even if the hair cells themselves are undamaged. Presbycusis has been classified into different “audiometric phenotypes,” including “metabolic” (losses across frequencies, with a somewhat greater loss at higher frequencies) and “sensory” (losses at higher frequencies, with little loss at low frequencies; Dubno et al., 2013). Metabolic loss is associated with degeneration of the stria vascularis and is more prevalent in females, while sensory loss is associated with noise exposure and is more prevalent in males. Older individuals tend to have either metabolic losses or combined metabolic and sensory losses.

Figure 8. Mean audiograms for three age groups.

Source: Data from Carcagno and Plack (2020).
Ototoxic Drugs

Ototoxic drugs are drugs that are damaging to the ear. These include many commonly prescribed medications, for example, aminoglycoside antibiotics, platinum-containing chemotherapy drugs (including cisplatin and carboplatin), loop diuretics, and nonsteroidal anti-inflammatory drugs (Zeng & Djalilian, 2010). The sites of damage vary, but may include damage to the hair cells and to the auditory nerve.

Noise Exposure

Exposure to loud noise is the main preventable cause of hearing loss. A century or so ago, occupational exposure in noisy factories was the main problem. While occupational exposure remains a concern, in the early 21st century recreational exposure, through loud music, loud sporting events, and the use of firearms, is the main issue, at least in countries with regulations for occupational exposure. Worldwide, the WHO estimates that 1.1 billion young people could be at risk of hearing loss due to listening to music at high levels for long periods of time (WHO, 2021). As an example, in 2011 the average exposure in bars and clubs in Manchester, United Kingdom, was recorded to be almost 100 dBA (Howgate & Plack, 2011). This is 15 dB above (or about 30 times the intensity of) what is considered a safe average exposure over an 8-hour period. Portable audio devices (e.g., phones) are also a cause for concern. The WHO estimates that over 50% of people aged 12–35 years may be risking their hearing by listening to music over such devices (WHO, 2021).

The damage done is a product of the intensity of exposure and duration of exposure—intense sounds for short durations can cause as much damage as more moderate sounds for longer durations. This “equal-energy” rule (exposures with equal energy, integrated over time, are roughly equally damaging) is the basis for most noise exposure regulations.
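The equal-energy rule can be made concrete with a little arithmetic. Intensity scales as 10^(dB/10), so each 3 dB increase roughly doubles the energy delivered per unit time and halves the safe exposure duration. The 85 dBA / 8-hour reference used below is a typical occupational limit, assumed here; it matches the earlier observation that 100 dBA is 15 dB (about 30 times the intensity) above a safe 8-hour average.

```python
def intensity_ratio(delta_db):
    """Convert a level difference in decibels to a sound-intensity ratio."""
    return 10 ** (delta_db / 10)

def max_safe_hours(level_dba, safe_level_dba=85.0, safe_hours=8.0):
    """Equal-energy rule: permitted exposure time shrinks in proportion
    to the intensity increase above the safe level. The reference level
    and duration are assumptions typical of occupational regulations."""
    return safe_hours / intensity_ratio(level_dba - safe_level_dba)
```

At the ~100 dBA measured in bars and clubs, the rule permits only about a quarter of an hour: 8 / 10^1.5 ≈ 0.25 hours, or roughly 15 minutes.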

Noise exposure damages the stereocilia (“hairs”) on the hair cells which are necessary for transduction, and also causes damage through metabolic overstimulation. The outer hair cells in the high-frequency regions of the cochlea are particularly vulnerable (Wu et al., 2021). Damage can be temporary (associated with dulled hearing and tinnitus the next day after a loud concert, for example), but permanent loss can accumulate with repeated exposure. A “notch” in the audiogram with elevated thresholds in the region of 4 kHz is characteristic of noise-induced hearing loss, as this is the most sensitive frequency region and hence the most easily damaged. Nearly one in four adults in the United States have audiometric notches, with the highest prevalence amongst males and those with a history of occupational noise exposure (Carroll et al., 2017).

Disease


Several diseases are associated with cochlear hearing loss, including meningitis, mumps, autoimmune disorders, cardiovascular disease, and diabetes (Cunningham & Tucci, 2017; WHO, 2021). Ménière’s disease is a disorder of the inner ear and is associated with vertigo, hearing loss, and tinnitus. In contrast to most hearing disorders, the hearing loss in Ménière’s disease is most apparent at low frequencies.

Congenital Hearing Loss

A congenital hearing loss is a hearing loss that is present from birth. A properly functioning cochlea depends on many genes, and mutations in these genes can cause loss or dysfunction of the hair cells (Richardson et al., 2011). Maternal infection or problems during birth (for example, oxygen deprivation) can also cause hearing loss.

Subclinical Hair Cell Damage and Extended High-Frequency Hearing Loss

Despite the association between hair cell damage and elevated audiometric thresholds, the audiogram is remarkably insensitive to hair cell damage, particularly inner hair cell loss. In the chinchilla, 80% of inner hair cells can be lost without affecting threshold sensitivity (Lobarinas et al., 2013), provided that the loss is spread evenly throughout the cochlea. In the apex of the cochlea, 20%–40% of outer hair cells can be lost without threshold elevation (Bohne & Clark, 1982; Clark et al., 1987).

Furthermore, the first signs of hair cell damage often occur in the extended high-frequency range of the cochlea (above 8 kHz), above the range normally tested in the clinic. The effects of aging (Jilek et al., 2014), ototoxic drugs such as chemotherapy drugs (Konrad-Martin et al., 2010), and possibly noise exposure (Le Prell et al., 2013) are first evident in the extended high-frequency range of the audiogram, with effects spreading down to the clinical range as the damage becomes more severe (see Figure 8). Hence, audiometry in the extended high-frequency region may provide a useful early warning that the ear is being damaged.

Extended high-frequency hearing is important for sound localization, in particular, resolving confusions between whether the source sound is in front of, or behind, the listener (“front-back confusions”) (Best et al., 2005). Spectral changes at high frequencies resulting from the angle of incidence of the sound wave on the pinna (the external part of the ear) provide cues to the direction of the sound source (like a “directional signature” in the spectrum). Information in the extended high-frequency region may also be important for speech perception. Hence, a loss in the extended high-frequency region, which is currently regarded as subclinical, may in future become an important diagnostic target for understanding the listening difficulties experienced by an individual.

Cochlear Synaptopathy

The nerve fibers associated with lost cochlear hair cells will naturally degenerate over time. However, there may also be a loss of the connections (synapses) between intact inner hair cells and auditory nerve fibers (Figure 9). This disorder, termed cochlear synaptopathy, is not thought to cause an increase in audiometric thresholds (and hence is a form of subclinical loss). Instead, synaptopathy seems particularly to affect the group of auditory nerve fibers that code information at moderate-to-high levels, and hence may affect the processing of suprathreshold sounds. Synaptopathy as a result of aging has been clearly demonstrated by histological studies in rodent models (Kujawa & Liberman, 2015; Sergeyenko et al., 2013) and in human temporal bones postmortem (Viana et al., 2015; Wu et al., 2018, 2021).

Figure 9. An illustration of how aging or noise exposure results in cochlear synaptopathy and neural loss.

Intense noise exposure is also an important cause of cochlear synaptopathy in rodent models (Kujawa & Liberman, 2009, 2015), and considerable loss of synapses (more than 50% reduction) may occur in the absence of significant hair cell loss or permanent threshold elevation. Noise-induced synaptopathy has been demonstrated in a primate model, although the sound levels required were higher than in the rodent models (Valero et al., 2017). Since synaptopathy is not revealed by the clinical audiogram, there is some concern that, unbeknownst to science, many young people are damaging their hearing through recreational noise exposure. However, the evidence for noise-induced synaptopathy in young adult humans is mixed, with some studies reporting characteristics consistent with synaptopathy and others not (Bramhall et al., 2019). Studies in living humans rely on proxy measures of synaptopathy (for example, recordings of gross auditory nerve activity via electrodes attached to the head) rather than direct synapse counts, and self-reported measures of lifetime noise exposure, both of which are subject to confounds and measurement error. A histopathological study on human temporal bones has revealed that older people with a history of noise exposure have an increased loss of auditory nerve fibers, consistent with noise-induced synaptopathy (Wu et al., 2021).

The perceptual consequences of cochlear synaptopathy are unclear, although synaptopathy is associated with deficits in neural temporal coding (Parthasarathy & Kujawa, 2018). Synaptopathy is a potential explanation for the listening difficulties and tinnitus (Schaette & McAlpine, 2011) seen in some young people with normal clinical hearing, and may contribute to the listening difficulties that people experience as they age.

Central Deficits

The auditory nerve carries the signal to auditory nuclei in the brainstem, and thereafter to the auditory cortex in the temporal lobe (Figure 10). Several auditory nuclei process the signals in the ascending pathways. For example, the superior olivary complex (or superior olive) contains nuclei that receive input from both ears, and performs some of the computations that are vital for sound localization.

Our understanding of central auditory neural processing is incomplete. However, one important aspect is that auditory nerve fibers, and auditory neurons in the brainstem, tend to synchronize their firing to the fluctuations in the sound waveform. This property, called “phase locking,” arises in part from the fact that the inner hair cells depolarize only when their stereocilia are bent in one direction, which produces synchronization to the individual pressure fluctuations in the sound wave (the “temporal fine structure”). This can occur for sound frequencies up to about 5 kHz. However, neurons will also synchronize to the slower fluctuations in overall sound level (the “envelope”). Temporal neural coding of both fine structure and envelope is important for accurate sound localization (the medial superior olive compares the arrival times of sounds at the two ears with 10 microsecond precision), speech perception, and for separating out sounds that occur together.
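As a rough illustration of the timing cues the medial superior olive works with, Woodworth's spherical-head approximation gives the interaural time difference (ITD) for a distant source. The head radius and speed of sound used below are typical assumed values, not figures from this article.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound_ms=343.0):
    """Woodworth approximation of the interaural time difference for a
    far source at the given azimuth (0 degrees = straight ahead,
    90 = directly to one side). Head radius and sound speed are
    assumed typical values."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_ms) * (theta + math.sin(theta))
```

The maximum ITD, for a source directly to one side, comes out at roughly 650 microseconds, so discriminating nearby source directions indeed demands timing precision on the order of the 10 microseconds mentioned above.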

Figure 10. The auditory brainstem nuclei and the main ascending auditory neural pathways. The arrow in the right-hand panel shows the viewpoint for the main image.

The study of hearing loss has traditionally focused on cochlear function. However, attention is being increasingly drawn to the auditory neural pathways in the brainstem and the cortex as researchers seek a deeper understanding of the causes of listening difficulties. One problem with this endeavor is that it is technically difficult to dissociate, experimentally, the effects of central dysfunction from peripheral dysfunction, particularly if the peripheral dysfunction is subclinical and hard to measure (such as subclinical hair cell loss or cochlear synaptopathy).

“Auditory processing disorder” or “central auditory processing disorder” are broad terms for deficits in the neural processing of sounds in children or adults, which may be manifest by difficulties in discriminating the temporal features of sounds, in comparing sounds at the two ears for sound localization or sound separation, or in speech perception. Possible causes include age-related changes in central neural function, delayed central nervous system maturation, or the effects of neurological disorder or disease (Bamiou et al., 2001). However, the definition of auditory processing disorder and its utility in diagnosis, particularly in children, are controversial (Iliadou et al., 2018; Moore, 2018). Auditory perception is profoundly dependent on cognitive and linguistic abilities and groups around the world disagree on the relevance of these “top-down” processes to auditory processing disorder (Wilson, 2018).

The discussion is perhaps on firmer ground when it comes to the effects of age on temporal coding (Pichora-Fuller, 2020). Precise neural temporal coding declines with age in auditory regions of the brainstem and cortex. In humans, this can be observed in reduced electrophysiological responses (Marmel et al., 2013; Presacco et al., 2016) and in poorer performance on behavioral tasks that involve fine temporal perception, including sound localization (Füllgrabe et al., 2014; Hopkins & Moore, 2011; Moore, 2014, 2016). The decline may be due to a combination of neural loss, myelin degeneration, and reduction in GABA-ergic neural inhibition (Caspary et al., 1995; Cohen et al., 1990; Makary et al., 2011; Wu et al., 2021).

Effects of Hearing Loss on Sound Discrimination

Frequency Selectivity

Damage to the outer hair cells results in a reduced ability to separate out the different frequency components of sounds. This can be observed in masking tasks, in which the participant is required to detect one sound (the signal, typically a pure tone) in the presence of another sound (the masker, typically a pure tone or band of noise). In general, the closer in frequency the masker is to the signal, the higher in level the signal needs to be relative to the masker for it to be detected (the higher the “signal-to-masker” ratio at threshold). By comparing masking for a range of masker frequencies relative to the signal, it is possible to estimate the tuning properties of the basilar membrane at the place tuned to the signal frequency. An example of such an experiment is shown in Figure 11. Each curve shows the level of a narrowband noise masker needed to mask a 1 kHz pure-tone signal as a function of the frequency of the masker. When the masker is close in frequency to the signal, the two sounds are less easily separated, and a lower masker level is needed. The two plots show tuning curves measured from the normal and impaired ears of a single listener with a unilateral cochlear hearing loss. Notice that the curve is broader for the impaired ear, reflecting worse frequency selectivity (the impaired ear is less able to separate the signal and the masker when they differ in frequency). Similar studies have shown that the tuning can be up to four times broader for an ear with cochlear hearing loss compared to a healthy ear (Moore et al., 1999).

Figure 11. Tuning curves measured using forward masking (in other words, the masker preceded the signal) for the normal and impaired ears of a listener with unilateral cochlear hearing loss. The curves show the level of a narrowband noise masker required to mask a 1-kHz pure-tone signal as a function of the frequency of the masker.

Source: Data from Moore and Glasberg (1986).
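The effect of broadened tuning can be sketched with the rounded-exponential (roex) model, a standard descriptive model of the auditory filter in this literature. The sketch below is illustrative only: the equivalent rectangular bandwidth (ERB) values (roughly 130 Hz for a normal ear at 1 kHz, and four times that for an impaired ear) are representative figures, not data from the study shown in Figure 11.

```python
import math

def roex_attenuation_db(fc_hz, f_hz, erb_hz):
    """Attenuation (dB) of a roex(p) auditory filter centred on fc_hz,
    evaluated at frequency f_hz. The sharpness parameter p is set from
    the filter's equivalent rectangular bandwidth: p = 4 * fc / ERB."""
    p = 4.0 * fc_hz / erb_hz
    g = abs(f_hz - fc_hz) / fc_hz          # normalized frequency offset
    w = (1.0 + p * g) * math.exp(-p * g)   # roex(p) power response
    return 10.0 * math.log10(w)

# Signal at 1 kHz; masker centred 300 Hz below it.
fc, masker = 1000.0, 700.0

# Illustrative ERBs: ~130 Hz for a normal ear at 1 kHz, and four times
# broader after outer hair cell damage (cf. Moore et al., 1999).
att_normal = roex_attenuation_db(fc, masker, erb_hz=130.0)
att_impaired = roex_attenuation_db(fc, masker, erb_hz=520.0)

print(f"normal ear:   {att_normal:6.1f} dB")    # masker strongly attenuated
print(f"impaired ear: {att_impaired:6.1f} dB")  # masker barely attenuated
```

Under these illustrative values, a fourfold broadening reduces the filter’s attenuation of the off-frequency masker from roughly 30 dB to under 5 dB, which is consistent with the pattern in Figure 11: away from the signal frequency, the impaired ear needs far less masker level to mask the signal.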

Intensity Discrimination

Sounds are largely characterized by variations in intensity across frequency and across time. Hence, an ability to discriminate the intensities of sound features is important for identification; for example, to determine the positions of the spectral peaks that differentiate vowel sounds. Interestingly, provided that the sounds are presented at the same level relative to the individual’s hearing threshold (which, of course, is elevated for someone with a hearing loss), people with cochlear hearing loss perform as well as, or even better than, people with normal hearing on intensity discrimination tasks (Moore, 1996). A performance improvement may occur because the response of the basilar membrane, and hence loudness, grows more rapidly with level after outer hair cell damage (loudness recruitment). Hence, perceptually, the change in physical intensity may be greater for someone with cochlear loss.
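The consequence of abnormally steep loudness growth can be sketched with Stevens’ power law, under which loudness grows approximately as intensity raised to an exponent of about 0.3 for a healthy ear at moderate levels. The steeper “recruited” exponent used below is purely illustrative, not a measured value.

```python
def loudness_ratio(increment_db, exponent):
    """Ratio of the loudnesses of two sounds differing by increment_db,
    assuming loudness ~ intensity ** exponent (Stevens' power law)."""
    intensity_ratio = 10.0 ** (increment_db / 10.0)
    return intensity_ratio ** exponent

# The same 10-dB physical increment, heard through normal loudness
# growth (exponent ~0.3) and an illustrative recruited growth (0.6).
normal = loudness_ratio(10.0, 0.3)     # about a doubling of loudness
recruited = loudness_ratio(10.0, 0.6)  # about a quadrupling of loudness
print(normal, recruited)
```

The same physical step produces a larger perceptual step for the recruited ear, which may explain why intensity discrimination can be unimpaired, or even slightly better, after cochlear damage.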

There is some evidence that the deterioration in neural temporal coding with age may affect intensity discrimination (Harris et al., 2007). Intensity discrimination declines with age, but only at lower frequencies where temporal cues are thought to be more important.

Temporal Resolution

Temporal resolution refers to the ability to follow rapid fluctuations in a sound over time, an ability vital for speech perception. Temporal resolution can be measured using a gap detection task, in which the listener is required to detect a brief temporal gap in an otherwise continuous sound. When the stimuli used are pure tones, which have a flat temporal envelope, listeners with a cochlear hearing loss often perform little worse than normal-hearing listeners (Moore et al., 1989). However, when the stimulus is a spectrally narrow band of noise, performance is worse for people with cochlear loss. This could be because the inherent fluctuations in the noise are amplified by loudness recruitment, leading to dips that may be confused with the gap (Moore, 1996). Hence, poor performance on some temporal resolution tasks may be due to abnormal loudness perception rather than to poor temporal resolution per se.
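The contrast between the two stimulus types can be demonstrated with a minimal simulation: a pure tone has a perfectly flat envelope, whereas a narrowband noise (approximated here, purely for illustration, as a sum of random-phase sinusoids within a 100-Hz band) shows pronounced inherent envelope dips that can resemble a silent gap.

```python
import cmath
import math
import random

def envelope(components, times):
    """Envelope of a sum of sinusoids, computed as the magnitude of the
    corresponding analytic (complex) signal at each time point.
    components: list of (amplitude, frequency_hz, phase) tuples."""
    return [abs(sum(a * cmath.exp(1j * (2 * math.pi * f * t + ph))
                    for a, f, ph in components))
            for t in times]

def fluctuation(env):
    """(min/mean, max/mean) of an envelope: (1.0, 1.0) means flat."""
    mean = sum(env) / len(env)
    return (min(env) / mean, max(env) / mean)

random.seed(1)
times = [i / 8000.0 for i in range(800)]  # 100 ms sampled at 8 kHz

# Pure tone at 1 kHz: a single component, so the envelope is flat.
tone = [(1.0, 1000.0, 0.0)]

# Narrowband noise: 30 random-phase components within 950-1050 Hz.
noise = [(1.0, random.uniform(950.0, 1050.0),
          random.uniform(0.0, 2.0 * math.pi)) for _ in range(30)]

print(fluctuation(envelope(tone, times)))   # (1.0, 1.0): no dips
print(fluctuation(envelope(noise, times)))  # deep dips and high peaks
```

If loudness recruitment exaggerates the noise’s inherent dips, some of them may become hard to distinguish from the experimenter’s inserted gap, inflating gap detection thresholds for reasons unrelated to temporal resolution itself.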

Gap detection thresholds for low-pass bands of noise (noise with only low-frequency spectral components) increase with age, seemingly independently of audiometric thresholds (Snell, 1997). This may be indicative of neural decline leading to worse temporal resolution.

Pitch Discrimination

Listeners with cochlear hearing loss tend to be worse at discriminating the pitch of both pure tones (sinusoidal sound waves) and complex tones (which consist of a number of pure-tone harmonic components, such as vowel sounds and the sounds made by musical instruments) compared to normal-hearing listeners (Marmel et al., 2013; Moore & Carlyon, 2005). A decline in frequency selectivity due to outer hair cell damage leads to a reduced ability to separate the individual frequency components, which is thought to be important for pitch perception. However, the correlation between frequency discrimination and behavioral measures of frequency selectivity is weak (Moore & Peters, 1992; Tyler et al., 1983). Pitch perception is also thought to depend on neural temporal coding. The reduction in the precision of temporal coding may be an important contributor to the deterioration in pitch discrimination with increasing age (Marmel et al., 2013; Moore & Peters, 1992).

Spatial Hearing

Cochlear hearing loss alone does not seem to be a major predictor of the ability to localize sounds in space (Freigang et al., 2015), provided that the sounds are well above hearing threshold. However, the spectral cues used to determine sound elevation are present at high frequencies (above about 6 kHz) and so performance may be affected if these regions are rendered inaudible by a high-frequency hearing loss (Otte et al., 2013).

Localization performance deteriorates with age, particularly for sounds to the side of the head (Freigang et al., 2015). Localization in the horizontal plane depends in part on precise neural temporal coding, to determine the difference in the time of arrival of the sound at the two ears (interaural time differences). As neural temporal coding declines with age, older adults are worse at discriminating interaural time differences (King et al., 2014).
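The size of the interaural time differences involved can be estimated from Woodworth’s classic spherical-head approximation; the head radius below is a conventional textbook value (about 8.75 cm), not a measurement. Even at the largest azimuths the cue is well under a millisecond, which is why degraded neural temporal coding is so damaging to its use.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (seconds) for a distant source at the
    given azimuth, under Woodworth's spherical-head model:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows from zero straight ahead to a maximum at the side.
for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {woodworth_itd(az) * 1e6:5.0f} us")
```

With these conventional values the maximum ITD, for a source directly to one side, comes out at roughly 650 microseconds; discriminating the far smaller differences produced by small azimuth changes demands submillisecond precision in binaural temporal coding.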

Effects of Hearing Loss on Speech Perception

Speech perception is arguably the most important function of the auditory system for humans. For the majority of people with hearing loss, the most debilitating consequence of the loss is a difficulty in speech communication, particularly in noisy environments.

Speech Perception in Quiet

Speech perception with an ideal signal in a quiet environment may not be greatly affected by hearing loss unless the loss is severe (Baskent, 2006; Nabelek, 1988). The speech signal covers a wide range of frequencies, from vowels at low frequencies to fricatives (such as /s/), which have significant energy above 8 kHz. The speech signal is highly redundant, in that there is much more information than is required for intelligibility. Hence, loss of high-frequency hearing, for example, may not greatly decrease speech perception in quiet. If sensitivity is greatly reduced in a region of the cochlea due to hair cell damage, the speech signal in the associated frequency region may be inaudible. However, this can be corrected to some extent by the amplification provided by a hearing aid.

Speech Perception in Noise

When the signal is degraded, or is masked by background noise, the available information is reduced and speech perception may decrease dramatically for people with hearing loss. Good speech perception is often particularly important in noisy environments, for example at social events or in classrooms. Unfortunately, these are just the sorts of environments that cause difficulties for people with hearing loss, and hearing aids provide limited benefit because noise is amplified along with the speech signal. People with hearing loss can find communication at social events uncomfortable and embarrassing, leading some to avoid such events and potentially become socially isolated. Classroom noise levels are often high, a problem that may be compounded by architectural design that pays little attention to acoustics. Children with normal audiograms may struggle to understand the teacher and their peers in classrooms, and those with hearing loss will struggle even more.

Speech perception in noise depends on peripheral (including cochlear), central auditory, and cognitive factors (Anderson et al., 2013; Humes et al., 2013). The relative influence of deficits in these domains may depend on the nature of the speech-in-noise task, and on whether amplification is provided to minimize the effects of reduced audibility due to hair cell loss (i.e., to raise the level of the speech in particular frequency regions so that it is above the individual’s threshold of hearing). Even if the majority of the speech signal is above threshold, a reduction in frequency selectivity due to outer hair cell loss may lead to difficulty separating the speech from the background (Baer & Moore, 1993). However, speech-in-noise perception is often not well predicted by the clinical audiogram (Anderson et al., 2013; Vermiglio et al., 2012), which reflects hair cell loss. Subclinical neural damage, particularly auditory nerve and central auditory dysfunction due to factors such as aging, may contribute. The decline in neural temporal coding with age leads to a degraded representation of the rapid temporal fluctuations in speech (Pichora-Fuller, 2020). Temporal coding in the brainstem measured using electrophysiological techniques has been associated with speech-in-noise performance (Anderson et al., 2013). Cognitive factors (in particular, working memory) may also be important in speech recognition (Akeroyd, 2008; Yeend et al., 2019), although a potential confound in some of these studies is that hearing loss may directly affect performance in cognitive tests if the materials are presented aurally (Füllgrabe, 2020).

Even some young adults with clinically normal hearing seek help from an audiologist due to difficulties in speech understanding. Impaired speech-in-noise ability in the presence of a normal audiogram has been given a number of names over the years, including “obscure auditory dysfunction” (Saunders & Haggard, 1989), “auditory processing disorder” (Wilson, 2018), and “King-Kopetzky syndrome” (Hinchcliffe, 1992), the last after the authors of early reports of the condition. The cause of this condition is uncertain but could be a combination of auditory and psychological factors (Zhao & Stephens, 2007). Cochlear synaptopathy has been suggested as a possible contributor, although the evidence is inconclusive (Bramhall et al., 2019; Guest et al., 2018).

Importance of Extended High-Frequency Hearing

Since the main information in the speech signal is below 8 kHz, it was long thought that a hearing loss in the extended high-frequency range was unimportant for speech communication. However, there is increasing evidence that extended high-frequency information may be useful in real-world environments. In particular, when listening to a talker in a room with other talkers (for example, at a party or in a busy bar or restaurant), the background speech has reduced energy at very high frequencies, since high-frequency sound is more directional and the talkers in the background are usually facing away from the listener. This means that, for the talker of interest, who is facing the listener, the extended high-frequency energy may be less masked, and thereby important for communication (Monson et al., 2019).

There is growing evidence that individuals with an extended high-frequency hearing loss, but normal hearing in the clinical range, have impaired speech-in-noise perception (Hunter et al., 2020; Yeend et al., 2019). This may be because of the direct importance of speech information in the extended high-frequency region, as described above. However, it is also possible that an extended high-frequency hearing loss is a marker for subclinical damage in the standard clinical range, up to 8 kHz. In other words, people with an extended high-frequency hearing loss may also have subclinical hair cell or nerve damage within the standard clinical range and it is this subclinical damage that is the main cause of their speech perception difficulties (Hunter et al., 2020).

Effects of Hearing Loss on Music Perception

It is ironic that music lovers and professional musicians—who rely on their hearing for their enjoyment of music and their ability to create music—often expose themselves to music sound levels that cause damage to the very organ on which they depend for these pleasures. It is well known that amplified popular music can reach very high levels. It is perhaps less well known that classical musicians are also often exposed to high noise levels, with a risk of noise-induced hearing loss (Sliwinska-Kowalska & Davis, 2012; Toppila et al., 2011).

Similar to speech-in-noise deficits, cochlear hearing loss may cause difficulties in perceiving the parts played by individual instruments in a complex musical scene, due to impaired frequency selectivity and the inaudibility of high frequencies. Both cochlear loss and central neural dysfunction may contribute to deficits in this “musical scene analysis” (Siedenburg et al., 2020). Additionally, pitch perception is a major component of music appreciation, being necessary for the perception of melody and harmony. Loss of frequency selectivity due to outer hair cell damage may impair pitch perception by making it harder to separate out the harmonics of musical notes. However, perhaps more important for older listeners is the decline in neural temporal coding with age. This means that the neural representation of the periodic fluctuations in sounds that are thought to underlie musical pitch, and therefore melody and harmony perception, is less precise for older listeners (Figure 12). These age-related changes have been associated with deficits in musical harmony perception (Bones & Plack, 2015).

Figure 12. The average spectrum of the far-field steady state electrophysiological response to a two-note chord (perfect 5th interval), for groups of younger and older listeners. This response probably originated mainly from the rostral brainstem (region of inferior colliculus). The black circles show the frequencies of the harmonics in the stimulus. The phase-locked neural response to the harmonics is stronger for the younger group, providing evidence for stronger neural temporal coding.

Source: Data from Bones and Plack (2015).

Tinnitus and Hyperacusis

Tinnitus

Tinnitus is the perception of sound in the absence of any external sound source. Objective tinnitus is associated with an actual sound that can be heard by another person at the entrance to the ear canal, and is usually caused by vascular abnormalities near the ear. Subjective tinnitus (referred to simply as “tinnitus” from now on) is much more common and is caused by an abnormality, or by abnormalities, in the auditory pathway. Tinnitus is often characterized by a high-frequency hiss or whistle, particularly audible in quiet environments. Most people have experienced temporary tinnitus, for example after listening to loud music. For about 5%–15% of people, tinnitus is permanent and constant, and for 1%–3% of people it is a serious condition, leading to sleepless nights, communication problems, and a dramatic reduction in quality of life (Eggermont & Roberts, 2004).

Tinnitus is often associated with hearing loss, but many people with clinically normal hearing also experience tinnitus. Tinnitus with a normal audiogram has been associated with high levels of noise exposure (Guest et al., 2017) and may be related to noise-induced cochlear synaptopathy, although the evidence is mixed (Guest et al., 2017; Schaette & McAlpine, 2011).

The neurophysiological basis of tinnitus is unclear, but some authors suggest that tinnitus is due to increased neural noise, perhaps as a consequence of central neural “amplification” to compensate for a reduction in input from the periphery due to a conductive hearing loss, hair cell dysfunction, or loss of auditory nerve fibers/synaptopathy (Gu et al., 2010; Schaette & Kempter, 2006; Schaette & McAlpine, 2011). An analogy is the increase in background hiss caused by turning up the volume on a nondigital radio. However, tinnitus is a complex subjective phenomenon and may also result from other forms of neural reorganization (Eggermont & Roberts, 2004). Tinnitus often worsens at times of physical and psychological stress (Sahley & Nodar, 2001).

Hyperacusis

For some people, moderate-level everyday sounds can be unbearably loud and even painful. In severe cases, individuals may withdraw from normal activities in an attempt to avoid sounds altogether. Hyperacusis is a reduced tolerance of sounds, a condition which often co-occurs with tinnitus (Baguley, 2003). The condition is distinct from phonophobia (fear of sound) and misophonia (dislike of sound), which are considered to have a greater emotional component. Like tinnitus, hyperacusis has hearing loss as a major risk factor and may sometimes be a consequence of abnormal neural amplification, perhaps as an adaptive response to peripheral damage. Functional magnetic resonance imaging (a technique used to measure the spatial distribution of neural activity in the brain) suggests that hyperacusis is associated with increased neural activation in the auditory midbrain, thalamus, and cortex (Gu et al., 2010). Also, like tinnitus, hyperacusis often worsens when the individual is tired or stressed (Sahley & Nodar, 2001).

Psychosocial Consequences of Hearing Loss

Social Isolation

The consequences of hearing loss extend beyond speech-perception difficulties and difficulties with perceiving environmental sounds. Strained communication in social environments can lead people with hearing loss to withdraw from such situations or to feel a sense of separation from others even when in attendance (Brink & Stones, 2007; Vas et al., 2017), due in part to the stigma and self-stigma associated with hearing loss (Gagné et al., 2011). Hearing loss can lead to withdrawal from telephone communication (Vas et al., 2017). Even communication with friends and family can be impaired (Barker et al., 2017) and particular pressure is placed on romantic relationships (Kamil & Lin, 2015; Wallhagen et al., 2004). It is unsurprising, then, that hearing loss is consistently associated with social isolation (Shukla et al., 2020).

Mental Health

Hearing loss may lead not only to isolation but to loneliness (Shukla et al., 2020) and to psychological illness. Hearing loss is associated with unipolar depression (Gopinath et al., 2009; Kramer et al., 2002; Li et al., 2014) as well as distress and somatization (Nachtegaal et al., 2009). The results of a questionnaire-based study conducted during the COVID-19 pandemic suggest that the mental health of people with listening difficulties was particularly negatively affected by the enforced restrictions on social activity (Littlejohn et al., 2022).

In future, clinical practice may benefit from a more holistic approach to hearing health and there is a need for greater emphasis on counseling and emotional support in audiology training programs. There is also recognition that assessment of therapeutic interventions for hearing loss should include outcomes beyond tests of hearing ability, including measures of quality of life (Saunders et al., 2021). It has also been argued that mental-health professionals should be alert to the possibility of hearing loss in older patients presenting with depressive symptoms; this might represent an important opportunity for diagnosis and treatment of hearing loss, given the links between presbycusis and neuropsychiatric disorders (Rutherford et al., 2018).

Listening Effort and Listening-Related Fatigue

Listening in challenging environments (for example, if the speech signal is degraded or if there is background noise) requires considerable cognitive resources (“listening effort”) and this effort may lead to mental fatigue. People with hearing loss have particular difficulties in challenging environments and report greater listening effort and fatigue than do people with normal hearing (Alhanbali et al., 2017, 2018). Unless there is sufficient motivation to listen, the increased effort required may contribute to disengagement and social withdrawal (Herrmann & Johnsrude, 2020).

Cognitive Decline and Dementia

Hearing loss is associated with age-related cognitive decline (Pichora-Fuller, 2020), and clinical hearing loss is the largest potentially modifiable risk factor for dementia, greater than smoking, depression, less education, and hypertension (Livingston et al., 2020). Caution should be applied here because the evidence to date has not established conclusively that hearing loss causes dementia, although hearing-aid use has sometimes been associated with reduced risk of dementia (Amieva et al., 2018; Livingston et al., 2020). It is possible that vascular dementia and cochlear hearing loss have a shared cause in vascular pathology, but a shared cause to account for the association between hearing loss and Alzheimer’s disease is less plausible. Neural degradation in the central auditory pathways might account for the association, but it is unusual for neural damage to cause clinical hearing loss. Furthermore, the hearing loss associated with Alzheimer’s is mostly at high frequencies and hence is most likely the result of damage to the cochlea rather than to central neural pathways (Griffiths et al., 2020). If the association is causal, then several mechanisms might explain it, including social isolation and depression, an impoverished sensory environment, and chronically increased listening effort (Griffiths et al., 2020). Finally, the “overdiagnosis hypothesis” posits that hearing loss may be misdiagnosed as cognitive decline due to the use of verbal instructions or tasks (Uchida et al., 2019). Healthcare professionals should be alert to the possibility of undiagnosed hearing loss when evaluating older adults with apparent cognitive decline (Rutherford et al., 2018).

Conclusion

The physiological basis of hearing loss is much more than hair cell dysfunction. To understand listening difficulties, it is necessary to search beyond the clinical audiogram and look for the subclinical hair-cell, auditory neural, and cognitive processes that are required for us to perceive the acoustic world. The consequences of hearing loss are much more than difficulties in speech understanding. Hearing loss can have profound effects on mental health, and is the most important potentially modifiable risk factor for dementia. Given the large number of people affected and that almost all of us will experience hearing loss as we age, the importance of preventing hearing loss caused by environmental insults such as noise, providing more sensitive and comprehensive diagnostic tools, and developing new management approaches and treatments for people with hearing loss could hardly be greater.

Further Reading

  • Katz, J., Chasin, M., English, K., Hood, L. J., & Tillery, K. L. (Eds.). (2015). Handbook of clinical audiology (7th ed.). Wolters Kluwer.
  • Moore, B. C. J. (2007). Cochlear hearing loss: Physiological, psychological, and technical issues (2nd ed.). Wiley.
  • World Health Organization (WHO). (2021). World report on hearing.
  • Zeng, F.-G., & Djalilian, H. (2010). Hearing impairment. In C. J. Plack (Ed.), Hearing (pp. 325–348). Oxford University Press.