Thomas M. Hess, Erica L. O'Brien, and Claire M. Growney
Blood pressure is a frequently used measure in studies of adult development and aging, serving as a biomarker for health, physiological reactivity, and task engagement. Importantly, it has helped elucidate the influence of cardiovascular health on behavioral aspects of the aging process, with research demonstrating the negative effect of chronic high blood pressure on various aspects of cognitive functioning in later life. An important implication of such research is that much of what is considered part and parcel of getting older may actually reflect changes in health as opposed to normative aging processes. Research has also demonstrated that situational spikes in blood pressure in response to emotional stressors (i.e., reactivity) have implications for health in later life. Although research is still somewhat limited, individual differences in personal traits and living circumstances have been found to moderate the strength of reactive responses, providing promise for the identification of factors that might ameliorate the effects of age-related changes in physiology that lead to normative increases in reactivity. Finally, blood pressure has also been successfully used to assess engagement levels. In this context, recent work on aging has focused on the utility of blood pressure as a reliable indicator of both (a) the costs associated with cognitive engagement and (b) the extent to which variation in these costs might predict both between-individual and age-related normative variation in participation in cognitively demanding—but potentially beneficial—activities. This chapter elaborates on these three approaches and summarizes major research findings along with methodological and interpretational issues.
Ananiev’s approach shares the Activity Theory (AT) paradigm, dominant in Soviet psychology. Ananiev builds on the fundamentals of the AT paradigm, treating the psyche as a special product of matter, engendered by the individual’s active interaction with the environment. The unique feature of his approach to AT is that he turned it “toward the inside,” focusing on the relation of the human individual to his own physicality, to his own bodily substrate. Ananiev deliberately sought to preserve a holistic vision of the human being, considering the latter in the context of real life, that is, the bodily substrate in its biological specificity within the concrete sociohistorical life course of the personality. Like no other psychologist, Ananiev did not limit his research to the sphere of narrowly defined mental phenomena. He conducted a special kind of research, labeled “complex,” in which sociological, socio-psychological, mental, physiological, and psychophysiological characteristics of the same subjects, along with their life events, were monitored for many years. He focused on ontogenetic development in adulthood, which he, ahead of his time, considered a period of dynamic change and differentiated development of functions. His attention centered on individual differences in the ontogenetic development of mental and psychophysiological functions, especially those deviations from general regularities that resulted from the impact of the individual’s life course. For Ananiev, individualization, the increase of individual singularity, is the main effect of human development and its measure.
Ananiev developed a number of theoretical models and concepts. The best-known part of Ananiev’s heritage is his theoretical model of human development, often named the “individuality concept.” According to this model, humans do not have any preassigned “structure of personality” or “initial harmony.” The starting point of human development is a combination of potentials—resources and reserves, biological and social. The human creates himself in the process of interaction with the world. Specialization, the individually specific development of functions, appears here not as a distortion of the pre-set harmony of the whole but as the way of self-determining progressive human development. He viewed the most important practical task of psychology as providing psychological support in the process of developing a harmonious individuality, based on the individual’s potentials.
The sensation of vision arises from the detection of photons of light at the eye, but in order to produce the percept of the world, extensive regions of the brain are required to process the visual information. The majority of information entering the brain via the optic nerve from the eye projects via the lateral geniculate nucleus (LGN) of the thalamus to the primary visual cortex, the largest visual area, having been reorganized such that one side of the brain represents one side of the world.
Damage to the primary visual cortex in one hemisphere therefore leads to a loss of conscious vision on the opposite side of the world, known as hemianopia. Despite this cortical blindness, many patients are still able to detect visual stimuli that are presented in the blind region if forced to guess whether a stimulus is present or absent. This is known as “blindsight.” For patients to gain any information (conscious or unconscious) about the visual world, the input from the eye must be processed by the brain. Indeed, there is considerable evidence from functional brain imaging that several visual areas continue to respond to visual stimuli presented within the blind region, even when the patient is unaware of the stimulus. Furthermore, the use of diffusion imaging allows the microstructure of white matter pathways within the visual system to be examined to see whether they are damaged or intact. By comparing patients who have hemianopia with and without blindsight it is possible to determine the pathways that are linked to blindsight function. Through understanding the brain areas and pathways that underlie blindsight in humans and non-human primates, the aim is to use modern neuroscience to guide rehabilitation programs for use after stroke.
The role of experience in brain organization and function can be studied by systematically manipulating developmental experiences. The most common protocols use extremes in experiential manipulation, such as environmental deprivation and/or enrichment. Studies of the effects of deprivation range from laboratory studies in which animals are raised from infancy in the absence of sensory or social experiences to studies of children raised in orphanages with limited caregiver interaction. In both cases there are chronic perceptual, cognitive, and social dysfunctions that are associated with chronic changes in neuronal structure and connectivity. Deprivation can be more subtle too, such as being raised in a low socioeconomic environment, which is often associated with poverty. Such experience is especially detrimental to language development, which in turn limits educational opportunities. Unfortunately, the effects of some forms of socioemotional deprivation are often difficult, if not impossible, to ameliorate.
In contrast, adding sensory or social experiences can enhance behavioral functions. For example, placing animals in environments that are cognitively, motorically, and/or socially more complex than standard laboratory housing is associated with neuronal changes that are correlated with superior functions. Enhanced sensory experiences can be relatively subtle, however. For example, tactile stimulation with a soft brush for 15 minutes, three times daily for just two weeks in infant rats leads to permanent improvement in a wide range of psychological functions, including motoric, mnemonic, and other cognitive functions. Both complex environments and sensory stimulation can also reverse the negative effects of many other experiences. Thus, tactile stimulation accelerates discharge from hospital for premature human infants and stimulates recovery from stroke in both infant and adult rats. In sum, brain and behavioral functions are exquisitely influenced by manipulation of sensory experiences, especially in development.
Dyslexia, or a reading disability, occurs when an individual has great difficulty at the level of word reading and decoding. Comprehension of text, writing, and spelling are also affected. The diagnosis of dyslexia involves the use of reading tests, but the continuum of reading performance means that any cutoff point is arbitrary. The IQ score does not play a role in the diagnosis of dyslexia. Dyslexia is a language-based learning disability. The cognitive difficulties of dyslexics include problems with recognizing and manipulating the basic sounds in a language, language memory, and learning the sounds of letters. Dyslexia is a neurological condition with a genetic basis. The brains of dyslexic individuals show abnormalities, including differences in electrophysiological and structural characteristics. Hope for dyslexia involves early detection and intervention and evidence-based instruction.
Trevor A. Harley
Research in the psychology of language has been dogged by some enduring controversies, many of which continue to divide researchers. Furthermore, language research has been riven by too many dichotomies and too many people taking too extreme a position, and progress is only likely to be made when researchers recognize that language is a complex system where simple dichotomies may not be relevant. The enduring controversies span the breadth of psycholinguistics, from the work of Chomsky and the nature of language, to the extent to which language is innately determined, to the origin of language and how it evolved. Chomsky’s work has also influenced our conceptions of the modularity of the structure of the mind and the nature of psychological processing. Advances in the sophistication of brain imaging techniques have led to debate about exactly what these techniques can tell us about the psychological processing of language. There has also been much debate about whether psychological processing occurs through explicit rules or statistical mapping, a debate driven by connectionist modeling, deep learning, and techniques for the analysis of “big data.” Another debate concerns the role of prediction in language and cognition and the related issue of the relationship between language comprehension and language production. To what extent is language processing embodied, and how does it relate to controversies about “embedded cognition”? Finally, there has been debate about the purpose and use of language.
There is intense contemporary public as well as professional psychological interest in bodily movement, gesture, and the subjective experience of movement. This interest rests on the knowledge that movements and the sensing of movements alike express the life of the whole person, whether in the arts, sports, and the pursuit of well-being, or in physiotherapies and psychotherapies of many kinds. The numerous and varied areas of scientific research that contribute to this field have a long background in philosophy and cultural practices, as well as in relations between different psychological and physiological topics. The significance of the sense of self-movement, kinesthesia, as opposed to the perception of moving objects, has not until recently been a central focus for research. To explain rising contemporary interest it is necessary to elucidate the usage of current terms—kinesthesia, proprioception, and haptic sense. This in turn leads to discussion of the historical background to modern research on kinesthesia and motor imagery, on phenomenology and sensed movement, on practice centered on kinesthetic appreciation, and on agency. All this is part of the field of inquiry into the psychology of performing and of appreciating dance.
Neil E. Rowland
Hunger is a specific and compelling sensation, sometimes arising from internal signals of nutrient depletion but more often modulated by numerous environmental variables including taste or palatability and ease or cost of procurement. Hunger motivates appetitive or foraging behaviors to find food followed by appropriate proximate or consummatory behaviors to eat it. A critical concept underlying food intake is the flux of chemical energy through an organism. This starts with inputs of food with particular energy content, storage of excess energy as adipose tissue or glycogen, and finally energy expenditure as resting metabolic rate (RMR) or as metabolic rate is modified by physical activity. These concepts are relevant within the context of adequate theoretical accounts based in energy homeostasis; historically, these are mainly static models, although it is now clear that such models do not address practical issues such as weight gain through life. Eating is essentially an episodic behavior, often clustered as meals, and this has led to the idea that the meal is a central theoretical concept, but demonstrations that meal patterns are greatly influenced by the environment present a challenge to this tenet. Patterns of eating acquired during infancy and early life may also play a role in establishing adult norms. Direct controls of feeding are those that emphasize food itself as generating internal signals to modify or terminate an ongoing bout of eating, and include a variety of enteroendocrine hormones and brainstem mechanisms. Additionally, many studies point to the essential rewarding or hedonic aspects of food intake, including palatability, and this may involve integrative mechanisms in the forebrain and cerebral cortex.
Imprinting is a form of rapid, supposedly irreversible learning that results from exposure to an object during a specific period (a critical or sensitive period) during early life and produces a preference for the imprinted object. The word “imprinting” is an English translation of the German Prägung (“stamping in”), coined by Konrad Lorenz in 1935 to refer to the process that he studied in geese. Two types of imprinting have traditionally been distinguished: filial imprinting, involving the formation of an immediate social attachment to the mother or a mother-substitute, and sexual imprinting, involving the formation of a sexual preference that is manifested later in life. Both types of imprinting were subject to extensive experimental study beginning around 1950. Originally described in precocial birds (ducks, geese, and domestic chickens), imprinting has also been used to explain the formation of early social attachments in other species, including human infants. Imprinting has served as a useful model for studying the neural processes involved in learning and behavioral development and has provided a framework for thinking about other developmental processes.
Stephanie J. Wilson, Alex Woody, and Janice K. Kiecolt-Glaser
Inflammatory markers provide invaluable tools for studying health and disease across the lifespan. Inflammation is central to the immune system’s response to infection and wounding; it also can increase in response to psychosocial stress. In addition, depression and physical symptoms such as pain and poor sleep can promote inflammation and, because these factors fuel each other, all contribute synergistically to rising inflammation. With increasing age, persistent exposure to pathogens and stress can induce a chronic proinflammatory state, a process known as inflamm-aging.
Inflammation’s relevance spans the life course, from childhood to adulthood to death. Infection-related inflammation and stress in childhood, and even maternal stress during pregnancy, may presage heightened inflammation and poor health in adulthood. In turn, chronically heightened inflammation in adulthood can foreshadow frailty, functional decline, and the onset of inflammatory diseases in older age.
The most commonly measured inflammatory markers include C-reactive protein (CRP) and the proinflammatory cytokines interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNF-α). These biomarkers are typically measured in serum or plasma obtained by blood draw, which captures current circulating levels of inflammation. Dried blood spots offer a newer, sometimes less expensive collection method but can capture only a limited subset of markers. Due to its notable confounds, salivary sampling cannot be recommended.
Inflammatory markers can be added to a wide range of lifespan developmental designs. Incorporating even a single inflammatory assessment into an existing longitudinal study can allow researchers to examine how developmental profiles and inflammatory status are linked, but repeated assessments must be used to draw conclusions about the associations’ temporal order and developmental changes. Although the various inflammatory indices can fluctuate from day to day, ecological momentary assessment and longitudinal burst studies have not yet incorporated daily inflammation measurement; this represents a promising avenue for future research.
In conclusion, mounting evidence suggests that inflammation affects health and disease across the lifespan and can help to capture how stress “gets under the skin.” Incorporating inflammatory biomarkers into developmental studies stands to enhance our understanding of both inflammation and lifespan development.
Conscience P. Bwiza, Jyung Mean Son, and Changhan Lee
Aging is a progressive process with multiple biological processes collectively deteriorating with time, ultimately causing loss of physiological functions necessary for survival and reproduction. It is also thought to have a strong evolutionary basis, largely resulting from the declining force of selection later in life. Here, we discuss the evolutionary aspects of aging and a selection of theories founded on a variety of biological functions that have been shown to be involved in aging in multiple model organisms, ranging from yeast, worms, flies, killifish, and rodents to non-human primates and humans. Together, these distinct theories have revolutionized aging research in the past several decades, advancing it far beyond what humankind had known since the dawn of civilization. However, no single theory can independently explain aging, and none should be interpreted out of the context of the cell and organism in its entirety. That said, the 21st century has been and will be an exciting time in the field of aging, with scientific advances in health span and lifespan being made on multiple fronts of biology and medicine at an unprecedented scale.
G. Campbell Teskey
The kindling phenomenon is a form of sensitization where, with repetition, epileptiform discharges become progressively longer and behavioral seizures eventually appear and then become more severe. The classic or exogenous kindling technique involves the repeated application of a convulsant stimulus. This technique also lowers seizure thresholds, the minimum intensity of a stimulus required to evoke an electrographic seizure, a process known as epileptogenesis. Endogenous kindling typically occurs following a brain-damaging event which lowers seizure thresholds to the point where self-generated epileptiform discharges recur, lengthen, propagate, and drive progressively more severe behavioral seizures. While exogenous kindling results in alterations in neuronal molecular, cellular/synaptic, and network function that give rise to altered behavior, there is a paucity of evidence for loss of neurons. In contrast, brain-damaging events, with neuronal loss, typically give rise to endogenous kindling. Kindling is a pan-species phenomenon, and all mammals that have been examined, including humans, manifest exogenous kindling when seizure-generating (forebrain) structures have been targeted. Since humans display both exogenous and endogenous kindling phenomena, this serves as a sobering warning to clinicians to prevent seizures. Kindling serves as a robust and reliable model for epileptogenesis, focal as well as secondarily generalized seizures, and certain epileptic disorders.
Michael J. Lyons, Chandra A. Reynolds, William S. Kremen, and Carol E. Franz
The rapidly increasing number of people age 65 and older around the world has important implications for public health and social policy, making it imperative to understand the factors that influence the aging process. Twin studies can provide information that addresses critical questions about aging. Twin studies capitalize on a naturally occurring experiment in which there are some pairs of individuals who are born together and share 100% of their segregating genes (monozygotic twins) and some pairs that share approximately 50% (dizygotic twins). Twins can shed light on the relative influence of genes and environmental factors on various characteristics at various times during the life course and whether the same or different genetic influences are operating at different times. Twin studies can investigate whether characteristics that co-occur reflect overlapping genetic or environmental determinants. Discordant twin pairs provide an opportunity for a unique and powerful case-control study. There are numerous methodological issues to consider in twin studies of aging, such as the representativeness of twins and the assumption that the environment does not promote greater similarity within monozygotic pairs than dizygotic pairs. Studies of aging using twins may include many different types of measures, such as cognitive, psychosocial, biomarkers, and neuroimaging. Sophisticated statistical techniques have been developed to analyze data from twin studies. Structural equation modeling has proven to be especially useful. Several issues, such as assessing change and dealing with missing data, are particularly salient in studies of aging and there are a number of approaches that have been implemented in twin studies. 
Twins lend themselves very well to investigating whether genes influence one’s sensitivity to environmental exposures (gene-environment interaction) and whether genes influence the likelihood that an individual will experience certain environmental exposures (gene-environment correlation). Prior to the advent of modern molecular genetics, twin studies were the most important source of information about genetic influences. Dramatic advances in molecular genetic technology hold the promise of providing great insight into genetic influences, but these approaches complement rather than supplant twin studies. Moreover, there is a growing trend toward integrating molecular genetic methods into twin studies.
Nature–nurture is a dichotomous way of thinking about the origins of human (and animal) behavior and development, where “nature” refers to native, inborn, causal factors that function independently of, or prior to, the experiences (“nurture”) of the organism. In psychology during the 19th century, nature–nurture debates were voiced in the language of instinct versus learning. In the first decades of the 20th century, it was widely assumed that humans and animals entered the world with a fixed set of inborn instincts. But in the 1920s and again in the 1950s, the validity of instinct as a scientific construct was challenged on conceptual and empirical grounds. As a result, most psychologists abandoned using the term instinct, but they did not abandon the validity of distinguishing between nature and nurture. In place of instinct, many psychologists made a semantic shift to using terms like innate knowledge, biological maturation, and/or hereditary/genetic effects on development, all of which extend well into the 21st century. Still, for some psychologists, the earlier critiques of the instinct concept remain just as relevant to these more modern usages.
The tension in nature-nurture debates is commonly eased by claiming that explanations of behavior must involve reference to both nature-based and nurture-based causes. However, for some psychologists there is a growing pressure to see the nature–nurture dichotomy as oversimplifying the development of behavior patterns. The division is seen as both arbitrary and counterproductive. Rather than treat nature and nurture as separable causal factors operating on development, they treat nature-nurture as a distinction between product (nature) versus process (nurture). Thus there has been a longstanding tension about how to define, separate, and balance the effects of nature and nurture.
Determining the mechanisms that underlie neurocognitive aging, such as compensation or dedifferentiation, and facilitating the development of effective strategies for cognitive improvement are essential given the steadily rising aging population. One approach to studying the characteristics of healthy aging comprises the assessment of functional connectivity, delineating markers of age-related neurocognitive plasticity. Functional connectivity paradigms characterize complex one-to-many (or many-to-many) structure–function relations, as higher-level cognitive processes are mediated by the interaction among a number of functionally related neural areas rather than localized to discrete brain regions. Task-related or resting-state interregional correlations of brain activity have been used as reliable indices of functional connectivity, delineating age-related alterations in a number of large-scale brain networks, which subserve attention, working memory, episodic retrieval, and task-switching. Together with behavioral and regional activation studies, connectivity studies and modeling approaches have contributed to our understanding of the mechanisms of age-related reorganization of distributed functional networks; specifically, reduced neural specificity (dedifferentiation) and associated impairment in inhibitory control, and compensatory neural recruitment.
Patrick D. Gajewski and Michael Falkenstein
Healthy aging is associated with changes in sensory, motor, cognitive, and emotional functions. Such changes depend on various factors. In particular, physical activity improves not only physical and motor functions but also cognitive and emotional functions. Observational (i.e., associational) and cross-sectional studies generally show a positive effect of regular physical exercise on cognition in older adults. Most longitudinal randomized controlled intervention studies also show positive effects, but the results are inconsistent due to the large heterogeneity of methodological setups. Positive changes accompanying physical activity mainly impact executive functions, memory functions, and processing speed. Several factors influence the impact of physical activity on cognition, mainly the type and format of the activity. Strength training and aerobic training yield comparable but also differential benefits, and both should be included in physical activity programs. Also, a combination of physical activity with cognitive activity appears to enhance its effect on cognition in older age. Hence, such combined training approaches are preferable to single-mode training. Studies of brain physiology changes due to physical activity show general as well as specific effects on certain brain structures and functions, particularly in the frontal cortex and the hippocampus, which are the areas most affected by advanced age. Physical activity also appears to improve cognition in patients with mild cognitive dysfunction and dementia and often ameliorates the disease symptoms. This makes physical training an important intervention for those groups of older people.
Apart from cognition, physical activity leads to improvement of emotional functions. Exercise can lead to improvement of psychological well-being in older adults. Most importantly, exercise appears to reduce symptoms of depression in seniors. In future intervention studies it should be clarified who profits most from physical activity. Further, the conditions that influence the cognitive and emotional benefits older people derive from physical activity should be investigated in more detail. Finally, measures of brain activity that can be easily applied should be included as far as possible.
Idan Shalev and Waylon J. Hastings
Stress is a multistage process during which an organism perceives, interprets, and responds to threatening environmental stimuli. Physiological activity in the nervous, endocrine, and immune systems mediates the biological stress response. Although the stress response is adaptive in the short term, exposure to severe or chronic stressors dysregulates these biological systems, promoting maladaptive physiology and an accelerated aging phenotype, including aging at the cellular level. Two structures implicated in this process of stress and cellular aging are telomeres, whose length progressively decreases with age, and mitochondria, whose respiratory activity becomes increasingly inefficient with advanced age. Stress in its various forms is suggested to influence the maintenance and stability of these structures throughout life. Elucidating the interrelated connection between telomeres and mitochondria, and how different types of stressors influence these structures to drive the aging process, is of great interest. A better understanding of this subject can inform clinical treatments and intervention efforts to reduce (or even reverse) the damaging effects of stress on the aging process.
Vanessa L. Burrows
Stress has not always been accepted as a legitimate medical condition. The biomedical concept of stress grew from the tangled roots of varied psychosomatic theories of health that examined (a) the relationship between the mind and the body, (b) the relationship between an individual and his or her environment, (c) the capacity for human adaptation, and (d) biochemical mechanisms of self-preservation, and how these functions are altered during acute shock or chronic exposure to harmful agents. From disparate 19th-century origins in the fields of neurology, psychiatry, and evolutionary biology, a biological disease model of stress was originally conceived in the mid-1930s by Canadian endocrinologist Hans Selye, who correlated adrenocortical functions with the regulation of chronic disease.
At the same time, the mid-20th-century epidemiological transition signaled the emergence of a pluricausal perspective on degenerative, chronic diseases such as cancer, heart disease, and arthritis, which were produced not by a specific etiological agent but by a complex combination of factors that contributed to a process of maladaptation occurring over time under the conditioning influence of multiple risk factors. The mass awareness of the therapeutic impact of adrenocortical hormones in the treatment of these prevalent diseases offered greater cultural currency to the biological disease model of stress.
By the end of the Second World War, military neuropsychiatric research on combat fatigue promoted cultural acceptance of a dynamic and universal concept of mental illness that normalized the phenomenon of mental stress. This cultural shift encouraged the medicalization of anxiety which stimulated the emergence of a market for anxiolytic drugs in the 1950s and helped to link psychological and physiological health. By the 1960s, a growing psychosomatic paradigm of stress focused on behavioral interventions and encouraged the belief that individuals could control their own health through responsible decision-making. The implication that mental power can affect one’s physical health reinforced the psycho-socio-biological ambiguity that has been an enduring legacy of stress ever since.
This article examines the medicalization of stress—that is, the historical process by which stress became medically defined. It spans from the mid-19th century to the mid-20th century, focusing on these nine distinct phases:
1. 19th-century psychosomatic antecedent disease concepts
2. the emergence of shell-shock as a medical diagnosis during World War I
3. Hans Selye’s theorization of the General Adaptation Syndrome in the 1930s
4. neuropsychiatric research on combat stress during World War II
5. contemporaneous military research on stress hormones during World War II
6. the emergence of a risk factor model of disease in the post–World War II era
7. the development of a professional cadre of stress researchers in the 1940s and 1950s
8. the medicalization of anxiety in the early post–World War II era
9. the popularization of stress in the 1950s, marked by the cultural assimilation of paradigmatic stress behaviors and deterrence strategies, as well as pharmaceutical treatments for stress
Neil E. Rowland
Thirst is a specific and compelling sensation, often arising from internal signals of dehydration but modulated by many environmental variables. There are several historical landmarks in the study of thirst and drinking behavior. The basic physiology of body fluid balance is important, in particular the mechanisms that minimize fluid loss. The transduction of fluid deficits can be discussed in relation to osmotic pressure (osmoreceptors) and volume (baroreceptors). Other relevant issues include the neurobiological mechanisms by which these signals are transformed into intracellular and extracellular dehydration thirsts, respectively, including the prominent role of structures along the lamina terminalis. Other considerations are the integration of signals from natural dehydration conditions, including water deprivation, thermoregulatory fluid loss, and thirst associated with eating dry food. These mechanisms should also be considered within a broader theoretical framework of the organization of motivated behavior based on incentive salience.