David Bunce and Sarah Bauermeister
Intraindividual variability in the present context refers to the moment-to-moment variation in attentional or executive engagement over a given time period. Typically, it is measured using the response latencies collected across the trials of a behavioral neurocognitive task. In aging research, the measure has received considerable recent interest because it may provide important insights into age-related cognitive decline and neuropathology, and because it has potential as a neurocognitive assessment tool in healthcare settings. In the present chapter, we begin by reviewing the key empirical findings relating to age and intraindividual variability. Here, research shows that intraindividual variability increases with age and predicts a range of age-related outcomes including gait impairment, falls and errors more broadly, mild cognitive impairment, dementia, and mortality. Brain imaging research suggests that greater variability is associated with age-related or neuropathological changes to a frontal–cingulate–parietal network and that white matter compromise and dopamine depletion may be key underlying mechanisms. We then consider the cognitive and neurobiological theoretical underpinnings of the construct before describing the various methods and metrics that have been used to compute measures of variability: reaction time cut-offs, raw and residualized intraindividual standard deviations, the coefficient of variation, the ex-Gaussian curve, and fast Fourier transformation. A further section considers the range of neurocognitive tasks that have been used to assess intraindividual variability. Broadly, these tasks can be classified on a continuum of cognitive demands as psychomotor, executive control, or higher-order cognitive tasks (e.g., episodic memory). Finally, we provide some pointers concerning the pressing issues that future research needs to address in the area.
We conclude that the existing body of theoretical and empirical work underlines the potential of intraindividual reaction time variability measures as additions to the neuropsychological test batteries that are used in the early detection of a range of age-related neurocognitive disorders in healthcare settings.
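The simpler variability metrics mentioned above (the raw and residualized intraindividual standard deviations and the coefficient of variation) can be computed in a few lines. The sketch below uses simulated reaction times and a simple linear detrend for the residualized measure; both the simulated data and the detrending choice are illustrative assumptions, not the chapter's specific procedures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated reaction times (ms) for one participant across 100 trials,
# with a mild practice effect (latencies drift downward over trials).
trials = np.arange(100)
rt = 550 - 0.4 * trials + rng.normal(0, 60, size=100)

# Raw intraindividual standard deviation (ISD) across trials.
isd_raw = rt.std(ddof=1)

# Coefficient of variation: ISD scaled by the person's mean RT,
# which adjusts for overall speed differences between people.
cv = isd_raw / rt.mean()

# Residualized ISD: remove systematic trends across trials
# (here a linear practice effect) before taking the SD of what remains.
slope, intercept = np.polyfit(trials, rt, 1)
residuals = rt - (intercept + slope * trials)
isd_resid = residuals.std(ddof=1)

print(round(isd_raw, 1), round(cv, 3), round(isd_resid, 1))
```

Because the detrend removes systematic variance, the residualized ISD is never larger than the raw ISD; the ex-Gaussian and fast Fourier approaches named above require distributional or spectral fitting beyond this sketch.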
Eric S. Cerino and Karen Hooker
Intraindividual variability (IIV) refers to short-term fluctuations that may be more rapid, and are often conceptualized as more reversible, than developmental change that unfolds over a longer period of time, such as years. As a feature of longitudinal data collected on micro timescales (i.e., seconds, minutes, days, or weeks), IIV can describe people, contexts, or general processes characterizing human development. In contrast to approaches that pool information across individuals and assess interindividual variability in a population (i.e., between-person variability), IIV is the focus of person-centered studies addressing how and when individuals change over time (i.e., within-person variability). Developmental psychologists interested in change and how and when it occurs have devised research methods designed to examine intraindividual change (IIC) and interindividual differences in IIC. Dispersion, variability, inconsistency, time-structured IIV, and net IIV are distinct operationalizations of IIV that, depending on the number of measures, occasions, and time of measurement, reflect unique information about IIV in lifespan developmental domains of interest. Microlongitudinal and measurement-burst designs are two methodological approaches with intensive repeated measurement that provide a means by which various operationalizations of IIV can be accurately observed over an appropriate temporal frame to garner clearer understanding of the dynamic phenomenon under investigation. When methodological approaches are theoretically informed and the temporal frame and number of assessments align with the dynamic lifespan developmental phenomenon of interest, researchers gain greater precision in their observations of within-person variability and the extent to which these meaningful short-term fluctuations influence important domains of health and well-being.
With technological advancements fueling enhanced methodologies and analytic approaches, IIV research will continue to be at the vanguard of pioneering designs for elucidating developmental change at the individual level and scaling it up to generalize to populations of interest.
Anja Van den Broeck and Sharon K. Parker
Job design or work design refers to the content, structure, and organization of tasks and activities. It is mostly studied in terms of job characteristics, such as autonomy, workload, role problems, and feedback. Throughout history, job design has moved away from a sole focus on efficiency and productivity to more motivational job designs, including the social approach toward work, Herzberg’s two-factor model, Hackman and Oldham’s job characteristics model, the job demand control model of Karasek, Warr’s vitamin model, and the job demands resources model of Bakker and Demerouti. The models make it clear that a variety of job characteristics make up the quality of job design that benefits employees and employers alike. Job design is crucial for a whole range of outcomes, including (a) employee health and well-being, (b) attitudes like job satisfaction and commitment, (c) employee cognitions and learning, and (d) behaviors like productivity, absenteeism, proactivity, and innovation. Employee personal characteristics play an important role in job design. They influence how employees themselves perceive and seek out particular job characteristics, help in understanding how job design exerts its influence, and have the potential to change the impact of job design.
Vicente Martínez-Tur and Carolina Moliner
Traditionally, justice in teams refers to a specific climate—called justice climate—describing shared perceptions about how the team as a whole is treated. Justice at the individual level has been a successful model from which to build the concept of justice in teams. Accordingly, there is a parallelism between the individual and team levels in the investigation of justice, where scholars’ concerns and responses have been very similar, despite studying different levels of construct. However, the specific particularities of teams are increasingly considered in research. There are three concepts (faultlines, subgrouping, and intergroup justice) that contribute to knowledge by focusing on particularities of teams that are not present at the individual level. The shift toward team-based structures provides an opportunity to observe the existence of dividing lines that may split a team into subgroups (faultlines) and the difficulty, in many cases, of conceiving of the team members as part of a single group. This perspective about teams also stimulates the study of the subgroup as a source of justice and the focus on intergroup justice within the team. In sum, the organizational context facilitates shared experiences and perceptions of justice beyond individual differences but also can result in potential conflicts and discrepancies among subgroups within the team in their interpretation of fairness.
Erica H. Wojcik, Irene de la Cruz-Pavía, and Janet F. Werker
Language is a structured form of communication that is unique to humans. Within the first few years of life, typically developing children can understand and produce full sentences in their native language or languages. For centuries, philosophers, psychologists, and linguists have debated how we acquire language with such ease and speed. Central to this debate has been whether the learning process is driven by innate capacities or information in the environment. In the field of psychology, researchers have moved beyond this dichotomy to examine how perceptual and cognitive biases may guide input-driven learning and how these biases may change with experience. There is evidence that this integration permeates the learning and development of all aspects of language—from sounds (phonology), to the meanings of words (lexical-semantics), to the forms of words and the structure of sentences (morphosyntax). For example, in the area of phonology, newborns’ bias to attend to speech over other signals facilitates early learning of the prosodic and phonemic properties of their native language(s). In the area of lexical-semantics, infants’ bias to attend to novelty aids in mapping new words to their referents. In morphosyntax, infants’ sensitivity to vowels, repetition, and phrase edges guides statistical learning. In each of these areas, too, new biases come into play throughout development, as infants gain more knowledge about their native language(s).
Lori E. James and Sara Anne Goring
The questions of whether and why language processes change in healthy aging require complicated answers. Although comprehension appears to be more stable across adulthood than does production, there is evidence for age-related changes and also for constancy within both input and output components of language. Further, these changes can be considered at various levels of the language hierarchy, such as sensory input, words, sentences, and discourse. As concluded in several other comprehensive reviews, older adults’ language production ability declines much more noticeably than does their comprehension, presumably because comprehension is able to benefit from contextual processing in a way that production cannot. Specifically, lexical and orthographic retrieval become more difficult during normal aging, and these changes appear to represent the most noticeable age-related declines in language production. Some theories of age-related decline focus on global deterioration of cognitive function, whereas other theories predict changes in specific processes related to language function. Both types of theories have received empirical support as applied to language performance, although additional theoretical development is still needed to capture the patterns of effects. Further, in order to truly understand how cognitive aging impacts the ability to understand and produce language, it is necessary to examine how age-related shifts in goals, expertise, and compensatory strategies influence language processes. There are important implications of research on language and cognitive aging, in that language can play a role in physical health and psychological well-being. In summary, our review of the existing literature on language and cognitive aging supports previous claims that language ability is asymmetrically impacted by age, with smaller overall effects of aging on comprehension than production processes.
W. James Weese and P. Chelladurai
The study of leadership has a long and distinguished history. Over the past 100 years, researchers have pursued distinct lines of inquiry summarized in the trait theories, the behavioral theories, the contingency theories, and the transactional/transformational theories of leadership. More recent cognitive approaches have dominated the leadership literature base with emphasis on the areas of emotional intelligence and servant leadership. Even as new leadership models emerge, it is important to note that portions of the older theories continue to inform our understandings. The voluminous research base confirms three things about leadership. First, leadership is a social process, involving people and engaging their emotions, motivations, and moods. Second, leadership is about influence. True leaders influence the thoughts and behaviors of people and groups without the manipulation of rewards or punishments. Some writers suggest that leadership is synonymous with influence. Finally, leaders focus, inspire, and motivate people and groups toward the accomplishment of a predetermined goal or objective. They bring clarity to a desired end and they inspire colleagues to channel their talents and energies toward its attainment. The theoretical developments of leadership, and the latest developments in particular (i.e., emotional intelligence and servant leadership), hold great promise for application in the sports domain.
Markus Wettstein, Hans-Werner Wahl, and Michael Schwenk
When referring to life space, researchers usually mean the area in which individuals move in their everyday lives. Life space can be measured based on different approaches, by means of self-reports (i.e., questionnaires or diaries) or by more recent approaches of technology-based objective assessment (e.g., via Global Positioning System [GPS] devices or smartphones). Life space is an important indicator of older adults’ out-of-home mobility and is meaningfully associated with autonomy, well-being, and quality of life. Substantial relationships between life space and socio-demographic indicators, health, and cognitive abilities have been reported in previous research. Future research on life space in old age will benefit from a more comprehensive and stronger interdisciplinary perspective, from taking into account different time scales (i.e., short- and long-term variability), and from considering life space as a multidimensional measure that can be best assessed based on multi-method approaches with multiple indicators.
Loneliness or perceived social isolation is a subjective experience relating to dissatisfaction with one’s social relationships. Most research has focused on the experience of loneliness in old age, but levels of loneliness are also known to be high among teenagers and young adults. While poor health may be associated with increased feelings of loneliness, there is now considerable evidence on the role of loneliness as a risk factor for poor mental and physical health. Studies show that loneliness is associated with an increased risk of developing dementia and chronic diseases, and also with a higher rate of mortality. Risky health behaviors, a poor cardiovascular profile and compromised immune functioning have all been proposed as potential pathways through which loneliness may affect health. However, much still remains to be understood about these mechanisms.
Susan Krauss Whitbourne
Research methods in lifespan development include single-factor designs that either follow a single cohort of individuals over time or compare age groups at a single time point. The two basic types of studies involving the manipulation of the single factors of age, cohort, and time of measurement are longitudinal and cross-sectional. Each of these has advantages and disadvantages, but both are characterized by limitations because they cannot definitively separate the joint influences of age, cohort, and time of measurement. A third group of designs, the sequential designs, involves manipulation of two or more levels of each factor to permit inferences to be drawn that separate personal from social aging.
The theoretical problems involved in both the single-factor and sequential designs combine with practical issues to present lifespan developmental researchers with a number of choices in approaching the variables of interest. The theoretical problems include the inevitable linking of personal with social aging, particularly evident in single-factor designs, and the fact that selective attrition leads to the differential availability of increasingly select older samples. Practical problems include the need to assign participants to appropriate age intervals and such clerical issues as the need to track participants in follow-up investigations. Researchers must also be aware of methodological issues related to task equivalence across individuals of different ages and the need to covary for potential confounds that could lead to differences across groups of participants due to such factors as education and health status.
The increasing recognition of the need to address these issues is leading to a body of literature that reflects the growing sophistication of the field along with the more widespread availability of sophisticated analytic methods. As these improvements continue to raise the level of scholarship in the field, there will be a greater understanding of both ontogenetic change as well as the influence of context on development from childhood through later life.
Philip Parker and Robert Brockman
Longitudinal structural equation modeling (LSEM) is used to answer lifespan relevant questions such as (a) what is the effect of one variable on change in another, (b) what is the average trajectory or growth rate of some psychological variable, and (c) what variability is there in average trajectories and what predicts this variability. The first of these questions is often answered by an LSEM called an autoregressive cross-lagged (ACL) model. The other two questions are most typically answered by an LSEM called a latent growth curve (LGC). These models can be applied to a few time waves (measured over several years) or to many time waves (such as present in diary studies) and can be altered, expanded, or even integrated. However, decisions on what model to use must be driven by the research question. The right tool for the job is not always the most complex. And, more importantly, the right tool must be matched to the best possible research design. Sometimes in lifespan research the right tool is LSEM. However, researchers should prioritize research design as well as careful specification of the processes and mechanisms they are interested in rather than simply choosing the most complicated LSEM they can find.
Gawon Cho, Giancarlo Pasquini, and Stacey B. Scott
The study of human development across the lifespan is inherently about patterns across time. Although many developmental questions have been tested with cross-sectional comparisons of younger and older persons, understanding of development as it occurs requires a longitudinal design, repeatedly observing the same individual across time. Development, however, unfolds across multiple time scales (i.e., moments, days, years) and encompasses both enduring changes and transient fluctuations within an individual. Measurement burst designs can detect such variations across different timescales, and disentangle patterns of variations associated with distinct dimensions of time periods. Measurement burst designs are a special type of longitudinal design in which multiple “bursts” of intensive (e.g., hourly, daily) measurements are embedded in a larger longitudinal (e.g., monthly, yearly) study. The hybrid nature of these designs allows researchers to address questions not only of cross-sectional comparisons of individual differences (e.g., do older adults typically report lower levels of negative mood than younger adults?) and longitudinal examinations of intraindividual change (e.g., as individuals get older, do they report lower levels of negative mood?) but also of intraindividual variability (e.g., is negative mood worse on days when individuals have experienced an argument compared to days when an argument did not occur?). Researchers can leverage measurement burst designs to examine how patterns of intraindividual variability unfolding over short timescales may exhibit intraindividual change across long timescales in order to understand lifespan development. The use of measurement burst designs provides an opportunity to collect more valid and reliable measurements of development across multiple time scales throughout adulthood.
Matthew S. Fritz and Houston F. Lester
Mediator variables are variables that lie between the cause and effect in a causal chain. In other words, mediator variables are the mechanisms through which change in one variable causes change in a subsequent variable. The single-mediator model is deceptively simple because it has only three variables: an antecedent, a mediator, and a consequent. Determining that a variable functions as a mediator is a difficult process, however, because causation can be inferred only when many strict assumptions are met, including, but not limited to, perfectly reliable measures, correct temporal design, and no omitted confounders. Since many of these assumptions are difficult to assess and rarely met in practice, the significance of a statistical test of mediation alone usually provides only weak evidence of mediation.
New methodological approaches are constantly being developed to circumvent these limitations. Specifically, new methods are being created for the following purposes: (1) to assess the impact of violating assumptions (e.g., sensitivity analyses) and (2) to make fewer assumptions and provide more flexible analysis techniques (e.g., Bayesian analysis or bootstrapping) that may be more robust to assumption violations. Despite these advances, the importance of the design of a study cannot be overstated. A statistical analysis, no matter how sophisticated, cannot redeem a study that measured the wrong variables or used an incorrect temporal design.
Nicole D. Anderson
Healthy aging is accompanied by decrements in episodic memory and working memory. Significant efforts have therefore been made to augment episodic and working memory in healthy older adults. Two principal approaches toward memory rehabilitation in older adults are restorative approaches and compensatory approaches. Restorative approaches aim to repair the affected memory processes by repeated, adaptive practice (i.e., the trained task becomes more difficult as participants improve), and have focused on recollection training, associative memory training, object-location memory training, and working memory training. The majority of these restorative approaches have proved efficacious, that is, participants improve on the trained task, and there is considerable evidence for maintenance of training effects weeks or months after the intervention is discontinued. Transfer of restorative training approaches has been more elusive and appears limited to other tasks relying on the same domains or processes. Compensatory approaches to memory strive to bypass the impairment by teaching people mnemonic and lifestyle strategies to bolster memory performance. Specific mnemonic strategy training approaches as well as multimodal compensatory approaches that combine strategy training with counseling about other factors that affect memory (e.g., memory self-efficacy, relaxation, exercise, and cognitive and social engagement) have demonstrated that older adults can learn new mnemonics and implement them to the benefit of memory performance, and can adjust their views and expectations about their memory to better cope with the changes that occur during healthy aging. Future work should focus on identifying the personal characteristics that predict who will benefit from training and on developing objective measures of the impact of memory rehabilitation on older adults’ everyday functioning.
Christopher Hertzog and Taylor Curley
Metamemory is defined as cognitions about memory and related processes. Related terms in the literature include metacognition, self-evaluation, memory self-efficacy, executive function, self-regulation, cognitive control, and strategic behavior. Metamemory is a multidimensional construct that includes knowledge about how memory works, beliefs about memory (including beliefs about one’s own memory such as memory self-efficacy), monitoring of memory and related processes and products, and metacognitive control, in which adaptive changes in processing approaches and strategies may be contemplated if monitoring of memory processes (encoding, retention, retrieval) indicates that alternative strategies may be required. Older adults generally believe that their memory has declined and that, on average, they have less control over memory and lower memory self-efficacy than young and middle-aged adults. Many but not all aspects of online memory monitoring are well preserved in old age, such as the ability to discriminate between information that has been learned versus not learned. A major exception concerns confidence judgments concerning whether recognition memory decisions are correct; older adults are more prone to high-confidence memory errors, believing they are recognizing something they have not encountered previously. The evidence regarding metacognitive control is more mixed, with some hints that older adults do not use monitoring to adjust control behaviors (e.g., devoting more time and effort to studying items they believe have not yet been well-learned). However, any age deficits in self-regulation based on memory monitoring or adaptive strategy use can probably be addressed through instructions, practice, or training. In general, older adults seem capable of exerting metacognitive control in memory studies, although they may not necessarily do so without explicit support or prompting.
Michael J. Lyons, Chandra A. Reynolds, William S. Kremen, and Carol E. Franz
The rapidly increasing number of people age 65 and older around the world has important implications for public health and social policy, making it imperative to understand the factors that influence the aging process. Twin studies can provide information that addresses critical questions about aging. Twin studies capitalize on a naturally occurring experiment in which there are some pairs of individuals who are born together and share 100% of their segregating genes (monozygotic twins) and some pairs that share approximately 50% (dizygotic twins). Twins can shed light on the relative influence of genes and environmental factors on various characteristics at various times during the life course and whether the same or different genetic influences are operating at different times. Twin studies can investigate whether characteristics that co-occur reflect overlapping genetic or environmental determinants. Discordant twin pairs provide an opportunity for a unique and powerful case-control study. There are numerous methodological issues to consider in twin studies of aging, such as the representativeness of twins and the assumption that the environment does not promote greater similarity within monozygotic pairs than dizygotic pairs. Studies of aging using twins may include many different types of measures, such as cognitive, psychosocial, biomarkers, and neuroimaging. Sophisticated statistical techniques have been developed to analyze data from twin studies. Structural equation modeling has proven to be especially useful. Several issues, such as assessing change and dealing with missing data, are particularly salient in studies of aging and there are a number of approaches that have been implemented in twin studies. 
Twins lend themselves very well to investigating whether genes influence one’s sensitivity to environmental exposures (gene-environment interaction) and whether genes influence the likelihood that an individual will experience certain environmental exposures (gene-environment correlation). Prior to the advent of modern molecular genetics, twin studies were the most important source of information about genetic influences. Dramatic advances in molecular genetic technology hold the promise of providing great insight into genetic influences, but these approaches complement rather than supplant twin studies. Moreover, there is a growing trend toward integrating molecular genetic methods into twin studies.
Ildiko Tombor and Susan Michie
People’s behavior influences health, for example, in the prevention, early detection, and treatment of disease, the management of illness, and the optimization of healthcare professionals’ behaviors. Behaviors are part of a system of behaviors within and between people in that any one behavior is influenced by others. Methods for changing behavior may be aimed at individuals, organizations, communities, and/or populations and at changing different influences on behavior, e.g., motivation, capability, and the environment. A framework that encapsulates these influences is the Behavior Change Wheel, which links an understanding of behavior in its context with methods to change behavior. Within this framework, methods are conceptualized at three levels: policies that represent high-level societal and organizational decisions, interventions that are more direct methods to change behavior, and behavior change techniques that are the smallest components that on their own have the potential to change behavior. In order to provide intervention designers with a systematic method to select the policies, interventions, and/or techniques relevant for their context, a set of criteria can be used to help select intervention methods that are likely to be implemented and effective. One such set is the “APEASE” criteria: affordability, practicability, effectiveness, acceptability, safety, and equity.
Joseph E. Gaugler, Colleen M. Peterson, Lauren L. Mitchell, Jessica Finlay, and Eric Jutkowitz
Mixed methods research consists of collecting and analyzing qualitative and quantitative data within a single study. The “methods” of mixed methods research vary, but the ultimate goal is to provide greater understanding and explanation via the integration of qualitative and quantitative data. Mixed methods studies have the potential to advance our understanding of complex phenomena over time in adult development and aging (e.g., depression following the death of a spouse), but the utility of this approach depends on its application. The authors systematically searched the literature (CINAHL, Embase, Ovid/Medline, PubMed, PsycINFO, and ProQuest) to identify longitudinal mixed methods studies focused on aging. They identified 6,351 articles published between 1994 and 2017, of which 174 met the inclusion criteria. The majority of mixed methods studies reported on the evaluation of interventions or educational programs. Non-interventional studies tended to report on experiences related to the progression of various health conditions, the needs and experiences of caregivers, and the lived experiences of older adults. About half (n = 81) of the mixed methods studies followed a sequential explanatory design where a qualitative component followed quantitative evaluation, and most of these studies achieved “integration” by comparing qualitative and quantitative data in Results sections. There was considerable heterogeneity across studies in terms of overall design (randomized trials, program evaluations, cohort studies, and case studies). As a whole, the literature suffered from key limitations, including a lack of reporting on sample selection methodology and mixed methods design characteristics.
To maximize the value of mixed methods in adult development and aging research, investigators should conform to recommended guidelines (e.g., depict participant study flow and use recommended notation) and consider more sophisticated mixed methods applications to advance the state of the art.
Alexandre J.S. Morin and David Litalien
As part of the Generalized Structural Equation Modeling framework, mixture models are person-centered analyses seeking to identify distinct subpopulations, or profiles, of participants differing quantitatively and qualitatively from one another on a configuration of indicators and/or relations among these indicators. Mixture models are typological (resulting in a classification system), probabilistic (each participant having a probability of membership in all profiles based on prototypical similarity), and exploratory (the optimal model is typically selected based on a comparison of alternative specifications) in nature, and can take different forms. Latent profile analyses seek to identify subpopulations of participants differing from one another on a configuration of indicators and can be extended to factor mixture analyses allowing for the incorporation of latent factors into the model. In contrast, mixture regression analyses seek to identify subpopulations of participants differing from one another in terms of relations among profile indicators. These analyses can be extended to multiple-group and/or longitudinal analyses, allowing researchers to conduct tests of profile similarity across different samples of participants or time points, and latent transition analyses can be used to assess probabilities of profile transitions over time among a sample of participants (i.e., within-person stability and change in profile membership). Finally, growth mixture analyses are built from latent curve models and seek to identify subpopulations of participants following quantitatively and qualitatively distinct trajectories over time. All of these models can accommodate covariates, used either as predictors, correlates, or outcomes, and can even be extended to tests of mediation and moderation.
Barbi Law, Phillip Post, and Penny McCullagh
Modeling and imagery are distinct but related psychological skills. However, despite sharing similar cognitive processes, they have traditionally been investigated separately. While modeling has shown similar psychological and physical performance benefits as imagery, it remains an understudied technique within applied sport psychology. Social cognitive and direct perception approaches remain often-used explanations for the effectiveness of modeling on skill acquisition; however, emergent neuropsychological explanations provide evidence to support these earlier theories and a link to the imagery literature.
With advances in technology and the development of applied frameworks, there is renewed interest in exploring modeling effects and how they parallel imagery use in applied settings. Specifically, modeling research has expanded beyond controlled laboratory settings to explore the effect of various theoretical models on motor performance and related cognitions within practice and competitive settings. The emergence of affordable video editing technology makes it easy for coaches and athletes to incorporate modeling into practice. The accessibility of video technology has sparked applied research on how various forms of modeling influence motor performance and cognitions, such as confidence and motivation. These applied investigations demonstrate the complementary nature of modeling and imagery in enhancing sport performance and skill acquisition, while highlighting the challenges in separating modeling and imagery effects. Both literatures offer possibilities for new methodological approaches and directions for studying these psychological skills in tandem as well as independently. Thus, there is much that imagery and modeling researchers can learn from each other in sport and other performance settings.