Nikos Ntoumanis, Cecile Thørgersen-Ntoumani, Eleanor Quested, and Nikos Chatzisarantis
Compelling evidence worldwide suggests that the number of physically inactive individuals is high, and it is increasing. Given that lack of physical activity has been linked to a number of physical and mental health problems, identifying sustainable, cost-effective, and scalable initiatives to increase physical activity has become a priority for researchers, health practitioners, and policymakers. One way to identify such initiatives is to use knowledge derived from psychological theories of motivation and behavior change. There is a plethora of such theories and models that describe a variety of cognitive, affective, and behavioral mechanisms that can target behavior at a conscious or an unconscious level. Such theories have been applied, with varying degrees of success, to inform exercise and physical activity interventions in different life settings (e.g., schools, hospitals, and workplaces) using both traditional (e.g., face-to-face counseling and printed material) and digital technology platforms (e.g., smartphone applications and customized websites). This work has offered important insights into how to create optimal motivational conditions, both within individuals and in the social environments in which they operate, to facilitate long-term engagement in exercise and physical activity. However, we need to identify overlap and synergies across different theoretical frameworks in an effort to develop more comprehensive, and at the same time more distinct, theoretical accounts of behavior change with reference to physical activity promotion. It is also important that researchers and practitioners utilize such theories in interdisciplinary research endeavors that take into account the enabling or restrictive role of cultural norms, the built environment, and national policies on physical activity.
Theoretical Perspectives on Age Differences in Brain Activation: HAROLD, PASA, CRUNCH—How Do They STAC Up?
Sara B. Festini, Laura Zahodne, and Patricia A. Reuter-Lorenz
Cognitive neuroimaging studies often report that older adults display more activation of neural networks relative to younger adults, referred to as overactivation. Greater or more widespread activity frequently involves bilateral recruitment of both cerebral hemispheres, especially the frontal cortex. In many reports, overactivation has been associated with superior cognitive performance, suggesting that this activity may reflect compensatory processes that offset age-related decline and maintain behavior. Several theories have been proposed to account for age differences in brain activation, including the Hemispheric Asymmetry Reduction in Older Adults (HAROLD) model, the Posterior-Anterior Shift in Aging (PASA) theory, the Compensation-Related Utilization of Neural Circuits Hypothesis (CRUNCH), and the Scaffolding Theory of Aging and Cognition (STAC and STAC-r). Each model has a different explanatory scope with regard to compensatory processes, and each has been highly influential in the field. HAROLD contrasts the general pattern of bilateral prefrontal activation in older adults with that of more unilateral activation in younger adults. PASA describes both anterior (e.g., frontal) overactivation and posterior (e.g., occipital) underactivation in older adults relative to younger adults. CRUNCH emphasizes that the level or extent of brain activity can change in response to the level of task demand at any age. Finally, STAC and STAC-r take the broadest perspective to incorporate individual differences in brain structure, the capacity to implement functional scaffolding, and life-course neural enrichment and depletion factors to predict cognition and cognitive change across the lifespan. 
Extant empirical work has documented that compensatory overactivation can be observed in regions beyond the prefrontal cortex, that variations in task difficulty influence the degree of brain activation, and that younger adults can show compensatory overactivation under high mental demands. Additional research utilizing experimental designs (e.g., transcranial magnetic stimulation), longitudinal assessments, greater regional precision, both verbal and nonverbal material, and measures of individual difference factors will continue to refine our understanding of age-related activation differences and adjudicate among these various accounts of neurocognitive aging.
Training comprises the systematic processes, initiated by the organization, that facilitate relatively permanent changes in the knowledge, skills, or affect/attitudes of organizational members. Cumulative meta-analytic evidence indicates that training is effective, producing, on average, moderate effect sizes. Training is most effective when it is well structured, requires effort on the part of trainees, and is designed so that trainees are active and encouraged to self-regulate during training. Additional characteristics of effective training are: the purpose, objectives, and intended outcomes of training are clearly communicated to trainees; the training content is meaningful, and training assignments, examples, and exercises are relevant to the job; trainees are provided with instructional aids that help them organize, learn, and recall training content; opportunities for practice in a safe environment are provided; feedback is provided by trainers, observers, peers, or the task itself; and training enables learners to observe and interact with others. In addition, effective training requires a prior needs assessment to ensure the relevance of training content and provides conditions that optimize trainees’ motivation to learn. After training, care should be taken to provide opportunities for trainees to implement trained skills, and organizational and social support should be in place to optimize transfer. Finally, it is important that all training be evaluated to ensure that learning outcomes are met and that training results in increased job performance and/or organizational effectiveness.
Well-being is a core concept for individuals, groups, and societies alike. Greater understanding of trajectories of well-being in later life may contribute to the achievement and maintenance of well-being for as many people as possible. This article reviews two main approaches to well-being, hedonic and eudaimonic, and shows that it is not chronological age per se, but various factors related to age, that underlie trajectories of well-being at older ages. In addition to the role of genes, heritability, and personality traits, well-being is determined to a substantial extent by external circumstances and resources (e.g., health and social relationships) and by malleable individual behaviors and beliefs (e.g., self-regulatory ability and control beliefs). Although many determinants have been identified, it remains difficult to decide which of them are most important. Moreover, the role of some determinants varies for different indicators of well-being, such as positive affect and life satisfaction. Several prominent goal- and need-based models of well-being in later life are discussed, which explicate mechanisms underlying trajectories of well-being at older ages. These are the model of Selection, Optimization, and Compensation; the Motivational Theory of Lifespan Development; Socio-emotional Selectivity Theory; Ryff’s model of Psychological Well-Being; Self-Determination Theory; and Self-Management of Well-being theory. Interventions based on these models are also reviewed, although not all of them address older adults. It is concluded that the literature on well-being in later life is enormous and, together with various conceptual models, offers many important insights. Still, the field would benefit from more theoretical integration and from more attention to the development and testing of theory-based interventions.
This remains a challenge for the science of well-being in later life, and could be an important contribution to the well-being of a still growing proportion of the population.
Craig D. Parks
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Psychology.
A social dilemma is a situation of interdependence between people in which there is conflict between doing what is best for oneself, and doing what is best for the group: Trying to produce the best personal outcome (selfishness) hurts the group effort, and contributing to the group effort (cooperation) leads to a less-than-optimal personal outcome. The best personal outcome is realized by acting for oneself when everyone else acts for the group. Because of this, if each group member does what is best for him/herself, the group will fail, and each person will end up with a poor outcome. Solution of a social dilemma thus requires that at least some people forgo selfish interest in favor of the collective. Research into social dilemmas is primarily oriented around identifying the influences on a person’s willingness to cooperate, and designing interventions that will encourage more frequent cooperation. There are many real examples of social dilemmas: clean air, charities, public broadcasting, and groundwater, to name just a few.
Behavior in a social dilemma is influenced by both individual and situational variables. One of the most heavily studied of the individual variables is trust. It was first investigated in the 1940s and continues to be a focus of study today. While one would think that there is a straightforward relationship between trust and cooperation—higher levels of trust lead to greater cooperation—in fact, the nature of the influence of trust on cooperation continues to be debated. A major factor in this debate is that there is no single, uniformly accepted definition of “trust.” Some researchers define it in terms of conscientiousness (others are expected to do the “right thing”), some in terms of predictability (others are expected to act in a consistent manner across situations), and some in terms of reasoning (others are expected to analyze the situation in the same way as the Actor and reach the same conclusion about appropriate behavior as the Actor has reached). Each of these operational definitions has been empirically connected to cooperation, though not always in the same way, and sometimes in conflicting ways. While it is clear that “trust,” if defined very generally as an expectation about others, often has an impact on cooperation, researchers are still trying to understand which components of that expectation are critical, and whether there are situations in which trust does not affect cooperation.
Nicola D. Ridgers and Samuel K. Lai
Commercially available wearable activity trackers are small, non-invasive electronic devices worn on the body to monitor a range of outcomes, including steps, energy expenditure, and sleep. These devices use sensors to track movement, and the recorded data are provided to the user via a visual display on the device itself and/or by syncing the device with an accompanying app or web-based program. Together, these devices and their accompanying apps incorporate a broad range of established behavior change techniques, including self-monitoring, goal setting, and social support. In recent years, wearable activity trackers have become increasingly popular, and ownership within different populations has grown at an exponential rate. This growth in appeal has led researchers and practitioners to examine the validity and reliability of wearable activity trackers for measuring a range of outcomes and to integrate the devices into physical activity promotion strategies. Acceptable validity has been reported for measuring steps, and moderate validity for measuring energy expenditure. However, little research has examined whether wearable activity trackers are a feasible and effective method for changing physical activity behaviors in the short and longer term, either alone or in combination with additional strategies. Some initial results are promising, though concerns have been raised over longer-term use and impacts on motivation for physical activity. There is a need for research examining the longer-term use of wearable activity trackers in different population groups and establishing whether this technology has any positive effects on physical activity levels.
David J. Madden and Zachary A. Monge
Age-related decline occurs in several aspects of fluid, speed-dependent cognition, particularly those related to attention. Empirical research on visual attention has determined that attention-related effects occur across a range of information processing components, including the sensory registration of features, selection of information from working memory, controlling motor responses, and coordinating multiple perceptual and cognitive tasks. Thus, attention is a multifaceted construct that is relevant at virtually all stages of object identification. A fundamental theme of attentional functioning is the interaction between the bottom-up salience of visual features and top-down allocation of processing based on the observer’s goals. An underlying age-related slowing is prominent throughout visual processing stages, which in turn contributes to age-related decline in some aspects of attention, such as the inhibition of irrelevant information and the coordination of multiple tasks. However, some age-related preservation of attentional functioning is also evident, particularly the top-down allocation of attention. Neuroimaging research has identified networks of frontal and parietal brain regions relevant for top-down and bottom-up attentional processing. Disconnection among these networks contributes to an age-related decline in attention, but preservation and perhaps even increased patterns of functional brain activation and connectivity also contribute to preserved attentional functioning.
Human visual development is a complex dynamic psychological/neurobiological process, being part of the developing systems for cognition, action, and attention. This article reviews current knowledge and methods of study of human visual development in infancy and childhood, in relation to typical early visual brain development, and how it can change in developmental disorders, both acquired (e.g., related to at-risk births) and genetic disorders. The newborn infant starts life with a functioning subcortical visual system which controls newborn orienting to nearby high contrast objects and faces. Although visual cortex may be active from birth, its characteristic stimulus selectivity and control of visual responses is generally seen to emerge around six to twelve weeks after birth. By age six months the infant has adequate acuity and contrast sensitivity in nearby space, and operating cortical mechanisms for discriminating colors, shapes, faces, movement, stereo depth, and distance of objects, as well as the ability to focus and shift attention between objects of interest. This may include both feedforward and feedback pathways between cortical areas and between cortical and subcortical areas. Two cortical streams start to develop and become interlinked, the dorsal stream underpinning motion, spatial perception and actions, and the ventral stream for recognition of objects and faces. The neural systems developing control and planning of actions include those for directed eye movements, reaching and grasping, and the beginnings of locomotion, with these action systems being integrated into the other developing subcortical and cortical visual networks by one year of age. Analysis of global static form (pattern) and global motion processing allows the development of dorsal and ventral streams to be monitored from infancy through childhood. 
The development of attention, visuomotor control, and spatial cognition in the first years shows aspects of function related to the developing dorsal stream and its integration with the ventral stream.
The milestones of typical visual development can be used to characterize visual and visuo-cognitive disorders early in life, such as in infants with perinatal brain injuries and those born very prematurely. The concept of “dorsal stream vulnerability” is outlined. It was initially based on deficits in global motion sensitivity relative to static form sensitivity, but can be extended to the planning and execution of visuomotor actions and to problems of attention, together with visuospatial and numerical cognition. These problems are found in the phenotype of children with genetic developmental disorders (e.g., Williams syndrome, autism, fragile-X, and dyslexia), in acquired developmental disorders related to very preterm birth, and in children with abnormal visual input such as congenital cataract, refractive errors, or amblyopia. However, there are subtle differences in the manifestation of these disorders, which may also vary considerably across individuals. Development in these clinical conditions illustrates the early, but limited, plasticity of visual brain mechanisms and provides a challenge for the future in designing successful interventions and treatments.
MacKenna L. Perry and Leslie B. Hammer
Study of the intersection of work with nonwork components of individuals’ lives has most often focused on roles within nuclear and extended families but is increasingly focused on nonwork domains beyond family, such as roles within friendships, communities, leisure activities, and the self. In line with the focus of most existing literature on the family-specific domain within nonwork lives, the nonwork domain will generally be referred to here as “family.” One popular conceptualization of the linking mechanisms between work and family distinguishes conflict from enrichment. Work-family conflict, or stress, occurs when a work role and a nonwork role are not fully compatible, resulting in some type of physical or psychological strain. Work-family enrichment, in contrast, occurs when participation in one role benefits life in the other role. Concepts similar to work-family enrichment include work-family positive spillover and work-family facilitation; all emphasize the ways in which one role can positively impact another role. Additionally, the popular concept of work-family balance highlights either a state of low conflict and high enrichment or the presence of effectiveness and satisfaction in both roles.
Broadly speaking, the links between work and family are bi-directional, such that the work domain can influence the family domain, the family domain can influence the work domain, and both can occur simultaneously. Work-family conflict and enrichment have been tied to important employee outcomes, including work (e.g., absenteeism), family (e.g., family satisfaction), and domain-unspecific outcomes (e.g., physical and psychological health), as well as to organizational outcomes (e.g., market performance). Working conditions contributing to work-family conflict and enrichment are frequently characteristic of lower wage jobs, such as low levels of control over work, high work demands, low levels of supervisor support, shift work, and temporary work that can lead to unpredictable schedules, high degrees of job insecurity, and increased health and safety hazards. Researchers are presented with unique challenges as the workplace continues to change, with more dual-earner couples, an increasingly aging workforce, and surges of technology that facilitates flexible work arrangements (e.g., telecommuting). Nonetheless, researchers and organizations work to explore relationships between work and family roles, develop policies related to work and family (i.e., national, state or local, and organizational), and build evidence-based interventions to improve organizations’ abilities to meet employees’ needs.
Working memory as a temporary buffer for cognitive processing is an essential part of the cognitive system. Its capacity and select aspects of its functioning are age sensitive, more so for spatial than verbal material. Assumed causes for this decline include a decline in cognitive resources (such as speed of processing), and/or a breakdown in basic control processes (resistance to interference, task coordination, memory updating, binding, and/or top-down control as inferred from neuroimaging data). Meta-analyses suggest that a decline in cognitive resources explains much more of the age-related variance in true working memory tasks than a breakdown in basic control processes, although the latter is highly implicated in tasks of passive storage. The age-related decline in working memory capacity has downstream effects on more complex aspects of cognition (episodic memory, spatial cognition, and reasoning ability). Working memory remains plastic in old age, and training in working memory and cognitive control processes yields near transfer effects, but little evidence for strong far transfer.
Sharon Glazer and Cong Liu
Work stress refers to the process of job stressors, or stimuli in the workplace, leading to strains, or negative responses or reactions. Organizational development refers to a process in which problems or opportunities in the work environment are identified, plans are made to remediate or capitalize on the stimuli, action is taken, and subsequently the results of the plans and actions are evaluated. When organizational development strategies are used to assess work stress in the workplace, the actions employed are various stress management interventions. Two key factors tying work stress to organizational development are the role of the person and the role of the environment. In order to cope with work-related stressors and manage strains, organizations must be able to identify and differentiate between factors in the environment that are potential sources of stressors and how individuals perceive those factors. Primary stress management interventions focus on preventing stressors from arising in the first place, such as by clearly articulating workers’ roles and providing necessary resources for employees to perform their job. Secondary stress management interventions focus on a person’s appraisal of job stressors as a threat or challenge, and the person’s ability to cope with the stressors (presuming sufficient internal resources, such as a sense of meaningfulness in life, or external resources, such as social support from a supervisor). When coping is not successful, strains may develop. Tertiary stress management interventions attempt to remediate strains, by addressing the consequence itself (e.g., diabetes management) and/or the source of the strain (e.g., reducing workload). The person and/or the organization may be the targets of the intervention.
The ultimate goal of stress management interventions is to minimize problems in the work environment, intensify aspects of the work environment that create a sense of a quality work context, enable people to cope with stressors that might arise, and provide tools for employees and organizations to manage strains that might develop despite all best efforts to create a healthy workplace.