Tiffany Bisbey and Eduardo Salas
Teams are complex, dynamic systems made up of interdependent members working toward a shared goal; but teamwork is more than working together as a group. Teamwork is a multifaceted phenomenon that allows a group of individuals to function effectively as a unit by using a set of interrelated knowledge, skills, and attitudes. Effective teamwork is marked by cooperation, communication, coordination, conflict management, coaching, and shared cognition among team members. The most effective teamwork leads to team performance gains that are greater than the sum of each individual member’s effort. These performance outcomes re-inform the teamwork process, thus creating a recursive feedback loop that drives team development and guides future performance. Along with performance outcomes, individual- and team-level changes incite learning and allow teams to adapt to the dynamic systems in which they exist. With each development cycle over time, teams learn how to maneuver their environment and allocate their resources to reach performance goals with more efficiency. There are many external factors that can influence this process, including organizational characteristics, situational demands, and team training interventions; as well as internal factors that emerge and evolve over the life of the team, such as shared mental models and psychological safety. Although teamwork is a complex phenomenon with many moving parts, a strong body of research guides practitioners in leveraging its influence on organizational effectiveness.
Sara J. Czaja and Chin Chin Lee
The expanding power of computers and the growth of information technologies such as the Internet have made it possible for large numbers of people to have direct access to an increasingly wide array of information sources and services. Use of technology has become an integral component of work, education, communication, entertainment, and health care. Moreover, home appliances, security systems, and other communication devices are becoming more integrated with network resources providing faster and more powerful interactive services. Older adults represent an increasingly large proportion of the population and will need to be active users of technology to function independently and receive the potential benefits of technology. Thus, it is critically important to understand how older adults respond to and adopt new information technologies. Technology offers many potential benefits for older people such as enhanced access to information and resources and health-care services, as well as opportunities for cognitive and social engagement. Unfortunately, because of a number of factors, many older people confront challenges and barriers when attempting to access and use technology systems.
Life is filled with goals or intentions that people hope to realize. Some of these are rather mundane (e.g., remembering to purchase a key ingredient for a recipe when stopping at the market), while others are more significant (e.g., remembering to pick up one’s child from school at the end of the day). Prospective memory represents the ability to form and then realize intentions at an appropriate time. A fundamental aspect of prospective memory is that one is engaged in one or more tasks (i.e., ongoing activities) between the formation of an intention and the opportunity to realize the goal. For instance, in the shopping example, one might form the intention at home and then travel to the market and collect several other items before walking past the desired ingredient. Considerable research has demonstrated that the efficiency of prospective memory declines with age, although age-related differences are not universal.
The neurocognitive processes underpinning age-related differences in the formation and realization of delayed intentions have been investigated in studies using event-related brain potentials. This research reveals that age-related differences in prospective memory arise from the disruption of neural systems supporting the successful encoding of intentions, the detection of prospective memory cues, and possibly processes supporting the retrieval of intentions from memory when a cue is encountered or efficiently shifting from the ongoing activity to the prospective element of the task. Therefore, strategies designed to ameliorate age-related declines in prospective memory should target a variety of processes engaged during the encoding, retrieval, and enactment of delayed intentions.
With roots that range from medicine to politics, to jurisdiction and historiography in ancient Greece, the concept of “crisis” played an eminent role in the founding years of Western academic psychology and continued to be relevant during its development in the 19th and 20th centuries. “Crisis” conveys the idea of an imminent danger of disintegration and breakdown, as well as a pivotal turning point with the chance of a new beginning. To this day, both levels of meaning are present in psychological discourses. Early diagnoses of a state of “crisis” of psychology date back to the end of the 19th century and focused on the question of the correct metaphysical foundation of psychology. During the interwar period, warnings of a disintegration of the discipline reached their first climax in German academia, when many eminent psychologists expressed their worries about the increasing fragmentation of the discipline. The rise of totalitarian systems in the 1930s brought an end to these debates, silencing the theoretical polyphony with physical violence. The 1960s saw a resurgence of “crisis literature” and the emergence of a more positive connotation of the concept in U.S.-American experimental psychology, when it was connected with Thomas Kuhn’s ideas of scientific “revolutions” and “paradigm shifts.” Since that time, psychological crisis literature has revolved around the question of unity, disunity, and the scientific status of the discipline. Although psychological crisis literature showed little success in solving the fundamental problems it addressed, it still provides one of the most theoretically rich and thought-provoking bodies of knowledge for theoretical and historical analyses of the discipline.
Influential theorists of pre-adult phases of the development of the individual person (infancy, childhood, and adolescence) have articulated myriad versions of stage theories, varying in specificity, rigidity, and many other parameters. Some stage theories are concerned with capacities defined somewhat narrowly and operationally defined by behavior. Elsewhere on the spectrum, some of the most influential stage theories have purported to indicate capacities or modes of considerable generality, by positing deep, structural changes either in intellectual capacity or in terms of some other aspect of human functioning treated as fundamental to the affective and the rational life. Jean Piaget’s stage theory of intellectual (cognitive) development is the paradigm of a theory of structural changes in the capacity for logical thought. Bluntly put, Piaget’s theory takes for granted the key characteristics of the thinking of the emotionally balanced, rational adult and attempts to define the necessary steps by which that state is to be attained from the time one starts life as a baby. Sigmund Freud’s theory of psychosexual stages, especially as articulated by Karl Abraham, is the paradigm of a stage theory in which significant aspects of adult functioning are redefined, rather than taken for granted. The steps intervening from babyhood, as thereafter articulated, thereby take on an innovative character. In both cases the substantial internal consistency of the stage model, notwithstanding numerous empirical shortcomings, has generated a kind of validity. But even such qualified praise cannot now be offered to Stanley Hall’s stage theory of individual development, which seems with hindsight little more than a derivative popularization of the recapitulationary evolutionism of the latter part of the 19th century. 
From an historical perspective, Hall’s, Freud’s, and Piaget’s stage theories of development are all artefacts, products of the sociocultural and scientific environments of their times.
Michael J. Zickar
Personnel and vocational testing has made a huge impact in public and private organizations by helping organizations choose the best employees for a particular job (personnel testing) and helping individuals choose occupations for which they are best suited (vocational testing). The history of personnel and vocational testing is one in which scientific advances were influenced by historical and technological developments.
The first systematic efforts at personnel and vocational testing began during World War I when the US military needed techniques to sort through a large number of applicants in a short amount of time. Techniques of psychological testing had just begun to be developed at around the turn of the 20th century and those techniques were quickly applied to the US military effort. After the war, intelligence and personality tests were used by business organizations to help choose applicants most likely to succeed in their organizations. In addition, when the Great Depression occurred, vocational interest tests were used by government organizations to help the unemployed choose occupations that they might best succeed in.
The development of personnel and vocational tests was greatly influenced by the developing techniques of psychometric theory as well as general statistical theory. From the 1930s onward, significant advances in reliability and validity theory provided a framework for test developers to be able to develop tests and validate them. In addition, the civil rights movement within the United States, and particularly the Civil Rights Act of 1964, forced test developers to develop standards and procedures to justify test usage. This legislation and subsequent court cases ensured that psychologists would need to be involved deeply in personnel testing. Finally, testing in the 1990s onward was greatly influenced by technological advances. Computerization helped standardize administration and scoring of tests as well as opening up the possibility for multimedia item formats. The introduction of the internet and web-based testing also provided additional challenges and opportunities.
The History of Psychological Psychotherapy in Germany: The Rise of Psychology in Mental Health Care and the Emergence of Clinical Psychology During the 20th Century
Two different but related developments played an important role in the history of psychologists in the fields of mental health care in Germany during the 20th century. The first development took place in the field of applied psychology, which saw psychological professionals perform mental testing, engage in counseling and increasingly, in psychotherapy in practical contexts. This process slowly began in the first decades of the 20th century and included approaches from different schools of psychotherapy. The second relevant development was the emergence of clinical psychology as an academic sub-discipline of psychology. Having become institutionalized in psychology departments at German universities during the 1960s and 1970s, clinical psychology often defines itself as a natural science and almost exclusively focuses on cognitive-behavioral approaches. There are four phases of the growing relationship between psychology and psychotherapy in Germany in which the two developments were increasingly linked: first, the entry of psychology into psychiatric and psychotherapeutic fields from approximately 1900 until 1945; second, the rise of psychological psychotherapy and the emergence of clinical psychology after World War II until 1972, when the diploma-regulations in West Germany were revised; third, a phase of consolidation and diversification from 1973 until the pivotal psychotherapy law of 1999; and fourth, the shifting equilibrium as established profession and discipline up to the reform of the psychotherapy law in 2019. Overall, the emergence of psychological psychotherapy has not one single trajectory but rather multiple origins in the different and competing academic and professional fields of mental health care.
The history of concepts about the adult and that of research into adult constructs show progression from a simple characterization of growth to a variety of complex constructs that define the terrain. Originally, the term adult encompassed all species and events that had attained full physical maturation, a product connotation. Later, time and events (e.g., marriage, the birth of children) became proxies for adult development. The absence of consideration of adult development was compounded by the fact that, for much of the past, adults could not be observed in long-term individual evolution since lifetimes were not extensive.
In the 73 years of Psychological Abstracts, adults under various headings (e.g., adulthood, middle age) was referenced in a mere .01% of citations. The first mention of “adult” in a journal title was in 1994. Into the 21st century, although the exploration of various adult constructs abounds, the use of single terms (e.g., intelligence, wisdom) to describe multidimensional attributes leads to misunderstanding and reductionism. There is scant cross-construct analysis and, along with its parent discipline of psychology, analysis of adult development remains at the nascent descriptive level.
Looking at the two major constructs of adult personality and intelligence, personality has had the lion’s share of publications. An examination of trends in its analysis reveals that the constructs are defined in various ways, little in the way of socio-contextual appraisal has occurred, and, with respect to the appraisal of intelligence, motivation to perform is ill-examined.
Tara H. Abraham
The Macy Conferences on Cybernetics were a series of 10 interdisciplinary scientific meetings that took place in New York between 1946 and 1953. The meetings were sponsored by the Macy Foundation, which aimed to promote interdisciplinary approaches to the social, behavioral, and medical sciences. Co-organized by neuropsychiatrist Warren S. McCulloch and Frank Fremont-Smith, medical director of the Macy Foundation, the meetings brought together a variety of scientists from mathematics, psychology, engineering, anthropology, physics, ecology, psychiatry, neurophysiology, linguistics, and sociology. The conferences strove to apply tools from the physical sciences and mathematics to problems in the biological and human sciences. Such tools stemmed first from Norbert Wiener’s work on the anti-aircraft predictor, in which he employed the concept of negative feedback to explain purposeful behavior, and second from McCulloch’s work with Walter Pitts on the logic of neural activity, which purported to embody logical reasoning in the physiology of the brain. Wiener and McCulloch touted the practice of hypothetical modelling as a bridge over the divide between the natural and the artificial, and a method for explaining purposeful behavior in organisms and machines.
Discussions at the Macy Conferences expanded on this work, and participants discussed and debated models of cognitive functions such as sensation, communication, memory, and learning, all cast as functions of the mind and exemplars of purposeful behavior. Thus, the meetings signal a major shift in 20th-century psychology, when discussions of the mind took on a more central place in psychological discourse. Behaviorist psychologists in the early 20th century had largely rejected concepts of mind as unscientific and not objective. The Macy Conferences, in contrast, placed the mind at the nexus of interdisciplinary inquiry across the divide between the physical and human sciences, and helped to bring back the mind as a topic of objective, scientific inquiry in psychology and in the emerging cognitive sciences.
Vanessa L. Burrows
Stress has not always been accepted as a legitimate medical condition. The biomedical concept of stress grew from tangled roots of varied psychosomatic theories of health that examined (a) the relationship between the mind and the body, (b) the relationship between an individual and his or her environment, (c) the capacity for human adaptation, and (d) biochemical mechanisms of self-preservation, and how these functions are altered during acute shock or chronic exposure to harmful agents. From disparate 19th-century origins in the fields of neurology, psychiatry, and evolutionary biology, a biological disease model of stress was originally conceived in the mid-1930s by Canadian endocrinologist Hans Selye, who correlated adrenocortical functions with the regulation of chronic disease.
At the same time, the mid-20th-century epidemiological transition signaled the emergence of a pluricausal perspective of degenerative, chronic diseases such as cancer, heart disease, and arthritis that were produced not by a specific etiological agent, but by a complex combination of factors contributing to a process of maladaptation that occurred over time under the conditioning influence of multiple risk factors. The mass awareness of the therapeutic impact of adrenocortical hormones in the treatment of these prevalent diseases offered greater cultural currency to the biological disease model of stress.
By the end of the Second World War, military neuropsychiatric research on combat fatigue promoted cultural acceptance of a dynamic and universal concept of mental illness that normalized the phenomenon of mental stress. This cultural shift encouraged the medicalization of anxiety which stimulated the emergence of a market for anxiolytic drugs in the 1950s and helped to link psychological and physiological health. By the 1960s, a growing psychosomatic paradigm of stress focused on behavioral interventions and encouraged the belief that individuals could control their own health through responsible decision-making. The implication that mental power can affect one’s physical health reinforced the psycho-socio-biological ambiguity that has been an enduring legacy of stress ever since.
This article examines the medicalization of stress—that is, the historical process by which stress became medically defined. It spans from the mid-19th century to the mid-20th century, focusing on these nine distinct phases:
1. 19th-century psychosomatic antecedent disease concepts
2. the emergence of shell-shock as a medical diagnosis during World War I
3. Hans Selye’s theorization of the General Adaptation Syndrome in the 1930s
4. neuropsychiatric research on combat stress during World War II
5. contemporaneous military research on stress hormones during World War II
6. the emergence of a risk factor model of disease in the post–World War II era
7. the development of a professional cadre of stress researchers in the 1940s and 1950s
8. the medicalization of anxiety in the early post–World War II era
9. the popularization of stress in the 1950s, marked by the cultural assimilation of paradigmatic stress behaviors and deterrence strategies, as well as pharmaceutical treatments for stress
Nikos Ntoumanis, Cecile Thørgersen-Ntoumani, Eleanor Quested, and Nikos Chatzisarantis
Compelling evidence worldwide suggests that the number of physically inactive individuals is high, and it is increasing. Given that lack of physical activity has been linked to a number of physical and mental health problems, identifying sustainable, cost-effective, and scalable initiatives to increase physical activity has become a priority for researchers, health practitioners, and policymakers. One way to identify such initiatives is to use knowledge derived from psychological theories of motivation and behavior change. There is a plethora of such theories and models that describe a variety of cognitive, affective, and behavioral mechanisms that can target behavior at a conscious or an unconscious level. Such theories have been applied, with varying degrees of success, to inform exercise and physical activity interventions in different life settings (e.g., schools, hospitals, and workplaces) using both traditional (e.g., face-to-face counseling and printed material) and digital technology platforms (e.g., smartphone applications and customized websites). This work has offered important insights into how to create optimal motivational conditions, both within individuals and in the social environments in which they operate, to facilitate long-term engagement in exercise and physical activity. However, we need to identify overlap and synergies across different theoretical frameworks in an effort to develop more comprehensive, and at the same time more distinct, theoretical accounts of behavior change with reference to physical activity promotion. It is also important that researchers and practitioners utilize such theories in interdisciplinary research endeavors that take into account the enabling or restrictive role of cultural norms, the built environment, and national policies on physical activity.
Theoretical Perspectives on Age Differences in Brain Activation: HAROLD, PASA, CRUNCH—How Do They STAC Up?
Sara B. Festini, Laura Zahodne, and Patricia A. Reuter-Lorenz
Cognitive neuroimaging studies often report that older adults display more activation of neural networks relative to younger adults, referred to as overactivation. Greater or more widespread activity frequently involves bilateral recruitment of both cerebral hemispheres, especially the frontal cortex. In many reports, overactivation has been associated with superior cognitive performance, suggesting that this activity may reflect compensatory processes that offset age-related decline and maintain behavior. Several theories have been proposed to account for age differences in brain activation, including the Hemispheric Asymmetry Reduction in Older Adults (HAROLD) model, the Posterior-Anterior Shift in Aging (PASA) theory, the Compensation-Related Utilization of Neural Circuits Hypothesis (CRUNCH), and the Scaffolding Theory of Aging and Cognition (STAC and STAC-r). Each model has a different explanatory scope with regard to compensatory processes, and each has been highly influential in the field. HAROLD contrasts the general pattern of bilateral prefrontal activation in older adults with that of more unilateral activation in younger adults. PASA describes both anterior (e.g., frontal) overactivation and posterior (e.g., occipital) underactivation in older adults relative to younger adults. CRUNCH emphasizes that the level or extent of brain activity can change in response to the level of task demand at any age. Finally, STAC and STAC-r take the broadest perspective to incorporate individual differences in brain structure, the capacity to implement functional scaffolding, and life-course neural enrichment and depletion factors to predict cognition and cognitive change across the lifespan. 
Extant empirical work has documented that compensatory overactivation can be observed in regions beyond the prefrontal cortex, that variations in task difficulty influence the degree of brain activation, and that younger adults can show compensatory overactivation under high mental demands. Additional research utilizing experimental designs (e.g., transcranial magnetic stimulation), longitudinal assessments, greater regional precision, both verbal and nonverbal material, and measures of individual difference factors will continue to refine our understanding of age-related activation differences and adjudicate among these various accounts of neurocognitive aging.
Neil E. Rowland
Thirst is a specific and compelling sensation, often arising from internal signals of dehydration but modulated by many environmental variables. There are several historical landmarks in the study of thirst and drinking behavior. The basic physiology of body fluid balance is important, in particular the mechanisms that conserve fluid loss. The transduction of fluid deficits can be discussed in relation to osmotic pressure (osmoreceptors) and volume (baroreceptors). Other relevant issues include the neurobiological mechanisms by which these signals are transformed to intracellular and extracellular dehydration thirsts, respectively, including the prominent role of structures along the lamina terminalis. Other considerations are the integration of signals from natural dehydration conditions, including water deprivation, thermoregulatory fluid loss, and thirst associated with eating dry food. These mechanisms should also be considered within a broader theoretical framework of organization of motivated behavior based on incentive salience.
The idea that suppressing an unwanted thought results in an ironic increase in its frequency is accepted as psychological fact. Wegner’s ironic processes model has been applied to understanding the development and persistence of mood, anxiety, and other difficulties. However, results are highly inconsistent and heavily influenced by experimental artifact. There are a substantial number of methodological considerations and issues that may underlie the inconsistent findings in the literature. These include the internal and external validity of the paradigms used to study thought suppression, conceptual issues such as what constitutes a thought, and consideration of participants’ history with and motivation to suppress the target thought. Paradigms that study the products of failed suppression, such as facilitated recall and attentional deployment to thought-relevant stimuli, may have greater validity. It is argued that a shift from conceptualizing the persistence of unwanted thoughts as products of failed suppression and instead as internal threat stimuli may have merit.
Training comprises the systematic processes, initiated by the organization, that facilitate relatively permanent changes in the knowledge, skills, or affect/attitudes of organizational members. Cumulative meta-analytic evidence indicates that training is effective, producing, on average, moderate effect sizes. Training is most effective when designed so that trainees are active and encouraged to self-regulate during training, and when it is well-structured and requires effort on the part of trainees. Additional characteristics of effective training are: The purpose, objectives, and intended outcomes of training are clearly communicated to trainees; the training content is meaningful, and training assignments, examples, and exercises are relevant to the job; trainees are provided with instructional aids that can help them organize, learn, and recall training content; opportunities for practice in a safe environment are provided; feedback is provided by trainers, observers, peers, or the task itself; and training enables learners to observe and interact with others. In addition, effective training requires a prior needs assessment to ensure the relevance of training content and provides conditions to optimize trainees’ motivation to learn. After training, care should be taken to provide opportunities for trainees to implement trained skills, and organizational and social support should be in place to optimize transfer. Finally, it is important that all training be evaluated to ensure learning outcomes are met and that training results in increased job performance and/or organizational effectiveness.
Well-being is a core concept for individuals, groups, and societies. Greater understanding of trajectories of well-being in later life may contribute to the achievement and maintenance of well-being for as many as possible. This article reviews two main approaches to well-being, hedonic and eudaimonic well-being, and shows that it is not chronological age per se, but various factors related to age that underlie trajectories of well-being at older ages. Next to the role of genes, heritability, and personality traits, well-being is determined to a substantial extent by external circumstances and resources (e.g., health and social relationships), and by malleable individual behaviors and beliefs (e.g., self-regulatory ability and control beliefs). Although many determinants have been identified, it remains difficult to decide which of them are most important. Moreover, the role of some determinants varies for different indicators of well-being, such as positive affect and life satisfaction. Several prominent goal- and need-based models of well-being in later life are discussed, which explicate mechanisms underlying trajectories of well-being at older ages. These are the model of Selection, Optimization, and Compensation, the Motivational Theory of Lifespan Development, Socio-emotional Selectivity Theory, Ryff’s model of Psychological Well-Being, Self-Determination Theory, and Self-Management of Well-being theory. Also, interventions based on these models are reviewed, although not all of them address older adults. It is concluded that the literature on well-being in later life is enormous, and, together with various conceptual models, offers many important insights. Still, the field would benefit from more theoretical integration, and from more attention to the development and testing of theory-based interventions.
This remains a challenge for the science of well-being in later life, and could be an important contribution to the well-being of a still growing proportion of the population.
The history of psychology is characterized by unparalleled complexity of its methodology and uniquely ambiguous subject matter closely entangled with issues of power, social justice, and ethics. This complexity requires inordinate levels of reflexivity and conceptual sophistication. In effect, a historian of psychology needs to explicate no less than one’s worldview—a broad position as to how people are situated in the world, relate to, change, and get to know it, and how knowledge develops through time—all coupled with one’s broad sociopolitical ethos. Traditional histories of psychology have operated with an astonishing lack of reflection about these issues. One of many deplorable results is that psychology still grapples with its racist and sexist legacies and lacks awareness of social injustices in existence today. The recently emerging approaches have begun to remedy this situation by focusing on situated practices of knowledge production. This article addresses how human agency can be integrated into these approaches, while focusing on knowledge production as not only situated in context but also, and critically, as a world-forming and history-making process. In tackling the shortcomings of relational approaches including social constructionism, the transformative activist stance approach draws on Marxist philosophy and epistemology—infused with insights from Vygotsky’s psychology and other critical theories of resistance. The core point is that knowledge is achieved in and through collaborative community practices realized by individually unique contributions as these come to embody and enact, in an inseparable blend, both cultural-historical contexts and unique commitments and agency of community members. The acts of being-doing-knowing are non-neutral, transformative processes that produce the world, its history and also people themselves, all realized in the process of taking up the world, rather than passively copying it or coping with it. 
And since reality is in-the-making by people themselves, knowing is about creating the world and knowing it in the very act of bringing about transformative and creative change. Thus, the historicity and situativity of knowledge are ascertained alongside a focus on its ineluctable fusion with an activist, future-oriented, political-ethical stance. Therefore, the critical challenge for the history of psychology is to understand producers of knowledge in their role of actors in the drama of life (rather than only of ideas), that is, as agents of history- and world-making, while also engaging in self-reflection on the historians’ own role in these processes, in order to practice history in responsive and responsible, that is, activist ways.
Craig D. Parks
A social dilemma is a situation of interdependence between people in which there is conflict between doing what is best for oneself and doing what is best for the group: Trying to produce the best personal outcome (selfishness) hurts the group effort, and contributing to the group effort (cooperation) leads to a less-than-optimal personal outcome. The best personal outcome is realized by acting for oneself when everyone else acts for the group. Because of this, if each group member does what is best for himself or herself, the group will fail, and each person will end up with a poor outcome. Solving a social dilemma thus requires that at least some people forgo selfish interest in favor of the collective. Research into social dilemmas is primarily oriented around identifying the influences on a person’s willingness to cooperate and designing interventions that will encourage more frequent cooperation. There are many real-world examples of social dilemmas: clean air, charities, public broadcasting, and groundwater, to name a few.
Nicola D. Ridgers and Samuel K. Lai
Commercially available wearable activity trackers are small, non-invasive electronic devices that are worn on the body for the purposes of monitoring a range of outcomes including steps, energy expenditure, and sleep. These devices utilize sensors to track movement, and these recorded data are provided to the user via a visual display on the device itself and/or by syncing the device with an accompanying app or web-based program. Together, these devices and accompanying apps incorporate a broad range of established behavior change techniques, including self-monitoring, goal setting, and social support. In recent years, wearable activity trackers have become increasingly popular, and the growth in ownership within different populations has occurred at an exponential rate. This growth in appeal has led to researchers and practitioners examining the validity and reliability of wearable activity trackers for measuring a range of outcomes and integrating the results into physical activity promotion strategies. Acceptable validity has been reported for measuring steps, and moderate validity for measuring energy expenditure. However, little research has examined whether wearable activity trackers are a feasible and effective method for changing physical activity behaviors in the short- and longer-term, either alone or in combination with additional strategies. Some initial results are promising, though concerns have been raised over longer-term use and impacts on motivation for physical activity. There is a need for research examining the longer-term use of wearable activity trackers in different population groups, and establishing whether this technology has any positive effects on physical activity levels.
Various self-concepts constitute major keywords in both psychological science and liberal political discourse. They have been central to psychology’s public-facing, policy-oriented role in the United States, dating back to the mid-19th century. Psychologists’ articulations of self-concept include an understanding of the individual, society, and the interventions needed to augment them both. Psychologists’ early enthusiasm for self-esteem has given way to competing concepts of the individual, namely self-regulation and self-control. Self-esteem in a modern sense coalesced out of the deprivation of the Great Depression and the political crises it provoked. The fate of self-esteem became tied to the capacities of the liberal welfare state to improve the psychic capacities of its citizens, in order to render them both more equal under the law and more productive in their daily existence. Western democracies, especially the United States, hit peak self-esteem in the early 1990s. Since then, psychologists have lost faith in the capacity of giving away self-worth to improve society. Instead, psychologists in the 21st century have preached a neo-Victorian gospel of self-reliance. At the very historical juncture when social mobility became more difficult, when inherited social inequality became more entrenched, psychologists abandoned their Keynesian model of human capital and embraced its neoliberal counterpart.