Creativity is perhaps what most differentiates humans from other species. Understanding creativity is particularly important in times of accelerated cultural and environmental change, such as the present, in which novel approaches and perspectives are needed. The study of creativity is an exciting area that brings together many different branches of research: cognitive psychology, social psychology, personality psychology, developmental psychology, organizational psychology, clinical psychology, neuroscience, mathematical models, and computer simulations. The creative process is thought to involve the capacity to shift between divergent and convergent modes of thought in response to task demands. Divergent thought is conventionally characterized as the kind of thinking needed for open-ended tasks, and it is measured by the ability to generate multiple solutions, while convergent thought is commonly characterized as the kind of thinking needed for tasks in which there is only one correct solution. More recently, divergent thought has been conceived of as reflecting on the task from unconventional contexts or perspectives, while convergent thought has been conceived of as reflecting on it from conventional contexts or perspectives. Personality traits correlated with creativity include openness to experience, tolerance of ambiguity, impulsivity, and self-confidence. Evidence that creativity is linked with affective disorders is mixed. Neuroscientific research on creativity using electroencephalography (EEG) or functional magnetic resonance imaging (fMRI) suggests that creativity is associated with a loosening of cognitive control and decreased arousal. It has been shown that the distributed, content-addressable structure of associative memory is conducive to bringing task-relevant items to mind without the need for explicit search. 
Tangible evidence of human creativity dates back to the earliest stone tools devised over three million years ago, with the Middle-Upper Paleolithic marking the onset of art, science, and religion, and another surge of creativity in the present. Past and current areas of controversy concern the relative contributions of expertise, chance, and intuition, whether the emphasis should be on process versus product, whether creativity is a domain-specific or a domain-general function, the extent to which creativity is correlated with affective disorders, and whether divergent thinking entails the generation of multiple ideas or the honing of a single initially ambiguous mental representation that may manifest as different external outputs. Promising areas for further psychological study of creativity include computational modeling, research on the biological basis of creativity, and studies that track specific creative ideation processes over time.
Martin J. Packer and Michael Cole
There is growing appreciation of the role of culture in children’s psychological development (also called human ontogenesis). However, there are several distinct approaches to research on this matter. Cross-cultural psychology explores the causal influence of culture on differences in children’s development, treated as dependent variables. Researchers interested in the role of cultural learning in human evolution view culture as beliefs and values that are transferred from the mind of one individual to that of another.
By contrast, “cultural psychology” views culture not as a cause but as a constituent of human psychological functioning. It invites us to pay attention to the fact that humans live in societies filled with material artifacts, tools, and signs that mediate human activity; that is to say, they provide the means with which people interact with the world around them and with one another. From this perspective, culture provides constituents that are essential to human development: it has a constitutive role in development.
Although there continues to be much debate over how to define culture, it is generally agreed that different human social groups have distinct cultures, and it is common to assume that cultural differences lead to differences in the trajectories of children’s development. This is true, but it is also the case that culture is a universal requirement for development. Every child is born into a family and community with a language, customs, and conventions, and in which people occupy institutional roles with rights and responsibilities. These facts define universal requisites of human psychological development and include the acquisition of language, the development of a social identity, the understanding of community obligations, and the ability to contribute to the reproduction of the community. The interdependence of human communities—which probably had its origins in collaborative foraging and cooperative childrearing—seems to have placed species-specific demands on children’s development, selecting for the capacity to acquire a sensitivity not only to people’s goals and intentions but also to rights and responsibilities.
Joanne R. Smith
As social animals, humans are strongly influenced by the opinions and actions of those around them. Group norms are the expectations and behaviors associated with a social group, such as a nationality, an organization, or a sports team. Group norms can emerge during group interaction as group members are exposed to the opinions, or observe the actions, of fellow group members. Group norms can also emerge through comparison of the attitudes and actions of the group with those of other groups. Leaders can also influence what is seen as acceptable behavior for group members to exhibit.
One of the most dominant approaches to the study of group norms is the social identity approach. The social identity approach proposes that belonging to a social group provides individuals with a definition of who they are, and a description and prescription of what is involved in being a group member. A large body of research has confirmed the power of group norms to determine the form and direction of group members’ attitudes and actions, particularly among individuals strongly attached to the group, across many behavioral domains.
In thinking about group norms, it is important to recognize that norms have both prescriptive (i.e., what should be done) and descriptive (i.e., what is done) elements. Research has found that group norms are most influential when aligned, but that misaligned or conflicting norms—either within the group or across multiple groups to which an individual belongs—can be particularly harmful in terms of engagement in a desired behavior. It is critical to appreciate and understand these complexities to be able to change group norms and, therefore, group members’ actions.
The insight that group norms are powerful determinants of behavior has been incorporated into behavior change interventions, including so-called “nudge” interventions. However, norms-based campaigns are not always successful, and can even lead to backlash effects, often because change agents have failed to consider identity-related processes, such as the role of leaders, the source of the influence attempt, and threats arising from attempts to change one’s group. Shared identity is a key mechanism through which people internalize (new) understandings of what it means to be a group member into the self-concept, and understanding these processes may lead to more enduring change in underlying motives, beliefs, and behavior.
Neil E. Rowland
Hunger is a specific and compelling sensation, sometimes arising from internal signals of nutrient depletion but more often modulated by numerous environmental variables including taste or palatability and ease or cost of procurement. Hunger motivates appetitive or foraging behaviors to find food, followed by appropriate proximate or consummatory behaviors to eat it. A critical concept underlying food intake is the flux of chemical energy through an organism. This starts with inputs of food with particular energy content, storage of excess energy as adipose tissue or glycogen, and finally energy expenditure as resting metabolic rate (RMR) or as metabolic rate modified by physical activity. These concepts are relevant within the context of adequate theoretical accounts based on energy homeostasis; historically, these have been mainly static models, although it is now clear that such models do not address practical issues such as weight gain through life. Eating is essentially an episodic behavior, often clustered as meals, and this has led to the idea that the meal is a central theoretical concept, but demonstrations that meal patterns are greatly influenced by the environment present a challenge to this tenet. Patterns of eating acquired during infancy and early life may also play a role in establishing adult norms. Direct controls of feeding are those that emphasize food itself as generating internal signals to modify or terminate an ongoing bout of eating, and include a variety of enteroendocrine hormones and brainstem mechanisms. Additionally, many studies point to the essential rewarding or hedonic aspects of food intake, including palatability, and this may involve integrative mechanisms in the forebrain and cerebral cortex.
Thomas F. Pettigrew
Intergroup attribution refers to the causal attributions that people make about the behavior of out-groups and their own in-group. Attribution theory began in the late 1950s and 1960s. This initial interest was limited to how individuals causally interpreted the behavior of other individuals, but in the 1970s social psychologists began to consider causal attributions made about groups. The guiding theory for research in this area has been largely structured by the predictions of the ultimate attribution error (more accurately described as the intergroup attribution bias). Its principal contentions flow from phenomena already uncovered by attribution research on individual behavior. It holds that group attributions, especially among the highly prejudiced, will be biased for the in-group and against out-groups. In-group protection (explaining away negative in-group behavior as situationally determined: “given the situation, we had to act that way”) is typically a stronger effect than in-group enhancement (accepting positive in-group behavior as dispositionally determined: “as a people, we are kind and compassionate toward other groups”). Many moderators and mediators of the effect have been uncovered. Asian cultures, for example, tend to be less prone to the intergroup attribution bias, while strong emotions can induce either more or less of the bias. Similarly, empathy and special training can significantly reduce the bias. Together with such closely related processes as the fundamental attribution error and actor-observer asymmetry, the intergroup attribution bias has proven highly useful in a great variety of applications. Moreover, the intergroup attribution bias serves as an integral component of the intergroup prejudice syndrome.
Mind cure, or mental healing, was a late 19th-century American healing movement that extolled a metaphysical mind-over-matter approach to the treatment of illness. Emerging in New England in the mid-19th century out of a mix of mesmerism and metaphysical philosophies, it achieved national recognition by the 1880s, owing to its perceived effectiveness. Three individuals are credited with creating and popularizing mental (or metaphysical) healing: Phineas Parkhurst Quimby, Warren Felt Evans, and Mary Baker Eddy. Mind cure was appealing because it helped treat ailments for which the medicines of the day were ineffective, especially problems with the “nerves.” Mental healers employed non-invasive mental and spiritual methods, called mental therapeutics, to treat ailing people. As a practice and therapeutic philosophy, mind cure is historically noteworthy because it shaped the earliest forms of psychotherapy in the United States, advanced therapeutic work within the realm of mind-body medicine, birthed the influential New Thought movement, and helped set the stage for the beginnings of religious pluralism and the positive reception of Asian meditation teachers in the West.
Andrew S. Winston
The use of psychological concepts and data to promote ideas of an enduring racial hierarchy dates from the late 1800s and has continued to the present. The history of scientific racism in psychology is intertwined with broader debates, anxieties, and political issues in American society. With the rise of intelligence testing, joined with ideas of eugenic progress and dysgenic reproduction, psychological concepts and data came to play an important role in naturalizing racial inequality. Although racial comparisons were not the primary concern of most early mental testing, results were employed to justify beliefs regarding Black “educability” and the dangers of Southern and Eastern European immigration. Mainstream American psychology became increasingly liberal and anti-racist in the late 1930s and after World War II. However, scientific racism did not disappear and underwent renewal during the civil rights era and again during the 1970s and 1990s. Intelligence test scores were a primary weapon in attempts to preserve segregated schools and later to justify economic inequality. In the cases of Henry Garrett, Arthur Jensen, and Philippe Rushton, their work included active, public promotion of their ideas of enduring racial differences, and involvement with publications and groups under the control of racial extremists and neo-Nazis. Despite 100 years of strong critiques of scientific racism, a small but active group of psychologists helped revive vicious 19th-century claims regarding Black intelligence, brain size, morality, criminality, and sexuality, presented as detached scientific facts. These new claims were used in popular campaigns that aimed to eliminate government programs, promote racial separation, and increase immigration restriction. This troubling history raises important ethical questions for the discipline.
Susan C. Baker, Bernadette M. Watson, and Cindy Gallois
Language is a social behavior and a key aspect of social interaction. Language is ubiquitous and usually occurs with other human behaviors across diverse contexts. Thus, it is difficult to study it in isolation. This difficulty may be why most, albeit not all, social psychologists tend to neglect language, in spite of the prominence of language in early 20th century social psychology and the presence of numerous handbooks and reviews of this area. Language use has implications for many social psychological processes, and, given its role in daily social life, it is important to understand its social underpinnings. The field of language and social psychology highlights the relationship between language and communication and foregrounds the differences between the social-psychological and communication approaches. One central issue is bilingualism and the relationships among language, identity, and culture. Another is methodology, where social psychologists have tended to choose experimental and survey strategies to look at language (not always to the best advantage). This century has seen the development of new technologies that allow us to look at language on a large scale and in rich detail and that have the potential to transform this research. In part as a consequence, in the early 21st century there are many new topics emerging in language and social psychology that help to set a new agenda for future research.
The History of Psychological Psychotherapy in Germany: The Rise of Psychology in Mental Health Care and the Emergence of Clinical Psychology During the 20th Century
Two different but related developments played an important role in the history of psychologists in the fields of mental health care in Germany during the 20th century. The first development took place in the field of applied psychology, which saw psychological professionals perform mental testing, engage in counseling, and, increasingly, practice psychotherapy in applied contexts. This process began slowly in the first decades of the 20th century and included approaches from different schools of psychotherapy. The second relevant development was the emergence of clinical psychology as an academic sub-discipline of psychology. Having become institutionalized in psychology departments at German universities during the 1960s and 1970s, clinical psychology often defines itself as a natural science and focuses almost exclusively on cognitive-behavioral approaches. There are four phases of the growing relationship between psychology and psychotherapy in Germany in which the two developments were increasingly linked: first, the entry of psychology into psychiatric and psychotherapeutic fields from approximately 1900 until 1945; second, the rise of psychological psychotherapy and the emergence of clinical psychology after World War II until 1972, when the diploma regulations in West Germany were revised; third, a phase of consolidation and diversification from 1973 until the pivotal psychotherapy law of 1999; and fourth, a shifting equilibrium as an established profession and discipline up to the reform of the psychotherapy law in 2019. Overall, the emergence of psychological psychotherapy has not one single trajectory but rather multiple origins in the different and competing academic and professional fields of mental health care.
Robert J. Sternberg
Intelligence needs to be understood in the cultural contexts in which it is displayed. For one thing, people in different cultures have different conceptions (implicit theories) of what intelligence is. Asian and African cultures tend to have broader and more encompassing views of intelligence than do Western cultures. Asians and Africans place less emphasis on mental speed and more emphasis on social and emotional aspects of behavior, as well as on wisdom. These implicit theories are important because in everyday life, people’s behavior is guided not so much by scores on standardized or other tests but rather by people’s implicit theories. For example, hiring and promotion decisions are usually based on such implicit theories, not on test scores.
Studies of performances by people, especially children, in different cultures suggest that the strengths of individuals across cultures are not necessarily well represented by conventional intelligence tests. For example, in some cultures, knowledge of herbal medications used to combat parasitic illnesses, or knowledge of hunting and gathering, or knowledge of how to ice fish effectively, can be a better indicator of intelligence than scores on a standardized test. Eskimo children may know how to navigate across the frozen tundra in the winter without obvious landmarks, yet they may not be able to attain high scores on conventional intelligence tests. Some of those who would score highly on such tests would be unable to do such navigation, to their peril.
There is no such thing as a culture-free test of intelligence, and there probably is no test that is genuinely culture-fair either. At best, tests should be culture-relevant, measuring the cognitive and other skills relevant to adapting effectively to particular cultures. These skills are likely to be partially but not fully overlapping across cultures. Thus, intelligence needs to be understood in its cultural contexts, not divorced from such contexts.