There has been an enormous expansion during the early 21st century in psychological research on topics relating to bilingualism, paralleling developments in other fields of psychology that investigate the interface between experience and the mind. These issues reflect the view that brains and minds remain plastic and can be modified by experience throughout life. In the case of bilingualism, a central question is whether bilingual experience modifies cognitive systems in general and, more specifically, whether it improves cognitive ability and executive functioning. The research has produced contradictory results, in some cases supporting a beneficial effect on cognition and in some cases indicating no effect. Crucially, there is essentially no research indicating that bilingualism is associated with poorer cognitive outcomes than those found for monolinguals. Studies showing a positive role for bilingualism in cognitive outcomes have been reported across the life span. Early research with children in the first half of the 20th century concluded that bilingualism was detrimental to children’s intelligence, a claim that has been thoroughly refuted and replaced with evidence identifying specific cognitive processes that are more advanced in bilingual than in monolingual children. A few studies have even reported better attentional control, the foundation of executive functioning, for infants in the first year of life being raised in bilingual homes than for those in monolingual environments. Young adults frequently show no behavioral differences between language groups when performing executive function tasks, but neuroimaging (electrophysiology or brain imaging) consistently indicates that monolinguals and bilinguals use different brain regions and different degrees of effort to perform these tasks.
The clearest language group differences, however, occur in older age where evidence for cognitive reserve from bilingualism is found most clearly in the postponement of symptoms of dementia. Therefore, it is necessary to analyze the factors that mediate these effects, notably, the nature of bilingual experience and the details of the cognitive task being used. The conclusion is that bilingualism is complex but there is evidence for a consistent and systematic impact on cognitive systems.
Neal M. Ashkanasy and Agata Bialkowski
Beginning in the 1980s, interest in studying emotions in organizational psychology has been on the rise. Prior to 2003, however, researchers in organizational psychology and organizational behavior tended to focus on only one or two levels of analysis. Ashkanasy argued that emotions are more appropriately conceived of as spanning all levels of organizational analysis, and introduced a theory of emotions in organizations that spans five levels of analysis. Level 1 of the model refers to within-person temporal variations in mood and emotion, which employees experience in their everyday working lives. Level 2 refers to individual differences in emotional intelligence and trait affectivity (i.e., between-person emotional variables). Level 3 relates to the perception of emotions in dyadic interactions. Level 4 relates to the emotional states and processes that take place between leaders and group members. Level 5 involves organization-wide variables. The article concludes with a discussion of how, via the concept of emotional intelligence, emotions at each level of the model form an integrated picture of emotions in organizational settings.
German E. Berrios and Ivana S. Marková
Writing the history of mental disorders is an unfinishable task. Each historical period is expected to write its own, and in a style designed to satisfy its own conceptual and social needs. In the 21st century such a historical account seems to be one that conceives of mental disorders as natural kinds, that is, as entities that, for their meaning and ontology, must be related to a brain change. However, being aware that, after all, concepts are just instruments in the hands of humans opens up the possibility of writing a more comprehensive history of mental disorders, one based on their historical epistemology, that is, on the manner in which madness has been culturally reconfigured throughout the ages. This approach should be more fruitful in regard to finding ways of helping people with mental sufferings, a task which is about the only justification for the existence of the discipline called psychiatry.
Projective psychodiagnostics refers to the use of psychological instruments through which the subject is asked to respond to a set of ambiguous (though often suggestive) stimuli, thereby “projecting” aspects of their personality into these responses. The most prominent of these instruments is the Rorschach Inkblot Technique, in which the subject is confronted with ten inkblots and is asked what these stimuli look like, and then what perceptual features make them look that way. Another common projective technique is the Thematic Apperception Test (TAT), a storytelling exercise in which the subject responds with a narrative to a series of ambiguous but sometimes highly charged black-and-white pictures depicting human interactions. Over time, new pictures have been developed for similar storytelling instruments targeted to children (the Children’s Apperception Test) or different ethnic populations. Both of these tests emerged under the influence of psychodynamic theories, and of the work of Carl Jung, whose Word Association Test served as a projective measure of psychological conflicts. Finally, there is a series of drawing tests which, while less commonly used, have had a projective history, including human figure drawings, the Bender–Gestalt Test, and the Wartegg Drawing Completion Test. Projective instruments have been used in a variety of psychiatric settings and have been criticized for being insufficiently grounded in either quantitative measures or scientific validity. The Rorschach has emerged with increasingly statistically based scoring systems addressing perceptual features, language, and content in the assessment of risk and diagnosis. The TAT is essentially a structured interview (since most scoring systems are not used by clinicians), but it nonetheless appears to be useful in gleaning information about a subject’s relationships with other people.
Drawing tasks and sentence completion tests (derived from word association tests) are less commonly used, though more prevalent with children whose verbal abilities may be more limited. In general, projective tests appear to have some limited ability to define diagnosis and risk (and can be especially helpful in defining thought disorder and prognosis), but they may be most useful in helping clinicians obtain a deeper picture of conflicts and resources within the person tested.
Priscila G. Brust-Renck, Rebecca B. Weldon, and Valerie F. Reyna
Everyday life comprises a series of decisions, from choosing what to wear to deciding what major to declare in college and whom to share a life with. Modern-era economic theories were first brought into psychology in the 1950s and 1960s by Ward Edwards and Herbert Simon. Simon suggested that individuals do not always choose the best alternative among the options because they are bounded by cognitive limitations (e.g., memory). People who choose the good-enough option “satisfice” rather than optimize, because they are bounded by their limited time, knowledge, and computational capacity. Daniel Kahneman and Amos Tversky were among those who took the next step by demonstrating that individuals are not only limited but are inconsistent in their preferences, and hence irrational. Describing a series of biases and fallacies, they elaborated intuitive strategies (i.e., heuristics) that people tend to use when faced with difficult questions (e.g., “What proportion of long-distance relationships break up within a year?”) by answering based on simpler, similar questions (e.g., “Do instances of swift breakups of long-distance relationships come readily to mind?”). More recently, the emotion-versus-reason debate has been incorporated into the field as an approach to how judgments can be governed by two fundamentally different processes, such as intuition (or affect) and reasoning (or deliberation). A series of dual-process approaches by Seymour Epstein, George Loewenstein, Elke Weber, Paul Slovic, and Ellen Peters, among others, attempt to explain how a decision based on emotional and/or impulsive judgments (i.e., system 1) should be distinguished from those that are based on a slow process that is governed by rules of reasoning (i.e., system 2).
Valerie Reyna and Charles Brainerd and other scholars take a different approach to dual processes and propose a theory—fuzzy-trace theory—that incorporates many of the prior theoretical elements but also introduces the novel concept of gist mental representations of information (i.e., essential meaning) shaped by culture and experience. Adding to processes of emotion or reward sensitivity and reasoning or deliberation, fuzzy-trace theory characterizes gist as insightful intuition (as opposed to crude system 1 intuition) and contrasts it with verbatim or precise processing that does not consist of meaningful interpretation. Some of these new perspectives explain classic paradoxes and predict new effects that allow us to better understand human judgment and decision making. More recent contributions to the field include research in neuroscience, in particular from neuroeconomics.
Hannah S. Decker
The Diagnostic and Statistical Manual of Mental Disorders (DSM-III), the third diagnostic manual of the American Psychiatric Association (APA), was mainly a response to the vehement, insistent, and often persuasive antipsychiatry movement that had developed in the 1960s and 1970s. Coming from a number of directions, sociologists, lawyers, judges, social critics, and even some psychiatrists themselves challenged the medical model of psychiatry, the involuntary commitment of patients to mental hospitals, the “warehousing” of patients in hospitals without receiving effective treatment, and even whether patients with mental disorders had any illness at all. Additionally, psychiatrists were accused by some authors of “controlling” people to accrue power over them. Psychiatry as a profession was thrown on the defensive. The publication of an article in the prestigious journal Science in 1973 charging—through seemingly inspired experiments—that psychiatrists could not even diagnose a mentally ill patient created a sensation. This was the last straw for the beleaguered APA. Though only five years had passed since the last revision of the DSM, and little had changed, the Board of Trustees of the APA commissioned a revision that would show that psychiatry was a legitimate medical and scientific endeavor and thus counter the attacks of the antipsychiatry movement. The irony here is that in 2019, the Science article was shown to be in large part fraudulent. DSM-III turned out to be not a revision but a large, brand-new manual based solely on observable signs and symptoms, the “diagnostic criteria.” It upended the diagnosis and treatment of mental disorders in North America and in many other places as well. The Task Force that produced the manual was led by Robert Spitzer, a talented and energetic man, with an empirical bent, who never shied away from a fight.
The Task Force he led shared his empiricism, and many of its members were determinedly antipsychoanalytic. There is no doubt that DSM-III helped to dethrone psychoanalysis as a leading method of thought and treatment in North America. Analysts had relied heavily on the diagnosis of neurosis, which Spitzer removed from the manual. Spitzer and the Task Force were strongly supported in their decisions by Melvin Sabshin, the APA’s new medical director, who himself wanted to rid psychiatry of “ideology” and promote the profession more clearly as scientific and medical. The manual itself featured many new diagnoses because Spitzer wanted to include diagnoses that were important to clinicians. Thus, he prized reliability (psychiatrists agreeing on the same diagnosis) over validity (the accuracy of the diagnosis). A positive feature of DSM-III was its five-pronged diagnostic system, which, if used properly and completely, helped psychiatrists arrive at a deeper knowledge of their patients, as well as a more accurate prognosis. On the other hand, relying solely on diagnostic criteria encouraged some clinicians to practice a relatively quick “checklist” psychiatry instead of taking time to understand patients as human beings in all their complexity. Another shortcoming was the strict categorical approach of the diagnostic system, which often led to comorbidity or “not elsewhere specified” diagnoses. Nevertheless, since the appearance of DSM-III, the DSMs have achieved an outsized influence over many key areas of life.
Alan C. Tjeltveit
How has ethics been connected with the science and profession of psychology? Has ethics been essential to psychology? Or have psychologists increasingly developed objective psychological understandings free of ethical biases? Is ethics in psychology limited to research ethics and professional ethics? Understanding the various connections between ethics and psychology requires conceptual clarity about the many meanings of ethics and related terms (such as moral, ideal, and flourishing). Ethics has included, but goes beyond, research and professional ethics, since ideas about what is good or bad, right or wrong, obligatory or virtuous have shaped psychological inquiry. In moral psychology, psychologists have sought to understand the psychology of ethical dimensions of persons, such as prejudice or altruism. Some psychologists have worked to minimize ethical issues in psychology in general, but others have embraced psychologies tied to ethical visions, like advancing social justice. Many ethical issues (beyond professional ethics) have also been entangled in professional practice, including understanding the problems (“not good” states of affairs) for which clients seek help and the (“good”) goals toward which psychologists helped people move. Cutting across the various ways ethics and psychology have been interconnected is an enduring tension: Although psychologists have claimed expertise in the science of psychology and in the provision of psychological services, they have had no disciplinary expertise that equips them to determine what is good, right, obligatory, and virtuous, despite the fact that ethical issues have often been deeply intertwined with psychology.
Feminist psychology as an institutionalized field in North America has a relatively recent history. Its formalization remains geographically uneven and its institutionalization remains a contested endeavor. Women’s liberation movements, anticolonial struggles, and the civil rights movement acted as galvanizing forces in bringing feminism formally into psychology, transforming not only its sexist institutional practices but also its theories, and radically challenging its epistemological and methodological commitments and constraints. Since the late 1960s, feminists in psychology have produced radically new understandings of sex and gender, have recovered women’s history in psychology, have developed new historiographical methods, have engaged with and developed innovative approaches to theory and research, and have rendered previously invisibilized issues and experiences central to women’s lives intelligible and worthy of scholarly inquiry. Heated debates about the potential of feminist psychology to bring about radical social and political change are ongoing as feminists in the discipline negotiate threats and dilemmas related to collusion, colonialism, and co-optation in the face of ongoing commitments to positivism and individualism in psychology and as the theory and practice of psychology remains embedded within broader structures of neoliberalism and global capitalism.
John C. Gibbs
Males and females differ—but only moderately—in moral judgment and morally relevant social behavior such as caring for others and aggression. Females more frequently use care-related concerns in their moral judgment. Research has to some extent supported traditional stereotypes of males as more assertive or independent (agency) and females as more relational or affiliative (communion). Males are on average more aggressive than females even after relational aggression is taken into account. In the expression of empathy and prosocial behavior, situational context plays a larger role for males than females. Males’ gender tendencies have been characterized as instrumental (“report talk,” object oriented, etc.) and females’ as socially and emotionally expressive (“rapport talk,” people oriented, etc.). In social relationships, adolescent girls generally engage in more intimate self-disclosure and active listening, provide more emotional support to one another, and emphasize affiliation and collaboration. Both biological and social experiential or cultural factors are involved in the formation of these morally relevant gender differences. Although average gender-linked differences in emphasis remain evident, a blend of instrumental and expressive characteristics may contribute to optimal morality for both genders. Sandra Bem termed the mixture of expressive (traditionally feminine) and instrumental (traditionally masculine) attributes in gender style “androgyny.” Highly androgynous adolescents and adults of both genders evidence more mature moral judgment and more adequate mental health.
Jean Piaget (1896–1980) is known for his contributions to developmental psychology and educational theory. His name is associated especially with Stage Theory. The widespread belief that he focused solely on cognitive development, however, does not reflect what he actually did. It is instead the result of the popularization of his writings in the United States during the Cold War, a period of crisis and subsequent education reform. The overpowering influence of those interests blinded us to his larger framework, which he called “genetic epistemology,” and of which his stages were just a part. To address the resulting and continuing misunderstandings, this essay presents original historical scholarship—distilling over a thousand pages of archival documents (correspondence, diary entries, budgets, and reports)—to provide an insider’s look at Piaget’s research program from the perspective of the Rockefeller Foundation: genetic epistemology’s primary funding agency in the United States from the mid-1950s through the early 1960s. The result is an examination of how a group of interested Americans came to understand Piaget’s writings in French in the period just prior to their wider popularization in English, as well as of how Piaget presented himself and his ideas during the reconstruction of Europe after World War II. My goal, however, is not to summarize the whole of this misunderstood program. Instead, I aim to provide a source of archivally grounded perspective that will allow for new insights about the Genevan School that are unrelated to American Cold War interests. In the process, we also derive new means to see how Piaget’s experimental examinations of the development of individual knowledge served to inform his team’s investigations of the evolution of science (and vice versa).