Cost-benefit analysis (CBA) is a widely used economic appraisal method that aims to support politicians in making decisions about projects and policies. Several researchers have tried to uncover the extent to which CBA actually impacts decision-making by investigating the statistical relation between the results of CBA studies and political decisions. Although these studies show no significant statistical relation between the outcomes of CBA studies and political decisions, there is clear evidence that the institutionalization of CBA affects the planning and decision-making process within the bureaucracy. Civil servants, for instance, use CBAs to improve the design of government projects in the early phases of the planning process. The literature identifies various barriers that hamper politicians’ use of CBA when forming their opinions. First, politicians often receive the results of CBA studies too late in the process. When politicians receive a CBA after they have already made up their minds and communicated their viewpoints, the chance is low that its results will (substantially) influence their decision. A second important barrier is that politicians do not have enough trust in CBA’s impartiality. A third barrier is that politicians contest value judgments implicit in CBA.
The literature distinguishes six ideological value judgments that inevitably need to be made when conducting a CBA: (a) Which individuals have standing in a CBA? (b) Which preferences have standing in a CBA? (c) Which procedure is used to value impacts? (d) On which dimensions are standard numbers differentiated? (e) Which weight is assigned to preferences of individuals in the social welfare function? (f) Which approach is adopted to select the social discount rate? Because CBA analysts cannot escape making value judgments when conducting the study, CBA is currently a problematic tool for democratic decision-making: when applied in practice, the analysis rests on a specific set of politically loaded premises that fosters the interests of politicians who endorse these premises and damages the interests of those who do not. It is possible to overcome this problem by informing politicians about the extent to which switching value judgments leads to different CBA outcomes. The introduction of so-called normative sensitivity analyses ensures that politicians with different belief systems are equally equipped to use the results of a CBA to arrive at a well-founded evaluation of a government project.
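The logic of a normative sensitivity analysis can be illustrated with a toy computation. All cash flows and discount rates below are invented for illustration; the point is only that switching a single value-laden premise, here the social discount rate, can flip the CBA verdict.

```python
# Toy normative sensitivity analysis: recompute a project's net present
# value (NPV) under two different social discount rates.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: investment of -100; years 1-10: benefits of 13 per year.
project = [-100] + [13] * 10

for rate in (0.03, 0.07):
    verdict = "beneficial" if npv(project, rate) > 0 else "not beneficial"
    print(f"discount rate {rate:.0%}: NPV = {npv(project, rate):.1f} ({verdict})")
```

With these invented numbers, the lower discount rate yields a positive NPV and the higher one a negative NPV, so politicians who weigh future generations differently would reach opposite conclusions from the same underlying data.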
The politics of crisis terminology is rarely examined directly. Crisis is an “umbrella,” under which resides a multitude of terms such as accidents, emergencies, fiascos, disasters, and catastrophes, as well as variations such as natural disasters, transboundary crises, and mega-crises. Yet the sheer diversity and frequent ambiguity among terms reflects the “politics” of how societies and political actors seek to cope with and address extreme events, which often pose a mixture of threat and opportunity. Central to an understanding is how (a) different terms are means of framing issues such as the scale and causes of the crisis, (b) crisis terms are part of governing strategies, and (c) nongovernmental actors (opposition parties, media, lobby groups, social movements, and citizens) can seek to influence government. A pivotal point in developing an understanding of crisis terminology is that rather than bemoaning the lack of singular meanings for crisis and associated terms, or criticizing actors for “abuse” of the terms, one should recognize and accept that complex and contested crisis language and definitions are in themselves manifestations of politics in political societies.
National broadcasters are a standard feature across Africa. Set up by colonial regimes, they dominate media landscapes with their unrivaled geographic reach. Radio continues to be the main—and often only—source of information outside urban centers, where commercial media struggle to survive and illiteracy remains a challenge. Although access to new media has risen exponentially, use of mobile technology continues to be prohibitively expensive.
Some national broadcasters are official state broadcasters: owned, run, and editorially controlled by government. However, many claim to be public broadcasters. By definition, these are accountable to the public rather than the government of the day: accessible to a universal audience; inclusive of a wide range of views; and fair, balanced, and independent in their journalism. This aspiration is reflected in national and supranational policy such as the African Charter on Broadcasting and the Declaration of Principles on Freedom of Expression in Africa.
In reality, these broadcasters lack de jure independence, the basic condition for them to be considered “public.” They are, in law and in practice, state broadcasters, owing to a range of historical, social, financial, and political determinants, despite attempts by journalists and civil society to change this. Principally, the political will has been lacking—in colonial as well as postcolonial elites—to relinquish control of newsrooms and open up space for dissent.
There is one exception: the South African Broadcasting Corporation was granted de jure independence following apartheid and enjoys unrivaled (though contested) legal guarantees and journalistic freedom. Its ongoing struggles to fully meet its public broadcasting mandate despite this relatively conducive environment demonstrate that de jure independence is a necessary but not sufficient condition for successful broadcasting transformation, and that organizational culture is an important variable to be taken into account.
How do people make political judgments and decisions? Each day, people are faced with a host of political issues. They also possess a limited amount of cognitive resources and must grapple with topics on which there is not necessarily an objectively correct answer. In turn, people rely on accessible information to facilitate their political judgments and decisions. Information is accessible when it is activated in a person’s mind. The information can either be chronically accessible, such as the political issues that are consistently important to a person, or made accessible through the situation, such as the issues that the media choose to cover in a given time and place. Situational information becomes especially accessible when the context activates available information stored in memory or the information is consistent with a person’s motivations and goals, such as media coverage rendering civil rights more accessible for racial minorities. Priming refers to the usage of accessible information when making judgments and decisions, such as deciding whether to sign a petition or how to vote in an election. In recent years, considerable debate has emerged about the generalizability of findings and current conceptual models of accessibility and priming across people and contexts. As research on accessibility and priming progresses, scholars continue to examine these topics in novel areas (e.g., social media) and push toward building nuanced theoretical frameworks that help to explain variability in priming across contexts. Overall, understanding how people use accessible information in political judgments and decisions stands as an important factor in developing a comprehensive picture of political life.
Toril Aalberg and Stephen Cushion
Public service broadcasters are a central part of national news media environments in most advanced democracies. Although their market positions can vary considerably between countries, they are generally seen to enhance democratic culture, pursuing a more serious and harder news agenda than commercial media. But to what extent is this perspective supported by empirical evidence? How far can we generalize that all public service news media equally pursue a harder news agenda than commercial broadcasters? And what impact does public service broadcasting have on public knowledge? Does exposure to public service broadcasting increase citizens’ knowledge of current affairs, or is it only regularly viewed by citizens with an above-average interest in politics and hard news?
The overview of the evidence provided by empirical research suggests that citizens are more likely to be exposed to hard news, and to be more knowledgeable about current affairs, when they watch public service news—or rather news in media systems where public service is well funded and widely watched. The research evidence also suggests there are considerable variations between public broadcasters, just as there are between more market-driven and commercial media. An important limitation of previous research is related to the question of causality. Therefore, a main challenge for future research is to determine not only whether public service broadcasting is the preferred news provider of the most knowledgeable citizens, but also whether it more widely improves and increases citizens’ knowledge about public affairs.
An expansive body of research known as racial priming consistently shows that media and campaign content can make racial attitudes more important factors in Americans’ political evaluations. Despite the well-established racial priming findings, though, there are some lingering questions about this line of research that have not been adequately settled by the extant literature. Perhaps the most frequently debated issue involves the effectiveness of implicit and explicit racial appeals. Can explicit appeals that directly invoke race and/or racial stereotypes, for example, effectively activate racial attitudes in white Americans’ political opinions? Or do racial appeals have to be implicit in nature, making only coded references to race in order to prime racially conservative support for political candidates and public policies? Along with this important topic, there are additional questions raised by the existing racial priming research, which include: Who is most susceptible to racial priming? Are political attacks on other minority groups, such as Muslims and Latinos, as potent as the appeals to anti-black stereotypes and resentments upon which the racial priming research is based? How did Obama’s presidency, which both heightened the salience of race in political discourse and increased the importance of racial attitudes in Americans’ partisan preferences, affect the media’s ability to prime race-based considerations in mass political evaluations?
Radio’s affordability, portability, and use of local languages have long granted it a special status among mass media in Africa. Its development across the continent has followed remarkably similar paths despite clear differences in different countries’ language policies, economic fortunes, and political transformations. Common to many countries has been the virtual monopoly over the airwaves enjoyed by the state or parastate broadcasting corporations during the first decades of independence. The wave of democratization since the late 1980s has brought important changes to the constitutional and economic landscape in radio broadcasting. Although private, religious, and community stations have filled the airwaves in many countries, it is also important to recognize the many subtle ways in which state-controlled radio broadcasting, both before and after independence, could include alternative ideas, particularly in cultural and sports programming. By the same token, radio’s culpability in orchestrating oppression—or even genocide, as in Rwanda’s case—stands to be examined critically. Liberalized airwaves, on the other hand, draw attention to developments that find parallels in radio history elsewhere in the world. They include radio’s capacity to mediate intimacy between radio personalities and their listeners in a way that few other media can. They also become apparent in radio’s uses in encouraging participation and interaction among ordinary citizens through phone-in programs that build on the rapid uptake of mobile telephony across Africa. Such developments call for a notion of politics that makes it possible to observe radio’s influence across the domains of formal politics, religion, and commercial interests.
Real-time response measurement (RTR), sometimes also called continuous response measurement (CRM), is a computerized survey tool that uses electronic input devices to continuously measure short-term perceptions while political audiences are exposed to campaign messages. Combining RTR data with information about the message content allows researchers to trace viewers’ impressions back to single arguments or nonverbal signals of a speaker and, therefore, to show which kinds of arguments or nonverbal signals are most persuasive. In the context of applied political communication research, RTR is used by political consultants to develop persuasive campaign messages and to prepare candidates for televised debates. In addition, TV networks use RTR to identify crucial moments of televised debates and sometimes even display RTR data during their live debate broadcasts.
In academic research, most RTR studies deal with the persuasive effects of televised political ads and especially televised debates, sometimes including hundreds of participants rating candidates’ performances during live debate broadcasts. In order to capture features of human information processing, RTR measurement is combined with other data sources like content analysis, traditional survey questionnaires, qualitative focus group data, or psychophysiological data. Those studies answer various questions on the effects of campaign communication, including which elements of verbal and nonverbal communication explain short-term perceptions of campaign messages, which predispositions influence voters’ short-term perceptions of campaign messages, and the extent to which voters’ opinions are explained by short-term perceptions versus long-term predispositions. In several such studies, RTR measurement has proven to be reliable and valid; it appears to be one of the most promising research tools for future studies on the effects of campaign communication.
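The basic move of combining RTR data with a content analysis can be sketched in a few lines. All ratings, timings, and segment labels below are invented: per-second dial ratings are averaged within coded message segments so that audience impressions can be traced back to particular arguments.

```python
# Toy RTR analysis: align per-second dial ratings (0-100 scale) with
# content-analytic segments and average within each segment.
from statistics import mean

# second of the broadcast -> mean dial rating across the audience
ratings = {0: 48, 1: 52, 2: 60, 3: 71, 4: 69, 5: 40, 6: 35}

# coded segments: (label, start second, end second inclusive)
segments = [("economic argument", 0, 2),
            ("personal attack", 3, 4),
            ("closing appeal", 5, 6)]

for label, start, end in segments:
    avg = mean(ratings[s] for s in range(start, end + 1))
    print(f"{label}: {avg:.1f}")
```

In a real study the ratings would come from hundreds of participants and the segments from a systematic content analysis, but the alignment step is the same.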
Benjamin R. Knoll and Cammie Jo Bolin
Religious communication affects political behavior through two primary channels: political messages coming from a religious source and religious messages coming from a political source. In terms of the first channel, political scientists have found that clergy do tend to get involved in politics, and church-goers often hear political messages from the pulpit, although not as frequently as might be expected. Sometimes these political messages are successful in swaying opinions, but not always; context matters a great deal. In terms of the second channel, politicians use religious rhetoric (“God talk”) in an attempt to increase their support and win votes. When this happens, some groups are more likely to respond than others, including political conservatives, more frequent church attenders, and racial/ethnic minorities. The scope and effectiveness of religious communication remains a field ripe for further research, especially in contexts outside of the United States.
Despite predictions that urbanization, economic development and globalization would lead to the recession of religion from public life, populations around the world continue to be highly religious. This pattern holds in most parts of the Global South and also in some advanced industrial democracies in the North, including in the United States. In grappling with the influence (or lack thereof) of religion on political life, a growing body of literature pays attention to how clergy–congregant communication might shape listeners’ political attitudes and behaviors. Considerable debate remains as to whether clergy–congregant communication is likely to change political attitudes and behavior, but there is a greater consensus around the idea that exposure to religious communication can at the very least prime (that is, increase the salience of) certain considerations that in turn affect how people evaluate political issues and whether they participate in politics. Religious communication is more likely to exert a persuasive and a priming influence among those already inclined to select into the communication and when the source of the communication is credible. More research is needed on the duration of religious primes and on the effects of religious communication in different political and social contexts around the world.
The representativeness heuristic was defined by Kahneman and Tversky as a decision-making shortcut in which people judge probabilities “by the degree to which A is representative of B, that is, by the degree to which A resembles B.” People who use this cognitive shortcut bypass more detailed processing of the likelihood of the event in question and instead focus on what (stereotypic) category it appears to fit and the associations they have about that category. Simply put: If it looks like a duck, it probably is a duck. The representativeness heuristic usually works well and provides valid inferences about likelihood. This is why political scientists saw it as an important part of a solution to an enduring problem in their field: How can people make political decisions when so many studies show they lack even basic knowledge about politics? According to these scholars, voters do not need to be aware of all actions and opinions of a political candidate running for office. To make up their minds on whom to vote for, they can rely on cues that represent the performance and issue position of candidates, such as the party they are affiliated with, their ranking in the polls, and whether (for instance) they act/appear presidential. In other words, they need to answer the question: Does this candidate fit my image of a successful president? The resulting low-information rationality provides voters with much confidence in their voting decision, even though they do not know all the details about the history of each candidate. Using heuristics allows relatively uninformed citizens to act as if they were fully informed.
Despite this optimistic view of heuristics at their introduction to the discipline, they originated from research showing how heuristic use is accompanied by systematic error. Tversky and Kahneman argue that using the representativeness heuristic leads to an overreliance on similarity to a category and a neglect of prior probability, sample size, and the reliability and validity of the available cue. Kuklinski and Quirk first warned about the potential effect of these biases in the context of political decision-making. Current research often examines the effects of specific cues/stereotypes, like party, gender, race, class, or more context-specific heuristics like the deservingness heuristic. Another strand of research has started exploring the effect of the representativeness heuristic on decision-making by political elites, rather than voters. Future studies can integrate these findings to work toward a fuller understanding of the effects of the representativeness heuristic in political decision-making, more closely consider individual differences and the effects of different contexts, and map the consequences that related systematic biases might have.
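The neglect of prior probability that Tversky and Kahneman identified can be made concrete with a toy Bayesian calculation. All probabilities below are invented for illustration: judging only by similarity ignores the base rate, which a Bayesian update weighs in.

```python
# Toy illustration of base-rate neglect under the representativeness
# heuristic: similarity alone vs. a Bayesian posterior.

def bayesian_posterior(prior, hit_rate, false_alarm_rate):
    """P(category | cue) via Bayes' rule."""
    numerator = prior * hit_rate
    return numerator / (numerator + (1 - prior) * false_alarm_rate)

# Suppose only 3% of candidates turn out to be successful presidents
# (the base rate), 90% of successful presidents "look presidential",
# but so do 50% of unsuccessful ones.
similarity_judgment = 0.90  # heuristic: similarity stands in for probability
posterior = bayesian_posterior(prior=0.03, hit_rate=0.90, false_alarm_rate=0.50)

print(f"heuristic estimate: {similarity_judgment:.2f}")
print(f"Bayesian estimate:  {posterior:.2f}")  # ~0.05: the base rate dominates
```

The gap between the two estimates is the systematic error: the cue is genuinely informative, but without the prior it wildly overstates the probability.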
The idea of satisficing as a decision rule began with Herbert Simon. Simon was dissatisfied, on two grounds, with the increasingly dominant notion of individuals as rational decision-makers who choose alternatives that maximize expected utility. First, he viewed the maximizing account of decision-making as unrealistic, given that individuals have cognitive limitations and varying motivations that constrain cognitive ability and effort. Second, he argued that individuals do not even choose alternatives as if they are maximizing (i.e., that the maximizing account lacks predictive validity). Instead, he offered a theory of individuals as satisficers: decision-makers who consider a limited number of alternatives, expending limited cognitive effort, until they find one that is “good enough.” At this point, he argued, the consideration of alternatives stops.
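Simon's stopping rule can be sketched in a few lines. The alternatives, utilities, and aspiration level below are invented for illustration: the satisficer accepts the first option that clears the aspiration level, while the maximizer evaluates every option.

```python
# Toy contrast between Simon's satisficing rule and utility maximization.

def satisfice(alternatives, aspiration_level):
    """Return the first alternative whose utility is 'good enough'."""
    for name, utility in alternatives:
        if utility >= aspiration_level:
            return name  # search stops here
    return None          # nothing met the aspiration level

def maximize(alternatives):
    """Evaluate every alternative and return the best one."""
    return max(alternatives, key=lambda a: a[1])[0]

options = [("policy A", 0.6), ("policy B", 0.8), ("policy C", 0.9)]

print(satisfice(options, aspiration_level=0.7))  # policy B: good enough
print(maximize(options))                         # policy C: best overall
```

The satisficer here never even inspects policy C; that truncated search is exactly what makes the rule cheap in cognitive effort, and what can make its choice diverge from the maximizer's.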
The satisficing decision rule has influenced several subfields of political science. They include elite decision-making on military conflicts, the economy, and public policy; ideas of what the mass public needs to know about politics and the extent to which deficits in political knowledge are consequential; and understanding of survey responses and survey design. Political and social psychologists have also taken Simon’s idea and argued that satisficing rather than maximizing is a personality trait: a stable characteristic of individuals that makes them predisposed toward one or the other type of alternative search when making decisions. Research in these subfields additionally raises normative questions about the extent to which satisficing is not only a common way of making decisions but a desirable one. Satisficing seems superior to maximizing in several respects. For example, it has positive effects on aspects of decision-makers’ well-being and is more likely to result in individuals voting their interests in elections.
There are, however, a number of directions in which future research on satisficing could be taken forward. These include a fuller incorporation of the interaction of affect and cognition, clearer tests of alternative explanations to satisficing, and a greater focus on, and understanding of, the effects of the Internet and the “information age.”
Thomas J. Leeper
Empirical media effects research involves associating two things: measures of media content or experience and measures of audience outcomes. Any quantitative evidence of correlation between media supply and audience response—combined with assumptions about temporal ordering and an absence of spuriousness—is taken as evidence of media effects. This seemingly straightforward exercise is burdened by three challenges: the measurement of the outcomes, the measurement of the media and individuals’ exposure to it, and the tools and techniques for associating the two.
While measuring the outcomes potentially affected by media is in many ways trivial (surveys, election outcomes, and online behavior provide numerous measurement devices), the other two aspects of studying the effects of media present nearly insurmountable difficulties short of ambitious experimentation. Rather than find solutions to these challenges, much of the collective body of media effects research has focused on the effort to develop and apply survey-based measures of individual media exposure to use as the empirical basis for studying media effects. This effort to use survey-based media exposure measures to generate causal insight has ultimately distracted from the design of both causally credible methods and thicker descriptive research on the content and experience of media. Outside the laboratory, we understand media effects too little despite this considerable effort to measure exposure through survey questionnaires.
The canonical approach for assessing such effects (using survey questions about individual media experiences to measure the putatively causal variable and correlating those measures with other measured outcomes) suffers from substantial limitations. Experimental, and sometimes quasi-experimental, methods provide decidedly superior causal inference about media effects and a uniquely fruitful path forward for insight into media and their effects. At the same time, thicker forms of description than what is available from closed-ended survey questions hold promise for a richer understanding of the changing media landscape and changing audience experiences. Better causal inference and better description are co-equal paths forward in the search for real-world media effects.
Diana Panke and Julia Gurol
Smaller European Union member states face size-related challenges in the EU multilevel system, such as weighted voting in the day-to-day policymaking in which EU secondary law is produced, or high workloads and fewer resources during the intergovernmental conferences (IGCs) that set EU primary law. Coping with these challenges is paramount to smaller states’ success. To this end, they can use different strategies, most notably selective engagement and negotiation strategies that do not require much material power, such as persuasion, framing, and coalition-building, as well as the Council Presidency as a window of opportunity to influence the agenda. Applying these strategies allows small states to punch above their weight. Yet doing so is easier the longer a state has been a member of the EU. Smaller states with longer membership have more extensive networks, more insight into past policies, and in-depth knowledge of best practices that help them effectively navigate day-to-day EU negotiations as well as IGCs.
Christina Ladam, Ian Shapiro, and Anand Sokhey
As the most common form of voluntary association in America, houses of worship remain an unquestionably critical component of American civil society. Major approaches to studying religion and politics in the United States are described, and the authors present an argument for focusing more attention on the organizational experience provided by religious contexts: studying how individuals’ social networks intersect with their associational involvements (i.e., studying religion from an “interpersonal” perspective) may shed new light on intrapersonal, psychological constructs like identity and religiosity.
Evidence is presented from two nationally representative data sets that suggests considerable variance in the degree to which individuals’ core social networks overlap with their houses of worship. This variance exists within and between individuals identifying with major religious traditions, and such networks are not characterized solely by agreement (as theories of self-selection might suggest).
Wouter van Atteveldt, Kasper Welbers, and Mariken van der Velden
Analyzing political text can answer many pressing questions in political science, from understanding political ideology to mapping the effects of censorship in authoritarian states. This makes the study of political text and speech an important part of the political science methodological toolbox. The confluence of increasing availability of large digital text collections, plentiful computational power, and methodological innovations has led to many researchers adopting techniques of automatic text analysis for coding and analyzing textual data. In what is sometimes termed the “text as data” approach, texts are converted to a numerical representation, and various techniques such as dictionary analysis, automatic scaling, topic modeling, and machine learning are used to find patterns in and test hypotheses on these data.
These methods all make certain assumptions and need to be validated to assess their fitness for any particular task and domain.
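One of the simplest "text as data" techniques named above, dictionary analysis, can be sketched as follows. The dictionary and the example speech are invented for illustration: each document is reduced to token counts and scored by the share of tokens that match a hand-built dictionary.

```python
# Toy dictionary analysis: score a text's emphasis on a topic as the
# share of its tokens that appear in a predefined dictionary.
from collections import Counter

ECONOMY_DICT = {"tax", "budget", "inflation", "jobs", "trade"}

def tokenize(text):
    """Very crude tokenizer: lowercase, strip commas/periods, split."""
    return text.lower().replace(",", " ").replace(".", " ").split()

def dictionary_score(text, dictionary):
    """Share of tokens matching the dictionary (0.0 if text is empty)."""
    tokens = tokenize(text)
    counts = Counter(tokens)
    hits = sum(counts[w] for w in dictionary)
    return hits / len(tokens) if tokens else 0.0

speech = "We will cut the tax burden, balance the budget, and create jobs."
print(f"economic emphasis: {dictionary_score(speech, ECONOMY_DICT):.2f}")
```

This also makes the validation point concrete: the score is only as good as the dictionary and the tokenizer, both of which encode assumptions that must be checked against the particular task and domain.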
What is “threat framing”? It concerns how something or someone is perceived, labeled, and communicated as a threat to something or someone. The designation “threat,” notably, belongs to the wider family of negative concerns such as danger, risk, or hazard. Research on threat framing is not anchored in a single or specific field but rather is scattered across three separate and largely disconnected bodies of literature: framing theory, security studies, and crisis studies. It is noteworthy that whereas these literatures have contributed observations on how and with what consequences something is framed as a threat, none of them have sufficiently problematized the concept of threat. Crisis analysis considers the existence or perception of threat essential for a crisis to emerge, along with a perception of urgency and uncertainty, yet crisis studies focus on the meaning of “crisis” without problematizing the concept of threat. Likewise, security studies have spent a lot of ink defining “security,” typically understood as the “absence of threat,” but leave the notion of “threat” undefined. Further, framing theory is concerned with “problem definition” as a main or first function of framing but generally pays little or no attention to the meaning of “threat.” Moreover, cutting across these bodies of literature is the distinction between constructivist and rationalist approaches, both of which have contributed to the understanding of threat framing. Constructivist analyses have emphasized how threat framing can be embedded in a process of socialization and acculturation, making some frames appear normal and others highly contested. Rationalist approaches, on the other hand, have shown how threat framing can be a conscious strategic choice, intended to accomplish certain political effects such as the legitimization of extraordinary means, allocation of resources, or putting issues high on the political agenda.
Although there are only a handful of studies explicitly combining insights across these fields, they have made some noteworthy observations. These studies have shown for example how different types of framing may fuel amity or enmity, cooperation, or conflict. These studies have also found that antagonistic threat frames are more likely to result in a securitizing or militarizing logic than do structural threat frames. Institutionalized threat frames are more likely to gain and maintain saliency, particularly if they are associated with policy monopolies. In the post-truth era, however, the link between evidence and saliency of frames is weakened, leaving room for a much more unpredictable politics of framing.
Daniel C. Hallin
Typologies are a central tool of comparative analysis in the social sciences. Typologies identify common patterns in the relationships among elements of media systems and wider social systems, and serve to generate research questions about why particular patterns occur in particular systems, why particular cases may deviate from common patterns, and what the consequences of these patterns may be. They are important for specifying the context within which particular processes operate, and therefore for identifying possible system-level causes, specifying the scope of applicability of theories, and assessing the validity of measurements across systems. Typologies of media systems date to the publication of Four Theories of the Press, which proposed a typology of authoritarian, libertarian, social responsibility, and Soviet Communist media systems. Hallin and Mancini’s typology of media systems in Western Europe and North America has influenced most recent work in comparative analysis of media systems. Hallin and Mancini proposed three models differentiated on the basis of four clusters of variables: the development of media markets; the degree and forms of political parallelism; journalistic professionalism; and the role of the state. Much recent research has been devoted to operationalizing these dimensions of comparison, and a number of revisions of Hallin and Mancini’s model and proposals for alternative approaches have been proposed. Researchers have also begun efforts to develop typologies including media systems outside of Western Europe and North America.
Kevin Arceneaux and Martin Johnson
Students of public opinion tend to focus on how exposure to political media, such as news coverage and political advertisements, influences the political choices that people make. However, the expansion of news and entertainment choices on television and via the Internet makes the decisions that people make about what to consume from various media outlets a political choice in its own right. While the current hyperchoice media landscape opens new avenues of research, it also complicates how we should approach, conduct, and interpret this research. More choices mean greater ability to choose media content based on one’s political preferences, exacerbating the severity of selection bias and endogeneity inherent in observational studies. Traditional randomized experiments offer compelling ways to obviate these challenges to making valid causal inferences, but at the cost of minimizing the role that agency plays in how people make media choices.
Recent research modifies the traditional experimental design for studying media effects in ways that incorporate agency over media content. These modifications require researchers to consider different trade-offs when choosing among design features, creating both advantages and disadvantages. Nonetheless, this emerging line of research offers a fresh perspective on how people’s media choices shape their reactions to media content and political decisions.
Yotam Shmargad and Samara Klar
The field of political science is experiencing a new proliferation of experimental work, thanks to a growth in online experiments. Administering traditional experimental methods over the Internet allows for larger and more accessible samples, quick response times, and new methods for treating subjects and measuring outcomes. As we show in this chapter, a rapidly growing proportion of published experiments in political science take advantage of an array of sophisticated online tools. Indeed, during a relatively short period of time, political scientists have already made huge gains in the sophistication of what can be done with just a simple online survey experiment, particularly in realms of inquiry that have traditionally been logistically difficult to study. One such area is the important topic of social interaction. Whereas experimentalists once relied on resource- and labor-intensive face-to-face designs for manipulating social settings, creative online efforts and accessible platforms are making it increasingly easy for political scientists to study the influence of social settings and social interactions on political decision-making. In this chapter, we review the advent of online tools for carrying out experiments and we turn our focus toward cost-effective and user-friendly strategies that online experiments offer to scholars who wish to understand political decision-making not only in isolated settings but also in the company of others. We review existing work and provide guidance on how scholars with even limited resources and technical skills can exploit online settings to better understand how social factors change the way individuals think about politicians, politics, and policies.