Experimental Research in African Politics

  • George Kwaku Ofosu, Department of Political Science, Washington University in St Louis

Summary

Political scientists are increasingly using experiments to study African politics. Experimental methods help scholars to overcome two central research challenges: potential bias in responses to survey questions (social desirability bias), and establishing the effect of X on Y (causality). Regarding survey response bias, experimental methods have been used to study sensitive topics such as ethnic favoritism, clientelism, corruption, and vote buying. In terms of causality, experiments have helped to estimate the effects of programs aimed at enhancing the quality of democracy or public service delivery. Identifying the causes of political behavior is critical to understanding the “nuts and bolts” of African politics. For policymakers, knowledge of what works to promote democratic accountability ensures the efficient allocation of scarce resources.

Introduction

Political scientists are increasingly using randomized experiments to study African politics. Since 2003, scholars have adopted experimental methods to overcome two main research challenges. First, the methods are used to mitigate concerns related to social desirability bias in the measurement of the prevalence and effects of sensitive topics like ethnic bias, clientelism, vote buying, corruption, election fraud, and violence. Second, experimental methods are used to overcome concerns about selection bias in estimating the effects of policies believed to enhance the quality of democracy in Africa’s multiparty regimes. Experiments have helped scholars to determine which policies are effective in promoting democratic quality—as well as when and why.

A randomized experiment is a study in which researchers randomly assign values (e.g., receive or do not receive) of an independent variable (e.g., antiviolence campaign message) among a set of subjects (voters) to assess the variable’s impact on some outcome(s) of interest (turnout or voter rejection of violent candidates; Cox & Reid, 2000). In contrast to observational studies, the individuals or units under study do not decide whether or not they take a particular value of the independent variable, which rules out selection bias.1 Randomization ensures that subjects assigned to control and treatment conditions can be expected to be identical in almost all relevant observed and unobserved attributes. Randomization also allows researchers to hold other factors that may influence the treatment and outcome variables constant, in order to isolate the independent effect of the causal factor of interest (Humphreys & Weinstein, 2009). A comparison of means between treatment and control groups in relevant outcomes indicates the causal effect of the randomized values of the putative causal factor.2
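In standard potential-outcomes notation (a common formalization, not taken from this article), the comparison of means described here is the difference-in-means estimator of the average treatment effect:

\[
\widehat{ATE} \;=\; \frac{1}{N_T}\sum_{i \in T} Y_i \;-\; \frac{1}{N_C}\sum_{i \in C} Y_i ,
\]

where \(T\) and \(C\) are the randomly assigned treatment and control groups, \(N_T\) and \(N_C\) are their sizes, and \(Y_i\) is the observed outcome for subject \(i\). Random assignment is what licenses interpreting this difference as an unbiased estimate of the causal effect.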

Four Types of Experimental Research

Scholars have used four main types of experiments to study African politics: survey, lab-in-the-field, natural, and field experiments (Figure 1). Social scientists often rely on surveys to examine topics of interest. Survey experiments are designed to help researchers to elicit honest responses while protecting respondents’ sensitive answers. A common assumption in the analysis of surveys is that respondents answer the questions truthfully (Blair, 2015).3 However, citizens may not willingly admit that they took a bribe for their vote or that they engaged in violence because such acts may not be socially acceptable or may incur legal penalties. Similarly, public officials may not admit that they discriminate along ethnic or party lines in hiring or in responding to citizens’ demands. Therefore, directly asking such questions may lead to incorrect inferences about the pervasiveness and impacts of such attitudes in the population.

Figure 1. Distribution of types of experiments in reviewed studies.

Survey experiments employ three primary forms: randomized response, list, and endorsement experiments.4 To illustrate each of these modes, imagine researchers want to estimate the proportion of voters who took a bribe for their vote. A randomized response experiment would ask the same question (“In the recent election, you voted for a party or candidate because they gave you a handout or bribe”) of all respondents. However, using a randomization device—such as a six-sided die—some respondents will be randomly assigned to answer Yes or No automatically, irrespective of their actual answer, while others are asked to answer truthfully.5 In this case, respondents’ correct answers are concealed from the researchers because they may be responding truthfully or they may be merely saying Yes or No according to the number they rolled, a strategy that encourages honest responses when required. A simple probability estimation model is used to estimate the share of individuals in the sample who took a bribe, taking into account the “noise” introduced by the proportion of false Yes and No answers.
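As an illustration of that estimation step, the following is a minimal sketch of the forced-response correction; the function name and the one-in-six forcing probabilities are assumptions for illustration, not details from any particular study.

```python
def randomized_response_estimate(observed_yes_share,
                                 p_forced_yes=1 / 6,
                                 p_forced_no=1 / 6):
    """Recover the true share of 'Yes' answers from a forced-response design.

    With a six-sided die, one face forces a 'Yes', one face forces a 'No',
    and the remaining faces ask for a truthful answer, so
        observed_yes_share = p_forced_yes + (1 - p_forced_yes - p_forced_no) * pi,
    where pi is the share of respondents for whom the sensitive item is true.
    """
    p_truthful = 1 - p_forced_yes - p_forced_no
    return (observed_yes_share - p_forced_yes) / p_truthful


# Hypothetical example: if 30% of all answers are 'Yes', the implied share of
# vote sellers is (0.30 - 1/6) / (2/3) = 0.20, i.e., 20%.
print(randomized_response_estimate(0.30))
```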

In a list experiment, a researcher will generate a list of innocuous items, say four, which scholars believe do or do not influence vote choice. For a random set of respondents, the list will also include “selling their vote,” so that the respondents see five items (the treatment group), while for another group of respondents, the list will contain only the four innocuous elements (the control group). Respondents’ truthful answers to the sensitive question are protected by asking them to report only the number of items on the list that they agree influence their vote choice. The difference between the average number of items agreed to by those in the treatment group and the average number agreed to by those in the control group represents the share of individuals in the sample who agree to “selling their vote.”
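A minimal sketch of this difference-in-means calculation is shown below, with hypothetical data; the variable names and numbers are illustrative assumptions, not figures from any cited study.

```python
import numpy as np

def list_experiment_estimate(treatment_counts, control_counts):
    """Estimate the share of respondents who agree with the sensitive item.

    treatment_counts: item counts reported by respondents shown the list
        including the sensitive item (five items in the example above).
    control_counts: item counts reported by respondents shown only the
        innocuous items (four items).
    """
    return np.mean(treatment_counts) - np.mean(control_counts)


# Hypothetical data: the 0.25 difference implies that an estimated 25% of
# respondents agree that "selling their vote" influenced their vote choice.
treated = np.array([3, 2, 3, 3, 2, 3, 3, 2])
control = np.array([2, 2, 3, 3, 2, 2, 3, 2])
print(list_experiment_estimate(treated, control))
```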

Finally, in an endorsement survey experiment, respondents are randomly exposed to one of two otherwise similar vignettes. The treatment vignette may have a candidate endorsing or using vote buying in a campaign, while in the control vignette there will be no mention of such endorsement or use of bribes. Again, the difference in the share of respondents who choose to support the candidate in the treatment vignette versus those who support the candidate in the control vignette indicates the effect of vote buying on voters’ choice.

Lab experiments allow researchers to mitigate the potential influence of external factors on individual responses to the treatment. Lab experiments involve bringing respondents to a laboratory, exposing them to the various treatments, and measuring outcomes in the controlled environment. While lab experiments are often conducted in universities with students, lab-in-the-field experiments involve researchers’ moving their laboratories to the location inhabited by the subjects of interest and selecting participants from that place. The advantage of lab-in-the-field studies is that the participants are likely to be more representative of the population of interest than a nonrandom sample of students would be.

While survey and lab-in-the-field experiments aid measurement, the scenarios presented to the respondents usually are hypothetical. It is typically not clear whether the inferences drawn from such experiments reflect actual behavior. Natural experiments and field experiments have the advantage of allowing researchers to observe individuals’ real-world response to interventions.

In natural experiments, “nature” determines which subjects are exposed to the treatment. Researchers can then leverage a naturally occurring “trial” to estimate the effect of an intervention. Nature may include geography (landscape or borders) or government policies that were implemented in such a manner that access to the program of interest was as-if random (Dunning, 2012). One of the major disadvantages of a natural experiment is that it is sometimes not certain whether government policies were truly randomly deployed, which generates doubts about whether researchers have identified the causal impacts.

In contrast to a natural experiment, a field experiment involves the direct manipulation of an intervention by researchers. For example, using a randomization device or computer software, scholars determine whether an individual or a community receives flyers containing information about voting processes in an upcoming election, campaign promises of candidates, or the performance of an incumbent. Thus, in contrast to observational studies, in which scholars have to make guesses about, and try to account for, how some individuals or communities received a program or intervention in order to estimate causal effects after the fact, in field experiments researchers know precisely how the individual received treatment. Randomization rules out selection bias and enables the estimation of the effects of a policy with minimal assumptions.
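The core of such an assignment procedure can be as simple as the sketch below. This is a hypothetical illustration only; actual studies typically block or cluster the randomization and document it in advance.

```python
import random

def assign_treatment(units, treatment_share=0.5, seed=2024):
    """Randomly assign units (e.g., voters or communities) to receive an
    intervention (e.g., an informational flyer) or to serve as controls."""
    rng = random.Random(seed)  # fixed seed keeps the assignment reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    n_treated = int(len(shuffled) * treatment_share)
    treated = set(shuffled[:n_treated])
    return {unit: ("treatment" if unit in treated else "control")
            for unit in units}


communities = [f"community_{i}" for i in range(1, 11)]
print(assign_treatment(communities))
```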

Ethical Concerns With Field Experiments

While attractive because they generate “neat” causal estimates of policies and programs in the populations of interest, field experiments raise at least three major ethical concerns. The concerns relate to the ethical principles laid out in the Belmont Report to guide clinical trials in medicine: respect for persons, beneficence, and justice (Teele, 2014). First, the nature of many field experiments in social science makes it extremely difficult to ask the individuals involved (e.g., voters, elected officials, criminals, and civil servants) for their consent to participate, but the lack of consent violates respect for persons. For example, seeking the consent of every voter in a district or region before rolling out television ads to encourage turnout, or seeking the approval of potentially corrupt public officials or political parties before implementing a transparency initiative, may be impractical and, even if possible, can generate erroneous causal estimates. Indeed, the consent requirement becomes even more difficult to satisfy when scholars need to employ deception—such as sending out (fake) constituents’ requests to legislators—to measure ethnic or partisan bias (McClendon, 2012). It is not clear whether social scientists should be exempt from requiring individual consent in pursuit of the “common good” or whether they should be guided by alternative forms of participants’ consent (Humphreys, 2015). As the use of field experiments becomes more popular, it is imperative for the scholarly community to reach a consensus on the consent requirement.

Second, many well-intentioned Africanists conduct experiments aimed at “improving” the political, economic, and social processes and well-being of citizens in African countries, and they strive not to cause harm (beneficence). However, it is possible that some research may lead to unintended negative consequences in the population unbeknownst to researchers, who are often narrowly focused on the immediate impacts of interventions (Teele, 2014). For example, investigating whether legislators respond to constituents’ requests for personal assistance (casework) or providing information to citizens about the performance of officeholders relating to community development projects may divert incumbents’ attention from equally significant national policies. It is also possible, for example, that efforts at empowering women in the economic realm through micro-credit loans expose them to domestic violence or abuse by men who feel threatened by the women’s increased social standing (Rahman, 1999). The unequal distribution of resources, such as educational materials or public infrastructure, within and across communities may generate animosity. Therefore, it is crucial for scholars to understand and to consider the potential short- and long-term effects of the initiative they intend to implement in their study communities through thorough consultation with local experts and qualitative assessments before rolling out an experiment.

Third, experimental research on the African continent is often carried out by academics who are based in affluent Western universities and who engage with governments, local communities, local organizations, and individuals, which generates concerns about asymmetric power relations and whether research has any benefits for the African subjects who bear the costs. Indeed, governments or communities may agree to collaborate on research projects that serve only the researchers’ curiosity or career goals, with no apparent benefits for the people (in violation of the principle of justice). Researchers need to ask themselves whether their experiments confer any benefit on the citizens of the countries they study.

In spite of the ethical concerns related to field experiments, assessment of whether a research design meets established standards is usually carried out by institutions outside the study country (typically at the researcher’s university). Only a few countries on the continent (such as Kenya, Malawi, and Uganda) have established institutional review boards (IRBs) to assess whether field experiments carried out in their communities satisfy ethical standards. To be sure, many Africanists who conduct field experiments are cautious about ethical standards and strive to meet them. Researchers should continue to ask the difficult questions related to respect for persons, beneficence, and justice in their research. Importantly, the research community in the West also needs to engage with state actors and scholarly communities on the African continent to create institutions that help to establish and to enforce national ethical standards.

A Review of Experimental Research on African Politics

To review the literature on experimental research in Africa, published work on African politics that mentioned key terms related to experimental methods was identified using standard academic search engines.6 (While the review focused on published articles, a few notable working papers that have been widely cited in the field were also considered.) The search yielded 50 relevant studies, which were then classified into thematic areas based on the key variable of interest.7 Table 1 presents the six broad categories identified in the studies.8

Table 1. Key Themes in Experimental Research in African Politics

Topic | Cases (%)
Ethnicity | 8 (16)
Election fraud and violence | 7 (14)
Clientelism and vote buying | 6 (12)
Information: voting and political responsiveness | 5 (10)
Others | 24 (48)
Total | 50

Source: Author.

Since there are few studies in each of the categories, and since within each subject area scholars examined a variety of topics using different explanatory and outcome variables, it is difficult to conduct meta-analyses of the findings for any of the topics. Instead, the aim of the review is to broadly describe the types of topics Africanists have explored using experimental methods and what has been learned, in order to highlight the potential for such approaches to deepen scholarly understanding of the microfoundations of African politics.

Much of the important experimental work in African politics has involved collaboration between scholars and local civil society organizations. In rare cases, government agencies have worked with researchers to include some form of randomization in the roll-out of significant national programs, such as civic or voter education and the design of national institutions. More than a quarter (27%) of the studies reviewed were conducted in Uganda (see Figure 2), while Kenya, Ghana, Benin, and Nigeria have been the site of about a third of the experimental work. The concentration of experimental research in these few countries—due to costs, political openness, and academics’ interest—generates concern about the generalizability of the experimental findings.9 Nonetheless, carefully designed experiments increase our confidence in what we believe we know, and replication in different settings can help assess the applicability of previous findings to other settings and contexts.10

Figure 2. Distribution of countries in reviewed studies (N = 48).

Source: Author's own data.

Five of the key thematic areas mentioned in Table 1 encompass the majority (52%) of experimental work on African politics: (1) political participation, (2) clientelism and vote buying, (3) election fraud and violence, (4) information (transparency initiatives), and (5) ethnicity. The next sections of this article discuss some of the key insights from the seminal experimental research on the topics and highlight the gaps that deserve attention. Each section on the various topics can be read in isolation; therefore, readers can sample the subject of their interest. The article concludes with a summary of findings and suggests future directions for experimental research on African politics.

Key Experimental Results in African Politics

Political Participation: Voting and Civic Engagement

For individuals, the costs of participating in elections or contacting officials to raise community concerns may outweigh the benefits of doing so (Downs, 1957; Niemi, 1976). While voter turnout remains relatively high in many African countries (about 65%), other forms of political participation between elections—such as raising community or public concerns with officeholders—remain very low.11 For example, according to Afrobarometer data, an overwhelming majority of citizens have never contacted their local councilor (77%), member of parliament (MP; 88%), or a government official (86%) to raise a concern.12

For Africanists, understanding what motivates high voter turnout and encourages other forms of civic participation is pertinent to promoting democratic responsiveness and consolidation. However, regarding turnout, for example, observational studies struggle to assess the causal effects of popular efforts, such as civic and voter education, door-to-door canvassing, and rallies, which policymakers, civil society groups, and political parties believe bring citizens to the polls. Indeed, these efforts may simply be targeted at places where implementers believe they are needed the most (for historical reasons) or easy to roll out (due to logistical concerns), making it difficult to isolate causal effects. Yet the small number of experiments that have been carried out in the area of political participation provide guidance on how randomization methods can be productively used to generate insights into what drives turnout and civic engagement in new democracies.

For example, in a unique study, Aker, Collier, and Vicente (2017) used a field experiment to show that providing important facts about elections to citizens through civic education can increase their knowledge of election processes and encourage turnout. In the run-up to Mozambique’s October 2009 presidential election, Aker and colleagues leveraged the Electoral Commission (EC)’s civic and voter education efforts to test the impact of providing neutral election information on voter participation.13 The researchers divided study subjects into three treatment groups and provided information to local citizens that complemented the EC’s efforts. The first treatment group received civic education, which involved distributing EC leaflets to targeted individuals. The researchers subsequently used SMS to send five text messages containing specific (factual) data about the elections, including dates, candidates’ names, type of election, and the need to vote (as well as information on how to vote). The second treatment, SMS hotline, informed citizens about a hotline they could call to report incidents of electoral misconduct in their area. The third treatment involved disseminating information from the first two groups through the circulation of free newspapers to selected citizens in randomly chosen communities. In partnership with the free and independent @Verdade newspaper, the researchers provided a target set of citizens with factual electoral information and national hotline numbers (to report incidents) on a weekly basis.

Comparing the average level of knowledge and the rates of turnout between the treatment and the control groups, Aker, Collier, and Vicente (2017) found that all three interventions equally increased citizens’ information about the electoral process, which in turn boosted turnout by 5% in the presidential elections. The researchers also found that the newspaper treatment, which combined the civic education and hotline interventions, decreased the rate of invalid ballots by 1% at polling stations, which represents a substantial 19% drop from 3.6% in the control polling places. They also found that citizens in the treatment group were more likely to communicate their policy priorities (via text messages) to the newly elected president compared to those in the control group, which is a step in promoting political representation.14 The composite (complex) nature of the authors’ interventions makes it hard to identify the mechanism linking each treatment to turnout. However, the findings suggest that giving citizens factual data on elections can promote voter turnout and encourage constituents to raise policy concerns with officeholders. Moreover, the results indicate that, in addition to traditional forms of voter education (i.e., leafleting and posters), civic education can use mobile technology systems to intensify its effects.

Moehler and Conroy-Krutz (2016) conducted a field experiment before Ghana’s 2012 general elections to test the effect of media (radio) content on partisans’ interest in electoral campaigns and civic participation. The authors randomly exposed riders in commuter vans (known locally as tro-tros) to the content of political talk shows from either a pro-government, a pro-opposition, or a neutral radio station. The control group heard no radio broadcast. Subjects exposed to any of the three types of radio programs became more interested in politics, as measured using an index that combines partisans’ answers to questions about whether they thought politics was essential and whether they were excited about the electoral campaign. However, it was not clear from the study whether the radio exposure subsequently encouraged voter turnout. The authors also found that citizens who listened to the pro-party or neutral radio programs were ambivalent about signing a petition to the major parties to complain about conditions surrounding commuter buses; the authors used subjects’ willingness to sign as a measure of civic participation.

However, because Ghanaians (and indeed many Africans) rarely use petitions as a means of political participation, it is not clear how the authors’ findings generalize to other standard modes of political participation, such as voting or contacting politicians between elections. Furthermore, Moehler and Conroy-Krutz could not specify which aspect of the media programming (e.g., personal attacks on opponents or policy discussions) drove the effects. For example, citizens exposed to the pro-opposition station were less likely to sign a petition to the major parties (which were not specified in the experiment) about the condition of commuter buses. Yet the authors did not conclude whether this was because individuals felt the opposition party would not listen or because citizens thought they would be unable to effect change through such actions.

Grossman, Michelitch, and Santamaria (2017) conducted a field experiment that leveraged a mobile phone-based platform established in Uganda to encourage contact between citizens and officeholders; the aim was to show that citizens are more likely to report service deficiencies to their political representatives when they are made to feel more internally or externally efficacious. Specifically, the authors found that when citizens were directly mentioned by their names and were told that their representative (specified by the MP’s name) was willing to listen to their concerns, they were more likely to report service delivery failures. Importantly, the effects were greater among females and among those from a different party than their MP, which indicates partisan cohesion. The findings suggest that a sense of internal and external efficacy may be necessary to encourage political participation between elections.

Constraints on Electoral Accountability

For many academics, policymakers, and observers of electoral processes in Africa, the major challenges to electoral accountability include clientelism, vote buying, election fraud, violence, and a lack of credible information about the quality of politicians. To address these challenges, democracy promoters and civil society organizations have launched many programs designed to help mitigate the challenges to electoral accountability. For political scientists, such efforts have offered unprecedented opportunities to conduct randomized experiments to learn about crucial questions relating to the political behavior of voters and politicians. The questions have included: Is clientelism or vote buying an effective electoral strategy? Does access to information about politicians’ quality and performance encourage issue-based voting? What strategies can help reduce election fraud and violence? Do strategies that reduce election fraud and violence incentivize political responsiveness? The next subsections review some of the answers provided in existing experimental work.

Clientelism and Vote Buying

In a clientelistic political system, voters support politicians in exchange for local or particularistic benefits to themselves or their community in lieu of broad-based national policies (Hicken, 2011). Vote buying involves the exchange of cash (or other material benefits) for votes during elections (Schaffer, 2007). Scholars and policymakers believe that both strategies are common in African politics and that they promote corruption, lead to the inefficient allocation of public resources, and undermine electoral accountability (Stokes, Dunning, Nazareno, & Brusco, 2013).

However, it is hard to estimate how prevalent and effective clientelism and vote buying are in election campaigns, because citizens may see both as morally objectionable and thus be unlikely to report selling their vote, and politicians may simply offer these benefits to people or communities they believe would vote for them (Nathan, 2016; Posner, 2005). Nonetheless, Africanists have used innovative randomization techniques in survey and field experiments to shed light on how (and why) clientelism and vote buying operate in African democracies.

In a pioneering and unique use of field experiments in the study of African politics, Wantchekon (2003) collaborated with real candidates in Benin’s 2001 presidential election to examine the effect of clientelism on vote choice. He convinced candidates from four major parties to randomize their campaign platforms across villages: in a random set of villages they ran purely “clientelistic” campaigns, in others purely “public policy” campaigns, and in the rest their usual campaign, which combined both clientelistic and broad policy appeals (the control condition). The clientelistic and public policy campaign messages focused on the promise to deliver on education, infrastructure, jobs, and healthcare. However, the clientelistic treatment emphasized the provision of these goods as individual and club benefits to members of the local community, while the public policy treatment stressed these issues as part of a broader national agenda.

To avoid potentially confounding clientelism and ethnic voting, Wantchekon (2003) deployed his experiment in eight noncompetitive districts. Half of the districts were incumbent electoral strongholds, and the other half were opposition strongholds. Because the noncompetitive constituencies were ethnically homogeneous, ethnicity was held constant and the effect of switching from clientelistic to public policy messages on voter support for candidates could be assessed. Consistent with popular beliefs, Wantchekon found that, on average, the promise of clientelistic goods increased electoral support for all types of candidates, while appeals to public policy reduced voter support compared to the control group. Further, he found that factors like a candidate’s incumbency status and the gender of voters also shape the extent of influence of clientelistic versus public policy messages. Women were more supportive of public policy platform messages, while men were more likely to vote for clientelistic platforms. Wantchekon attributed this difference to the fact that women are less likely to benefit from clientelism than men are. Further, clientelism was more effective for incumbents because voters considered their promise of particularistic benefits to be more credible.

While Wantchekon’s (2003) study was monumental, it also left four lingering questions. First, the clientelism treatment promised both community and personal (e.g., government jobs) benefits. Therefore, it is unclear which type of benefit had a stronger influence. Second, the study considered the impact of the “promise” of clientelistic goods rather than their supply.15 Since incumbents benefit more from clientelism than challengers, are the effects driven by the prior provision of clientelistic goods in the communities rather than the promise of future benefits? That is, the campaign may have simply reminded the treated communities of prior transfers. Third, the study did not directly test the mechanisms linking clientelism and voting intentions. Finally, because the study was restricted to noncompetitive districts, further research is required to assess whether the conclusions also hold in competitive areas.16

Vicente (2014) leveraged a voter education campaign by the National Election Commission (NEC) of São Tomé and Príncipe (STP) in the country’s 2006 presidential election to assess the prevalence and effect of vote buying on voter behavior. The NEC’s door-to-door campaign against vote buying involved distributing about 10,000 voter-education leaflets to households in a random set of communities (treated). The leaflet contained the message: “Do not let your conscience take a banho – Your vote should be free and in good conscience.”17 The front page of the leaflet also featured three passages from STP’s electoral laws reminding voters that vote buying is illegal.

Vicente’s (2014) study found widespread vote buying in the election. About two thirds of citizens in the sample reported that they saw vote buying in their communities. One third reported that candidates offered an acquaintance a gift; 90% of those acquaintances were said to have accepted the bribe. However, Vicente (2014) found that the leaflets significantly reduced the influence of bribes on self-reported voter behavior. Specifically, citizens who received the campaign leaflets were more likely to say that candidates’ gifts did not influence their voting decision and to report that their voting was conducted in good conscience. Using polling station results, Vicente also found that the campaign against vote buying decreased voter turnout by 3% to 6% in targeted communities, which suggests that politicians may use the practice to encourage participation.18

Also, Vicente (2014) found that while the campaign increased the incumbent’s vote share by 4%, it decreased the challenger’s vote share by a similar amount. The campaign also reduced the reported incidence of vote buying by both the challenger (by 8%–9%) and the incumbent (by 6%) and decreased the value of the bribes offered by about $12 to $18 (U.S.). The author concluded that the civic education campaign appeared to have reduced the (reported) influence of vote buying on voting behavior (both turnout and choice). He further concluded that opposition candidates may be more reliant on vote buying because the campaign reduced their support relative to the incumbent, who may have substituted clientelistic promises for bribes, as noted by Wantchekon (2003) in Benin. The findings also suggested that incumbents and challengers may use vote buying for different purposes. Therefore, theories should reflect this possibility.

Similar to Vicente’s finding in STP, Kramon (2016b) demonstrated that vote buying may be effective among Kenyan voters. Using a list survey experiment, Kramon found that 20% to 25% of a nationally representative sample of citizens agreed with the statement that they “voted for a party or politician because they gave you money during the campaign.” The list-survey results contrasted with results for the direct version of the sensitive question, to which only about 7% of respondents admitted, which suggests the need for scholars to be cautious about potentially substantial response bias in measures of vote buying that rely on direct questions. Furthermore, Kramon found no difference in the reported effectiveness of vote buying in densely versus sparsely populated areas, which he used as a measure of the capacity of parties or candidates to monitor and perhaps enforce vote-buying deals, in contrast with previous studies, which suggested that vote buying is used where parties can monitor voter behavior (Brusco, Nazareno, & Stokes, 2004). Rather, Kramon found that voters located in areas with minimal radio service (a proxy for access to political information) were more likely to acknowledge that their vote was influenced by vote buying during the electoral campaign. Thus, like Vicente (2014), Kramon (2016b) showed that vote buying can be effective, especially among those with less access to political information (although the studies did not define the type of political information required).

Nonetheless, Vicente’s (2014) finding—that the anti-vote-buying campaign was effective—is exciting for democracy promoters. However, like the treatment in Wantchekon’s study in Benin, Vicente’s (2014) compound treatment made it impossible to determine whether such campaigns should emphasize the illegality of vote buying, merely ask voters to take the cash and vote their conscience, or both. The policy implications of the two approaches are different. For example, if threatening voters with legal sanction can reduce vote buying, then it might be ideal for promoting policy-based campaigns because voters would be deterred from accepting bribes in the first place. Yet, if voters are encouraged to renege on vote-buying arrangements, then while the arrangements would be ineffective, such practices would still be used in electoral campaigns because politicians could use them for purposes other than “buying votes” (see Kramon, 2016a). Also, like others working on the topic, Vicente did not directly observe vote buying, but understandably drew conclusions from citizens’ responses.

Kramon (2016a) examined potential mechanisms linking vote buying to voting decisions. He employed a unique experiment to show that politicians may use vote buying to signal the credibility of their campaign promises, rather than to ask citizens to sell their ballot. He suggested that candidates used electoral handouts to convey information about their credibility because it is harder for candidates to make credible policy commitments in clientelistic political systems (Bleck & Van de Walle, 2013). Drawing on psychology literature on the sources of credibility and on an experiment that varied whether or not respondents heard from a radio recording that a politician distributed gifts at a campaign rally in which the candidate also promised to provide local public goods, Kramon (2016a) argued and showed that handouts enable politicians to signal their competence, trustworthiness, and electoral viability, which are three dimensions of a credible candidate. The study helped explain why politicians may rely on handouts during election campaigns even though balloting remains secret.

The studies suggested that clientelism and vote buying can be effective electoral strategies, although they may work for different reasons. They also suggested that incumbents can rely on clientelism because they can make upfront payments, while challengers, who do not control state resources, can use vote buying to effectively signal the credibility of their commitment to deliver particularistic benefits. Ideally, scholars will continue to clarify—and use experiments to test—the theoretical connection between clientelism and vote buying, and for what purposes incumbents and challengers deploy each strategy.

Information: Voting and Political Responsiveness

The lack of information on politicians’ quality, policy positions, and performance incentivizes voting based on ethnicity (or other heuristics) and clientelism. In theory, the provision of information about candidates should induce political responsiveness, because voters can select high-quality candidates who share their preferences or punish poor-performing officeholders (for a review, see Ashworth, 2012). Scholars have leveraged transparency initiatives by citizen organizations and survey experiments to assess this theoretical proposition.

In one of the first significant studies of transparency initiatives in Africa, Humphreys and Weinstein (2012) studied Uganda’s parliament (2006–2011) in partnership with a civil society group, the Africa Leadership Institute (AFLI), to generate annual scorecards reporting on MPs’ performance in the National Assembly. The scorecards rated performance along three principal dimensions—work in plenary sessions, committee meetings, and constituency service. The AFLI generated scorecards for all 296 legislators; average scores for legislators in the incumbent and opposition parties were included for comparison. While the researchers found that voters were receptive to the information and used it to update their beliefs about their MPs’ performance, it did not ultimately influence their voting decision. Disseminating the scorecards to a random set of constituencies did not encourage targeted MPs to improve their performance along the dimensions specified on their scorecard, nor did it change officeholders’ chances of re-election. The authors argued that qualitative evidence about the behavior of incumbents in the treated locations suggested that, while such information may be necessary to hold politicians to account, it may not be sufficient in a dynamic electoral environment. Politicians can counteract the negative effects of information circulated several months before elections.

However, other scholars have found more promising evidence on the impact of information on holding politicians accountable and have underscored the conditions under which information may influence political behavior. For example, a field experiment conducted by Grossman and Michelitch (2018) in Uganda between the 2011 and 2016 elections suggested that the dissemination of information about politicians’ performance is likely to improve the responsiveness of incumbents on issues they can change (compared to those not under their control), but only in competitive constituencies. Gottlieb (2016) used a field experiment across 95 localities in Mali in which a random set of communities was given information about their local government’s capacity and responsibility, as well as the performance of their local politicians compared to those in nearby locations. The information raised voters’ expectations about what their local government could (and should) do. Gottlieb’s results revealed that individuals in the treated communities were more likely to sanction poor performance. Importantly, individuals in treated areas were more likely to challenge local leaders’ records during town hall meetings, which is conducive to holding officials accountable.

Some scholars have also found that, where it is available, information about candidates’ performance can be a stronger determinant of vote choice than ethnicity or party cues (Conroy-Krutz, 2013). Brierley, Kramon, and Ofosu (n.d.) found that policy discussions during parliamentary debates can moderate blind partisan-based voting in competitive electoral districts in new democracies. However, other experimental studies have concluded that ethnicity may condition the effect of information (Adida, Gottlieb, Kramon, & McClendon, 2017; Carlson, 2015). For example, Adida et al. (2017) found that voters may use favorable information to reward co-ethnic candidates and exploit bad performance information to punish non-co-ethnics. Yet Adida et al. (2017) also found that voters can simply ignore bad news about co-ethnics.

These studies suggested that the role of transparency (or information) in inducing electoral accountability may be more complicated than previously thought. However, it is difficult to draw firm conclusions from existing empirical work because the contexts and measures used vary across studies. Nevertheless, the results point to the future work required to build a solid theoretical and empirical basis for the role of information in political accountability.

Election Fraud and Violence

Africanists have also collaborated with civil society organizations to examine the effects of initiatives to reduce vote rigging and to promote peaceful elections in Ghana, Nigeria, and Uganda. Prominent among the interventions is election observation in which independent citizens’ organizations deploy thousands of trained personnel to monitor compliance with electoral laws during the pre-election, election day, and postelection phases to deter and detect fraud and violence (Bjornlund, 2004). Researchers have also assessed the effect of nationwide campaigns against electoral violence on politicians’ and voters’ behavior. These empirical studies have generated significant insights into whether such programs promote election integrity, and how political parties can rig elections.

In their pioneering experimental work on domestic election observers in Africa, Ichino and Schündeln (2012) partnered with Ghana’s largest citizen election monitoring group, the Coalition of Domestic Election Observers (CODEO), to examine the effect of monitors on voter registration fraud in the country’s 2008 general elections. CODEO deployed trained monitors to registration centers to detect and deter illicit voter registration practices. Until recently, it was hard to assess the impact of monitors on registration and election-day fraud because civil society groups deployed observers to locations with a high risk of fraud and violence (i.e., “hot spots”) based on historical or electoral competitiveness data. Randomization of the placement of observers helps identify the efficacy of election monitoring.

Ichino and Schündeln (2012) found that observers’ visits to voter registration centers reduced the rate of inflation of the voters’ list. Electoral areas (EAs) that were randomly visited by monitors during the 13-day registration period experienced a 3.5% reduction in the increase in the number of registered voters between 2004 and 2008. However, Ichino and Schündeln found that politicians displaced registration fraud to unmonitored stations. Unmonitored EAs in constituencies that had election observers at some locations saw a 2.7% increase in new voters, which suggests that parties shifted fraud to nearby EAs without monitors. Nevertheless, the presence of observers reduced the overall level of illegal voter registration by about 4% for EAs located in constituencies with monitors. These findings suggest that independent election monitors can thwart attempts by political party agents to rig the vote by packing the voter list with unqualified persons before elections.

A central limitation of Ichino and Schündeln’s (2012) report is that they were unable to identify how registration fraud occurs, or how parties shift fraudulent activities from one location to another. The authors plausibly suggested that parties may have bused illegal voters between registration centers.19 The authors discounted the role of election officials in the registration fraud. However, because it might only take a simple phone call for partisan Electoral Commission officials to direct the steps of their co-partisan party’s operatives, further research is required to understand how parties and local officials may coordinate fraud.

Ofosu and colleagues investigated the effect of domestic election observers on election-day fraud and violence in Ghana’s 2012 general elections (Asunka, Brierley, Golden, Kramon, & Ofosu, 2019). Partnering with CODEO, the researchers assigned a representative sample of 60 (competitive and noncompetitive) constituencies to one of three levels of observer saturation (low, 30%; medium, 50%; and high, 80%), which indicated the proportion of a fixed sample of polling stations within constituencies to which CODEO agreed to send monitors. Consistent with previous studies, the study found that monitors deterred fraud and violence where they were stationed. Their presence reduced suspicious turnout rates, which suggested that they curtailed the use of tactics like multiple voting or ballot stuffing at the treated locations. Polling stations with observers also recorded lower levels of voter intimidation.
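A minimal sketch of this kind of two-stage saturation design is given below. The saturation shares follow the proportions quoted above, but the function and data structures are illustrative assumptions rather than the authors' actual protocol.

```python
import random

SATURATION = {"low": 0.30, "medium": 0.50, "high": 0.80}

def assign_saturation_design(constituencies, stations_by_constituency, seed=7):
    """Two-stage randomization sketch.

    Stage 1: randomly split constituencies into three groups, one per
    saturation level. Stage 2: within each constituency, randomly select
    that share of its sampled polling stations to receive an observer.
    """
    rng = random.Random(seed)
    shuffled = list(constituencies)
    rng.shuffle(shuffled)
    third = len(shuffled) // 3
    groups = {"low": shuffled[:third],
              "medium": shuffled[third:2 * third],
              "high": shuffled[2 * third:]}
    design = {}
    for level, members in groups.items():
        for constituency in members:
            stations = list(stations_by_constituency[constituency])
            rng.shuffle(stations)
            k = round(SATURATION[level] * len(stations))
            design[constituency] = {"saturation": level,
                                    "monitored": stations[:k],
                                    "unmonitored": stations[k:]}
    return design
```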

Furthermore, in noncompetitive constituencies, political parties were shown to be able to shift fraud to unmonitored stations, but they were unable to do so in competitive districts. Thus, in their strongholds, parties are able to use their dense social and political network to move fraud (although it is not known how this actually occurs). However, parties were more likely to relocate voter intimidation efforts to unmonitored stations in competitive districts. These results suggest that local electoral competition may condition the effect of election observers on fraud and violence. However, as Ichino and Schündeln (2012) found, on average, the presence of more observers within constituencies reduced the overall level of fraud and violence.

How election monitoring reduces fraud and violence remains a matter of speculation in the literature. For example, monitors may reduce fraud because they increase the probability of detection and possible legal sanction. Alternatively, election officials and party activists simply may not want to be caught engaging in the socially reprehensible behavior of stealing (Hyde, 2009). Current research suggests both mechanisms may explain the effect of monitoring. For example, Callen, Gibson, Jung, and Long (2016) examined the impact of simply dispatching letters of intent to monitor election tallies to election officials on election day during Uganda’s 2011 presidential election. In treatment stations, they deployed their EC-accredited research assistants to deliver letters to polling officials that stated that tallies would be photographed using smartphones and compared against official results. In an innovative move, they also varied the content of the letter, which helped to reveal how election monitoring discourages ballot fraud. In one treatment arm, the letter stated that tallies would be captured by smartphones at the close of the polls (monitoring treatment). In another, polling station officials were simply reminded of the penalty for vote rigging in Uganda (punishment treatment).20 A third version of the letter contained both monitoring and punishment statements (both). Comparing election manipulation outcomes in treatment versus control polling stations, Callen et al. (2016) found that all three treatments almost equally reduced irregularities in the elections. Moreover, since all treatments generated similar effects, it is not clear whether just the observers’ visit (without necessarily staying there the entire day) generated the effects. Thus, while the authors suggested a switch to this mode of observation, future research should directly combine the traditional mode of monitoring with these treatment arms to test their claim.

The limited studies on the effect of election observation have focused only on its impact on political party operatives. Much less is known about whether (or how) election observation activities influence the attitudes of voters regarding their participation in, and perception of, the integrity of the electoral process (Bush & Prather, 2017, is a notable exception). Research on whether, by reducing election fraud and violence, election observation achieves its ultimate goal of incentivizing political responsiveness is also limited. Leveraging Asunka et al.’s (2019) experimental design and legislator spending on their state-provided Constituency Development Fund, follow-up research found that legislators elected in constituencies that were randomly assigned to intense election-day monitoring in Ghana’s 2012 elections spent more of the funds on public goods during their terms in office (Ofosu, 2019). The findings suggested that by increasing the quality of election-day balloting, election monitoring can incentivize politicians’ responsiveness to citizens’ demands. The study provided further evidence to suggest that the effect may run through incumbents’ expectation of future election-day monitoring that will limit their ability to rig elections. More research is required to establish the downstream effects of monitoring on voter attitudes and politicians’ behavior in developing democracies.

Regarding election violence, Collier and Vicente (2014) examined the impact of an antiviolence campaign effort by an independent civil society organization, Action Aid International Nigeria (AAIN), on political participation and behavior during Nigeria’s 2007 elections, which were marred by violence. In partnership with AAIN volunteers, the authors carried out a field experiment in which, within pairs of villages, one community was assigned to treatment and the other to control. In treated locations, the campaigners distributed pamphlets and clothing items bearing an antiviolence message: “No to political violence! Vote against violent politicians.” Importantly, AAIN also organized town hall meetings, popular theater performances, and door-to-door campaigns to increase political participation and to encourage voters to withdraw their support from candidates who used violence. A critical component of the experiment was the town hall meetings, which were designed to help citizens overcome the collective action problems associated with raising concerns about violence with authorities. Local representatives (officeholders) attended the meetings to discuss ways to counter politically motivated violence, and the popular plays, featuring dramas about bad (violent) versus good politicians, targeted youth (the main group politicians use to perpetrate violence) and others who were hard to attract to the town hall meetings (including women). At least one town meeting and one popular theater performance were organized for each treatment location.

The authors discovered that the campaigns reduced violence in the treated locations (as reported by residents and independent journalists) and increased citizens’ confidence in the electoral process. Further, the program encouraged individuals to take action (sending a pre-stamped postcard) to call attention to violence in their communities. Importantly, the antiviolence campaign increased turnout, which indicates that politicians use violence to reduce voter participation. The findings suggest that antiviolence campaigns that address collective action problems can boost citizens’ confidence and participation in the electoral process. Yet the authors also found that, while the campaign increased support for the incumbent, it depressed the vote for the opposition, which raises concerns that officeholders can substitute other election manipulation tactics, such as ballot fraud, vote buying, or clientelism, for violence.21

Ethnicity

Scholars have used various forms of experimental designs to understand the role of ethnicity in politics in Africa’s multiethnic societies. The studies have provided valuable insights into how political institutions may shape ethnic mobilization, the mechanisms linking the well-established relationship between ethnicity and public goods and violent conflicts, the extent of ethnic bias in voter preferences and economic relations, and when citizens can sidestep ethnicity in their vote choice.

To explore how institutions might shape the role of ethnicity in electoral competition, Posner (2004) used a more or less exogenous colonial border between Zambia and Malawi to show that, within a given institutional setting, the relative size of a cultural group determines whether politicians will mobilize it in electoral contests. Because winning elections involves putting together a winning coalition of different groups, the size of an ethnic group in a given country determines its electoral importance. Accordingly, while Chewas and Tumbukas have similar attributes, on average, across the colonial border, politicians mobilize these groups in Malawi, where they are large and advantageous to coalition building, but they are mostly ignored in Zambia, where they are small compared to other ethnic groups. The study underscores the instrumentality of ethnic mobilization in electoral politics and suggests that electoral institutions interact with the relative sizes of identity groups to influence ethnic-based politics in multiethnic societies.

Adida, Combes, Lo, and Verink (2016) employed a survey experiment to show that a potential vehicle for politicians to build a cross-ethnic winning coalition may be to choose a spouse from a politically relevant ethnic group. They discovered that leaders were married to non-co-ethnics in more than half of the 18 African countries covered by Afrobarometer Rounds 3 and 4. The authors argued that, by marrying across ethnic lines, politicians can credibly signal coalition building before the elections and thus win support. The researchers tested their argument in Benin, where President Thomas Boni Yayi, a Yoruba, is married to Chantal Yayi, a Fon. The authors found that priming the ethnic identity of the president’s wife increased the vote share of the president among his wife’s co-ethnics. However, Adida et al. (2016) provided evidence to suggest that voters were supportive of candidates who married their co-ethnics for symbolic rather than instrumental reasons. In general, there is a need to explore the political implications of cross-ethnic marriages in Africa.

For policymakers and many scholars, understanding why ethnically diverse societies generate fewer public goods and engage in deadly conflicts, and why voters choose officeholders based on their ethnicity, is vital to designing effective strategies to consolidate Africa’s multiethnic democracies.

In a landmark study, Habyarimana, Humphreys, Posner, and Weinstein (2007) used a series of cooperative games in a lab-in-the-field experiment involving 300 subjects recruited from urban slums in Kampala, Uganda, to examine the mechanism(s) that may explain the strong association between ethnic diversity and public goods provision (Banerjee, Iyer, & Somanathan, 2005). They showed that ethnically homogeneous societies are likely to provide higher levels of public goods, such as low crime, good schools, better healthcare services, adequate sanitation, and clean drinking water, for themselves because, in a similar setting, co-ethnics tend to choose to cooperate, which is advantageous for collective action, while non-co-ethnics do not (a strategy selection mechanism). Furthermore, cooperation is enhanced by the fact that co-ethnics are often closely linked in strong social networks, which enables them to find (and credibly threaten to apply) a social sanction of defectors. Habyarimana et al. (2007) found no evidence to support the argument that homogeneous societies are more prosperous because individuals share common preferences or altruism toward in-groups, or that their shared culture enhances their productivity. The findings of Habyarimana et al. suggest that, contrary to popular belief, multiethnic societies can agree on social outcomes that benefit everyone, but that they require institutions that promote cross-ethnic social networks and facilitate fair collective action efforts. Therefore, scholars need to pay attention to uncovering such institutions or factors.

As noted, many researchers contend that voters often make political decisions based on ethnicity because they lack relevant information about candidates’ quality and policy positions; ethnicity provides a cheap heuristic for such decisions. This instrumental reasoning assumption underpins much of the literature on ethnic politics in Africa (Conroy-Krutz, 2013). Thus, academics and policymakers argue that providing voters with information about candidates’ quality and performance will minimize ethnic-based politics and encourage democratic responsiveness. In fact, using a survey experiment, Conroy-Krutz (2013) showed that when voters had access to information indicating that a hypothetical co-ethnic candidate was unpopular, had been involved in past corruption, had a low level of education, had performed poorly in office, or had failed to distribute personal benefits during campaigns, they were significantly more likely to withdraw their support from that candidate. While the author’s use of a within-subject design, in which he exposed individuals to both control and treatment conditions during the survey, generates concerns about potential spillover effects and social desirability bias, the study suggests that voters may be willing to substitute quality information about candidates’ preferences, capabilities, and electoral viability for ethnicity when choosing officeholders, and that they may not unconditionally support co-ethnics.

Carlson (2015) used a vote-choice (conjoint survey) experiment to demonstrate that ethnicity and performance interact to determine voters’ preferences. She found that voters in Uganda prefer candidates who are both co-ethnics and good performers; voters there do not favor co-ethnic shirkers. However, in contrast to Conroy-Krutz (2013), Carlson (2015) revealed that voters do not automatically transfer their vote to high-quality non-co-ethnic aspirants.

Adida et al. (2017) found that an increase in access to information about legislators’ performance in parliament (attendance in plenary sessions and committee meetings) relative to other parliamentarians reduced identity voting. Similar to Carlson (2015), Adida et al. (2017) found that voters interviewed before Benin’s 2015 legislative elections were more likely to reward high-performing politicians. However, the results were more consistent with the concept of motivated reasoning described in the psychology literature: identity conditions how voters process new information. Voters were more likely to reward high-performing co-ethnics (but not non-co-ethnics) and punished poor-performing non-co-ethnics. The authors used a comprehension survey to demonstrate how voters’ identities condition how they selectively recall information about candidates. Together, these experimental works suggest that ethnicity plays a role in how voters make decisions at the polls but that citizens also take the performance of co-ethnic candidates into account.

In a unique examination of whether electoral competition exacerbates ethnic tensions, Michelitch (2015) showed that, irrespective of the stage of the electoral cycle, citizens engage in economic discrimination along ethnic lines. Specifically, commuters in Ghana’s capital, Accra, can generally negotiate better transportation fares (and perhaps other small-scale market prices) from co-ethnic drivers than from non-co-ethnic drivers. During election periods, however, economic discrimination against non-co-ethnics further diverges along partisan lines: commuters who do not share their driver’s ethnicity are nonetheless able to negotiate substantially better prices if they share the driver’s partisanship than if they do not. The findings demonstrate that electoral competition can deepen intergroup discrimination and can affect commonplace interactions between citizens outside the political realm. They also suggest that institutions that promote cross-ethnic party coalition building can foster ethnic accommodation.22
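A similar stylized sketch, again in Python with simulated data, illustrates the kind of factorial comparison that underlies such bargaining experiments: negotiated prices are compared across interactions in which buyer and driver do or do not share ethnicity and partisanship, inside and outside an election period. All variable names, magnitudes, and the regression specification are assumptions for illustration; they are not Michelitch’s (2015) actual data or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 3000  # hypothetical bargaining interactions

df = pd.DataFrame({
    "coethnic":   rng.integers(0, 2, n),  # 1 = buyer and driver share ethnicity
    "copartisan": rng.integers(0, 2, n),  # 1 = buyer and driver share partisanship
    "election":   rng.integers(0, 2, n),  # 1 = interaction occurs near an election
})

# Simulated negotiated price (illustrative units): co-ethnics always get a
# discount; a partisan discount appears only during the election period.
df["price"] = (
    10.0
    - 0.6 * df["coethnic"]
    - 0.5 * df["copartisan"] * df["election"]
    + rng.normal(0, 1.0, n)
)

# OLS with an interaction recovers the (invented) ethnic and partisan
# discounts and shows how the partisan gap emerges only at election time.
model = smf.ols("price ~ coethnic + copartisan * election", data=df).fit()
print(model.summary().tables[1])
```

The interaction term captures the key quantity of interest: whether the partisan price gap appears only when an election is near, holding the ever-present co-ethnic discount constant.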

Conclusion

This review of the literature on African politics demonstrates the diverse set of issues that Africanists have explored with experimental methods. Political scientists have leveraged randomization to provide causal evidence on initiatives that seek to promote voter turnout and civic participation and to mitigate the constraints on electoral accountability, and to investigate the role of ethnicity in Africa’s multiparty regimes.

However, for experiments, especially field experiments, to cement their contribution to the study of African politics, scholars need to overcome two main challenges. First, the concentration of experimental research in a few African countries means that results in the literature may be exceptions rather than the rule. Researchers should therefore make systematic efforts to coordinate research designs and to replicate results within and across countries in order to enhance the generalizability of experimental findings.23 Second, scholars need to begin a serious discussion about ethical standards to guide the conduct of field experiments in African countries. That discussion must engage relevant state actors and scholarly communities on the continent so that the voices and concerns of those who bear the costs of these studies are heard and addressed in any adopted standards.

Finally, some important subjects, such as the causal effects of civil war, political regimes, and colonialism, cannot be examined with experimental tools for ethical (and practical) reasons. Other quantitative and qualitative research methods therefore remain fundamental to the study of politics in Africa and elsewhere. Still, experimental methods can deepen our understanding of the microfoundations of many pertinent topics. Randomization cannot answer every question, but where it is ethically and practically feasible, scholars should harness its benefits to test the many assumptions about how politics works in Africa.

Further Reading

  • Arrow, K. J. (1963). Social choice and individual values. New York, NY: John Wiley & Sons.
  • Björkman, M., & Svensson, J. (2009). Power to the people: Evidence from a randomized field experiment on community-based monitoring in Uganda. The Quarterly Journal of Economics, 124(2), 735–769.
  • Björkman, M., & Svensson, J. (2010). When is community-based monitoring effective? Evidence from a randomized experiment in primary health in Uganda. Journal of the European Economic Association, 8(2–3), 571–581.
  • Björkman-Nyqvist, M., de Walque, D., & Svensson, J. (2014). Information is power: Experimental evidence of the long-run impact of community-based monitoring. Washington, DC: World Bank.
  • Humphreys, M., Masters, W. A., & Sandbu, M. E. (2006). The role of leaders in democratic deliberations: Results from a field experiment in São Tomé and Príncipe. World Politics, 58(4), 583–622.
  • Robinson, A. L. (2016). Nationalism and ethnic-based trust: Evidence from an African border region. Comparative Political Studies, 49(14), 1819–1854.
  • Sheely, R. (2015). Mobilization, participatory planning institutions, and elite capture: Evidence from a field experiment in rural Kenya. World Development, 67, 251–266.

References

  • Adida, C., Gottlieb, J., Kramon, E., & McClendon, G. (2017). Reducing or reinforcing in-group preferences? An experiment on information and ethnic voting. Quarterly Journal of Political Science, 12(4), 437–477.
  • Adida, C. L., Combes, N., Lo, A., & Verink, A. (2016). The spousal bump: Do cross-ethnic marriages increase political support in multiethnic democracies? Comparative Political Studies, 49(5), 635–661.
  • Aker, J. C., Collier, P., & Vicente, P. C. (2017). Is information power? Using mobile phones and free newspapers during an election in Mozambique. Review of Economics and Statistics, 99(2), 185–200.
  • Ashworth, S. (2012). Electoral accountability: Recent theoretical and empirical work. Annual Review of Political Science, 15, 183–201.
  • Asunka, J., Brierley, S., Golden, M., Kramon, E., & Ofosu, G. (2019). Electoral fraud or violence: The effect of observers on party manipulation strategies. British Journal of Political Science, 49(1), 129–151.
  • Banerjee, A., Iyer, L., & Somanathan, R. (2005). History, social divisions and public goods in rural India. Journal of the European Economic Association, 3(2–3), 639–647.
  • Bjornlund, E. (2004). Beyond free and fair: Monitoring elections and building democracy. Washington, DC: Woodrow Wilson Center Press.
  • Blair, G. (2015). Survey methods for sensitive topics. APSA Comparative Politics Newsletter, 24(1), 12–16.
  • Blair, G., Imai, K., & Zhou, Y.-Y. (2015). Design and analysis of the randomized response technique. Journal of the American Statistical Association, 110(511), 1304–1319.
  • Bleck, J., & Van de Walle, N. (2013). Valence issues in African elections: Navigating uncertainty and the weight of the past. Comparative Political Studies, 46(11), 1394–1421.
  • Brierley, S. (n.d.). Unprincipled principals: Co-opted bureaucrats and corruption in Ghana. American Journal of Political Science.
  • Brierley, S., Kramon, E., & Ofosu, G. (n.d.). The moderating effect of debates on political attitudes. American Journal of Political Science.
  • Brusco, V., Nazareno, M., & Stokes, S. C. (2004). Vote buying in Argentina. Latin American Research Review, 39(2), 66–88.
  • Bush, S. S., & Prather, L. (2017). The promise and limits of election observers in building election credibility. The Journal of Politics, 79(3), 921–935.
  • Callen, M., Gibson, C. C., Jung, D. F., & Long, J. D. (2016). Improving electoral integrity with information and communications technology. Journal of Experimental Political Science, 3(1), 4–17.
  • Carlson, E. (2015). Ethnic voting and accountability in Africa: A choice experiment in Uganda. World Politics, 67(2), 353–385.
  • Carlson, M. D. A., & Morrison, R. S. (2009). Study design, precision, and validity in observational studies. Journal of Palliative Medicine, 12(1), 77–82.
  • Collier, P., & Vicente, P. C. (2014). Votes and violence: Evidence from a field experiment in Nigeria. The Economic Journal, 124(574), 327–355.
  • Conroy-Krutz, J. (2013). Information and ethnic politics in Africa. British Journal of Political Science, 43(2), 345–373.
  • Cox, D. R., & Reid, N. (2000). The theory of the design of experiments. Washington, DC: Chapman and Hall/CRC.
  • Downs, A. (1957). An economic theory of democracy. Boston, MA: Addison-Wesley.
  • Dunning, T. (2012). Natural experiments in the social sciences: A design-based approach. Cambridge, UK: Cambridge University Press.
  • Dunning, T. (2016). Transparency, replication, and cumulative learning: What experiments alone cannot achieve. Annual Review of Political Science, 19, S1–S23.
  • Fafchamps, M., & Vicente, P. C. (2013). Political violence and social networks: Experimental evidence from a Nigerian election. Journal of Development Economics, 101, 27–48.
  • Gerber, A., & Green, D. (2012). Field experiments: Design, analysis, and interpretation. New York, NY: Norton.
  • Gottlieb, J. (2016). Greater expectations: A field experiment to improve accountability in Mali. American Journal of Political Science, 60(1), 143–157.
  • Gottlieb, J. (2017). Explaining variation in broker strategies: A lab-in-the-field experiment in Senegal. Comparative Political Studies, 50(11), 1556–1592.
  • Grossman, G., & Michelitch, K. (2018). Information dissemination, competitive pressure, and politician performance between elections: A field experiment in Uganda. American Political Science Review, 112(2), 280–301.
  • Grossman, G., Michelitch, K., & Santamaria, M. (2017). Texting complaints to politicians: Name personalization and politicians’ encouragement in citizen mobilization. Comparative Political Studies, 50(10), 1325–1357.
  • Habyarimana, J., Humphreys, M., Posner, D. N., & Weinstein, J. M. (2007). Why does ethnic diversity undermine public goods provision? American Political Science Review, 101(4), 709–725.
  • Hainmueller, J., Hopkins, D. J., & Yamamoto, T. (2013). Causal inference in conjoint analysis: Understanding multidimensional choices via stated preference experiments. Political Analysis, 22(1), 1–30.
  • Hicken, A. (2011). Clientelism. Annual Review of Political Science, 14, 289–310.
  • Humphreys, M. (2015). Reflections on the ethics of social experimentation. Journal of Globalization and Development, 6(1), 87–112.
  • Humphreys, M. & Weinstein, J. (2009). Field experiments and the political economy of development. Annual Review of Political Science, 12, 367–378.
  • Hyde, S. D. (2009). How international observers detect and deter fraud. In R. M. Alvarez, T. E. Hall, & S. D. Hyde (Eds.), Election fraud: Detecting and deterring electoral manipulation (pp. 201–215). Washington, DC: Brookings Institution Press.
  • Ichino, N., & Schündeln, M. (2012). Deterring or displacing electoral irregularities? Spillover effects of observers in a randomized field experiment in Ghana. The Journal of Politics, 74(1), 292–307.
  • Kramon, E. (2016a). Electoral handouts as information: Explaining unmonitored vote buying. World Politics, 68(3), 454–498.
  • Kramon, E. (2016b). Where is vote buying effective? Evidence from a list experiment in Kenya. Electoral Studies, 44, 397–408.
  • McClendon, G. (2012). Ethics of using public officials as field experiment subjects. Newsletter of the APSA Experimental Section, 3(1), 13–20.
  • Michelitch, K. (2015). Does electoral competition exacerbate interethnic or interpartisan economic discrimination? Evidence from a field experiment in market price bargaining. American Political Science Review, 109(1), 43–61.
  • Moehler, D. C., & Conroy-Krutz, J. (2016). Partisan media and engagement: A field experiment in a newly liberalized system. Political Communication, 33(3), 414–432.
  • Nathan, N. L. (2016). Local ethnic geography, expectations of favoritism, and voting in urban Ghana. Comparative Political Studies, 49(14), 1896–1929.
  • Nichter, S. (2008). Vote buying or turnout buying? Machine politics and the secret ballot. American Political Science Review, 102(1), 19–31.
  • Niemi, R. G. (1976). Costs of voting and nonvoting. Public Choice, 27(1), 115–119.
  • Posner, D. N. (2004). The political salience of cultural difference: Why Chewas and Tumbukas are allies in Zambia and adversaries in Malawi. American Political Science Review, 98(4), 529–545.
  • Posner, D. N. (2005). Institutions and ethnic politics in Africa. Cambridge, UK: Cambridge University Press.
  • Rahman, A. (1999). Micro-credit initiatives for equitable and sustainable development: Who pays? World Development, 27(1), 67–82.
  • Schaffer, F. C. (2007). Elections for sale: The causes and consequences of vote buying. Boulder, CO: Lynne Rienner.
  • Stokes, S. C., Dunning, T., Nazareno, M., & Brusco, V. (2013). Brokers, voters, and clientelism: The puzzle of distributive politics. Cambridge, UK: Cambridge University Press.
  • Teele, D. (2014). Reflections on the ethics of field experiments. In D. L. Teele (Ed.), Field experiments and their critics: Essays on the uses and abuses of experimentation in the social sciences. New Haven, CT: Yale University Press.
  • Vicente, P. C. (2014). Is vote buying effective? Evidence from a field experiment in West Africa. The Economic Journal, 124(574), 356–387.
  • Wantchekon, L. (2003). Clientelism and voting behavior: Evidence from a field experiment in Benin. World Politics, 55(3), 399–422.
  • Young, L. E. (2019). The psychology of state repression: Fear and dissent decisions in Zimbabwe. American Political Science Review, 113(1), 140–155.

Notes

  • 1. When subjects decide which value of the variable they take, it becomes difficult to determine whether the variable is responsible for the outcome under study or whether some other underlying factor explains both selection into the different conditions and the observed outcomes.

  • 2. More sophisticated experimental designs require analysts to account for treatment-assignment weights and blocking variables in the analysis (see Gerber & Green, 2012).

  • 3. Also see Blair (2015) for relevant literature on the different modes of survey experiments.

  • 4. Another form of survey experiment growing in popularity in the study of African politics is conjoint survey experiments, which allow scholars to simultaneously estimate how multiple factors affect a single outcome (Hainmueller, Hopkins, & Yamamoto, 2013). The method helps to boost the external validity of survey experiments because, by randomly varying multiple factors that may influence how, for example, voters select officeholders, it mimics real-life decisions.

  • 5. This type of randomized response experiment is called a forced-choice design (see Blair, Imai, & Zhou, 2015, for other variations).

  • 6. The Web of Science was mainly used to conduct the search.

  • 7. The list of studies and their classification is available here. The list is by no means an exhaustive account of experiments on politics conducted on the continent, but it covers most of the prominent studies in the field.

  • 8. The “Other” category includes collective action, gender quotas, local service delivery, and participatory democracy. Researchers have also used a variety of experiments to examine the functioning of local bureaucracies (Brierley, n.d.; Raffler, 2016) and authoritarian politics in Africa (Young, 2019).

  • 9. However, issues related to external validity or generalizability are not unique to experimental research (e.g., see Carlson & Morrison, 2009).

  • 10. For a broader discussion, see Dunning (2016).

  • 11. Average turnout was calculated using data from International IDEA’s database on turnout.

  • 12. Afrobarometer data averaged over seven rounds (R1–R7).

  • 13. Because the experiment relied on mobile phones, it reached only the 60% of the country’s polling stations that had mobile coverage.

  • 14. They also report that the civic education component increased the incumbent’s vote share by 5% and decreased that of the challenger by 3%. Similarly, the newspaper treatment increased the incumbent’s vote share by 4% and that of the opposition by only 1%. The hotline did not affect vote shares.

  • 15. Gottlieb (2017) considered the practical challenges that politicians face in supplying clientelistic goods. She argued that brokers in clientelistic political systems may exploit their position to advance their own interests when mobilizing supporters (voters) for their principals (politicians). Her study suggests that the process through which clients (voters) select brokers shapes the agency problems politicians face in a clientelistic political system.

  • 16. While some researchers also considered issues related to clientelism in African politics, it is hard to compare their studies to Wantchekon’s (2003) study because they often focused on parliamentary candidates, while he focused on presidential elections.

  • 17. Braho means showering voters with cash or gifts. A cartoon in the lower part of the leaflet said “Vote-buying . . . No Way!!!”.

  • 18. Nichter (2008) made a similar observation in Argentina.

  • 19. For example, in footnote 7, the authors wrote: “An NDC (National Democratic Congress) agent and a taxi driver independently reported to a domestic registration observer in Trobu-Amasaman constituency in Greater Accra Region that, prior to the observer’s arrival, NPP (New Patriotic Party) pick-up trucks conveyed people from nearby villages to the registration center. Similarly, in Ningo-Prampram constituency in Greater Accra Region, a domestic registration observer reported that both NDC and NPP were busing people to registration centers.”

  • 20. Officials involved in vote rigging can be fined $1,000, imprisoned for five years, or both, for inaccurately reporting voting returns (Callen et al., 2016, p. 9).

  • 21. Building on these findings, Fafchamps and Vicente (2013) showed that the campaign had a significant effect on individuals who were not directly targeted, and that these indirect effects were often similar in size to the direct effects. Among those directly targeted by the campaign, social proximity to another treated individual reinforced the treatment, while untreated individuals with kinship ties to a treated individual exhibited positive spillover effects, suggesting that social ties were the strongest channel through which the spillovers occurred.

  • 22. Nonetheless, survey and lab experiments can be used to test assumptions relating to the causes and impacts of these important topics.

  • 23. The Evidence in Governance and Politics (EGAP) Metaketa initiatives are an attempt to mitigate this concern.