
Using Online Experiments to Study Political Decision Making

Summary and Keywords

The field of political science is experiencing a new proliferation of experimental work, thanks to a growth in online experiments. Administering traditional experimental methods over the Internet allows for larger and more accessible samples, quick response times, and new methods for treating subjects and measuring outcomes. As we show in this article, a rapidly growing proportion of published experiments in political science take advantage of an array of sophisticated online tools. Indeed, during a relatively short period of time, political scientists have already made huge gains in the sophistication of what can be done with just a simple online survey experiment, particularly in realms of inquiry that have traditionally been logistically difficult to study. One such area is the important topic of social interaction. Whereas experimentalists once relied on resource- and labor-intensive face-to-face designs for manipulating social settings, creative online efforts and accessible platforms are making it increasingly easy for political scientists to study the influence of social settings and social interactions on political decision-making. In this article, we review the onset of online tools for carrying out experiments and we turn our focus toward cost-effective and user-friendly strategies that online experiments offer to scholars who wish to understand political decision-making not only in isolated settings but also in the company of others. We review existing work and provide guidance on how scholars with even limited resources and technical skills can exploit online settings to better understand how social factors change the way individuals think about politicians, politics, and policies.

Keywords: online experiments, Web experiments, Internet experiments, social influence, social networks, political decision making

The Introduction of Online Experiments to Political Science

Political scientists came to experimentation somewhat late, relative to their neighboring scholars in psychology and behavioral economics. Although political science experiments did take place during the mid-20th century (e.g., Eldersveld, 1956), experiments did not become a widely used tool in the field (to the extent that they are today) until the 1990s (Druckman, Green, Kuklinski, & Lupia, 2006). Perhaps one advantage of this late arrival, however, was that scholars began to familiarize themselves with experimental methods just as the Internet was cementing its role as a crucial tool for social science research. For example, JSTOR (the popular online article search engine) was founded in 1995 (Google Scholar nearly 10 years after that), creating new generations of students and scholars who were more likely to sift databases on the web than the stacks at the library. Thus, unlike fields like psychology, which had to adjust their decades-long use of experiments to this new and unfamiliar medium, political scientists began adopting both experiments and Internet tools nearly simultaneously.

What has changed dramatically during the short time period since is how these online experiments are being used. Initially, experimentalists went online largely to administer straightforward survey experiments—that is, experiments embedded in ostensibly straightforward surveys. There is no better review of the early history of computer-assisted interviewing than Paul Sniderman’s (2011) essay, in which he describes Merrill Shanks’s early design of a computer-assisted interviewing program. That system, as Sniderman explains, was designed to alter the sequence of questions a respondent received based on his or her prior responses. Sniderman realized that the program could be amended to include randomization—exposing respondents to different questions at random. Along with Tom Piazza, Sniderman put the idea into action and effectively created the first software allowing for computer-administered randomized survey experiments (for more detail, see Sniderman, 2011).

Over time, political scientists worked to introduce numerous omnibus surveys that allowed researchers not only to contribute individual questions to one collective online survey but also to contribute randomized experiments to these pooled resources. For example, the Multi-Investigator Project (see Sniderman, 2011 for more detail), Time-sharing Experiments for the Social Sciences (e.g., Franco, Malhotra, & Simonovits, 2015), the Cooperative Congressional Election Study (e.g., Nicholson, 2012), Volunteer Science (e.g., Radford et al., 2016), and (on occasion) the American National Election Studies (e.g., Robison, 2015) all offer opportunities for researchers to submit short experimental manipulations for inclusion.

As Druckman et al. (2006) point out, more than half of all experiments ever to appear in the American Political Science Review (APSR) were published after 1992. Analyses of these studies indicate that a growing proportion of these experiments were administered online. In the two decades that followed (i.e., from 1992 to 2013), 25.7% of the experiments that appeared in the APSR were administered online. Of course, online experiments have become increasingly popular over time: the earliest appeared in 1997, and the percentages grew rapidly thereafter. Between 2008 and 2013, 46.8% of all experiments published in the APSR were conducted online. Figure 1 illustrates the number of articles in the APSR from 1992 until the present day that include experiments (dark line) and the number of articles that include online experiments (dashed line). As is evident from the figure, online experiments are a rapidly growing proportion of experimental work in political science.


Figure 1. Experiments and online experiments published in American Political Science Review.

During a relatively short period of time, political scientists have made huge gains in the sophistication of what can be done with just a simple online survey experiment. Software allows researchers to record how long subjects spend reading information (e.g., Anderson, Redlawsk, & Lau, 2019; Redlawsk, 2002), answering questions (e.g., Burden & Klofstad, 2005), or completing tasks (e.g., Kuo, Malhotra, & Mo, 2017). Technology enables researchers to incorporate visual items with high precision (Couper, Conrad, & Tourangeau, 2007) and even to record exactly where and when subjects look at their computer screens (Dunaway, Searles, Sui, & Paul, 2018). “Today,” writes Shanto Iyengar (2011), one of the pioneers of experimentation in political science, “traditional experimental methods can be rigorously and far more efficiently administered using an online platform” (p. 82).

One element of experimental work that proves more difficult for experimentalists hoping to conduct their studies online is human interaction. Politics is a social phenomenon. The remainder of this article is thus focused on recent advances and lingering issues with respect to studying social influence with online experiments.

Studying Social Influence With Online Experiments

For half a century or more, political scientists have been interested in understanding how social interactions shape the way people make political decisions. In 1954, for example, Berelson, Lazarsfeld, and McPhee argued that “cross-pressures”—conflicting political perspectives existing simultaneously in one’s social environment—can make people politically ambivalent and delay their political decisions. A wealth of scholarly work has since addressed the influence of social context on decision-making, with no clear consensus emerging about how people negotiate the pressures within their social environments. Scholars continue to debate how and when social settings increase or decrease people’s willingness to engage in politically relevant behaviors and the extent that they moderate or polarize their political views.

The Internet and other software innovations provide a wealth of new tools that can be used to inform these debates and to expand the knowledge of how social context shapes political decision-making. Whereas scholars interested in collecting opinions, measuring behaviors, and observing social interactions were once restricted to labs, phones, or fieldwork, today’s empirical social scientists have a growing pool of research participants and computational resources just a few clicks away. The work reviewed here can help scholars leverage this growing set of tools to investigate the effects of social context, and it lowers the barriers to entering a still emerging and increasingly important field of study.

Broadly, studies of social context tend to focus on audiences, networks, or groups. An audience is simply the recipient: Who is observing a political act? A social network consists of any set of actors (or “nodes”) and the relations (or “ties” or “edges”) between them (Katz, Lazer, Arrow, & Contractor, 2004, p. 308). Groups, according to Katz et al., are subsets of fully connected nodes, meaning that every member is connected to every other member in the group. Groups can thus be thought of as a particular kind of social network, and so this survey of existing work focuses primarily on the role of social networks. Experimental methods, in particular, stand to benefit from the growing availability of computational tools, and the sections that follow provide guidance on how researchers new to this area can design and run their own online experiments.

It is helpful to first review previous work on the influence of social context on political decision-making, covering how scholars have used surveys and other observational data. The next section then turns to the unique advantages that online tools add to this area of inquiry, focusing on how they can be leveraged to design novel experiments that investigate the role of social context in political decision-making. Overall, the Internet provides political scientists with an expansive, inexpensive, and effective means to learn about how social environments inform the decisions people make when it comes to politics.

Observations of Network Effects on Political Decision-Making

Political decisions are not made through introspection alone but rather, in the words of Huckfeldt and Sprague (1987), they are “imbedded within structured patterns of social interaction” (p. 1197). The way that political information flows through social networks, and the structure of these networks themselves, is paramount for understanding individuals’ political choices and actions. Conceptually, social networks consist of the linkages between an actor and his or her social contacts, between those contacts and their own, and so forth. As a result, as Katz et al. (2004) explain, the actors who make up a social network need not all know one another and may instead be connected through mutual connections or a path of connected individuals. Through these networks, both information and norms gradually flow (Bertrand, Luttmer, & Mullainathan, 2000). In order to understand how people form decisions and preferences, political scientists therefore may turn to the social networks in which people find themselves (Lazer, 2011). In decades past, scholars have primarily relied on observational data to carry out these interrogations. What follows is a collection of these studies. This is not a comprehensive review but rather provides select examples to illustrate how observational methods can be applied to this area of inquiry.

Surveys have proven to be a fruitful source of social network data. In the most straightforward of techniques, survey respondents can simply provide information about the specific individuals in their network. For example, Mutz (2002) employs a survey to uncover important findings regarding the influence of disagreement on political participation. In line with Berelson, Lazarsfeld, and McPhee’s (1954) findings about cross-pressures, Mutz analyzes survey respondents’ descriptions of their own networks and finds that perceived disagreement within a network reduces an individual’s political participation due to increased attitudinal ambivalence and the threat that political engagement poses to social harmony in politically acrimonious settings. Nir (2005) sets out to distinguish the effect of network-level cross-pressures (i.e., those that arise from one’s social contacts) from intraindividual cross-pressures (i.e., those that are due to internal conflict). To do so, she relies on the American National Election Studies, which similarly asks respondents to describe characteristics of the people with whom they discuss politics. She finds that network-level cross-pressures have no effect on political decision-making, while individual-level cross-pressures significantly depress political engagement.

In addition to studying the effects of social settings on political participation, scholars have also used surveys to understand how social networks influence many other political phenomena, including polarization, disagreement, and learning. For example, Mutz and Mondak (2006) examine six separate survey instruments, each of which asks respondents to describe key characteristics of their discussion partners. They find consistent evidence that people are most likely to be exposed to diverse political perspectives while at work and that this exposure promotes tolerance for opposing viewpoints. Sokhey and McClurg (2012) use two different data sets that similarly include respondents’ perceptions of their discussion partners. They find that people rely on their network discussants as information shortcuts to figure out which political candidates to support. In that sense, network disagreement actually facilitates learning about candidates and correct voting (i.e., voting for candidates that best reflect one’s true policy preferences). Klofstad, Sokhey, and McClurg (2013) analyze panel survey data in which respondents describe their discussion partners. They find that general disagreement in a social network has very different consequences than partisan disagreement. In particular, general disagreement seems to decrease political interest, whereas partisan disagreement has no effect on interest.

One weakness of survey-based approaches to gathering social network data is that they rely entirely on respondents’ own reported impressions of their networks. Empirical work suggests that people vary when it comes to their ability to accurately report on their own friends and family. For example, Huckfeldt, Beck, Dalton, and Levine (1995) find that people are less likely to accurately report their social network connections’ political views when they disagree with them. They also find that more than a third of respondents who disagree with a friend were unable to accurately report the friend’s voting preferences. Casciaro (1998) shows that people who occupy higher hierarchical positions in a network are less likely to accurately report on friendships within that network. Personality traits also shape a respondent’s ability to report on his or her social network: in Casciaro’s (1998) study, those who score high in “need for achievement” are better at doing so. Inconsistencies and bias in self-reported survey data can thus pose problems for researchers interested in network effects as they pertain to political decision-making.

In order to confront the limitations of self-report bias, researchers can expand their surveys to interview not only a respondent but also his or her connections. This type of snowball sampling (also called chain referral sampling) requires the researcher to ask a first wave of respondents for the contact information of their friends or acquaintances and then to interview this second wave. This process can go on for as many waves as the researcher would like in order to collect data on larger and more elaborate social networks. Snowball sampling has provided political scientists with important findings about social influence in political settings. For example, Huckfeldt and Sprague (1988) employed a snowball sampling method to show that voters are more likely to discuss politics with people who share their political preferences and that even those who do discuss politics with people with different preferences frequently misperceive those preferences in a manner that is biased toward agreement. McClurg (2003) uses these same data to demonstrate that social interaction within a network allows people to gather more political information than their own individual resources would otherwise allow. Intuitively, the effect of additional social interaction depends in large part on the amount of political discussion occurring in the network.

Snowball sampling thus overcomes the limitations of having to rely on individuals to accurately report their peers’ political views, but it comes with its own set of weaknesses. As Berg (2004) outlines, snowball sampling requires highly cooperative subjects and exceptionally clear questions regarding specific social relations. It is also logistically more difficult to apply this type of method to weaker ties (i.e., those that are more socially distant) and to any relations that are too great in number to include (e.g., social media connections). In particular, the sample obtained from snowballing is restricted to those whom the focal respondent names, and it is not clear whether this truly represents this person’s most influential social ties or even those with whom he or she regularly discusses the topic at hand. For example, Walsh’s (2004) detailed ethnographic work suggests that many of the political conversations in which people engage take place with the most casual acquaintances they run into. Wojcieszak and Mutz (2009) show this is true in online spaces as well—political discussions arise in chat rooms, for example, that are not set up for political discourse. It is thus difficult for individuals to accurately identify—and to provide contact information for—a true sampling of their most influential political discussants.

In addition to excluding potentially influential social ties, snowball sampling can be sensitive to question wording, as respondents tend to identify vastly different networks depending on what they are asked to report. “Name generator” studies refer to those in which respondents are instructed to report discussion partners by name, either by describing them in a survey or providing their contact information in a snowball sampling approach. Sokhey and Djupe (2014) compare distinct instructions that researchers might give to their respondents: name discussants from “over the last two weeks,” with whom the respondent disagrees, with whom they have talked about political candidates, and—most commonly—with whom they have talked about government, elections, and politics. The authors demonstrate that the wording of the task can indeed influence the types of networks that respondents report. For example, simply asking people to name political discussants appears to discourage them from reporting their disagreeable discussants, which can distort conclusions drawn about the effect of exposure to disagreement in one’s day-to-day life. Finally, as Berg (2004) argues, it can be difficult to implement a snowball sampling method beyond two waves due to inevitable non-response. This implies that the social networks obtained through snowball sampling tend to be relatively small.

A major advance in the study of social networks that can help to facilitate data collection about larger networks is the wide proliferation of social media platforms like Facebook and Twitter. By using data obtained from these large social media platforms, scholars are increasingly able to analyze the proliferation of information through social networks with the intended goal of understanding how it shapes political decisions. For example, Shmargad (2018) applies the snowball sampling method to the social media platform Twitter, starting with the accounts of political candidates who ran in the 2016 U.S. congressional races. Shmargad does not survey each of the 406 candidates in his sample but rather observes the proliferation of candidates’ messages via retweeting—the practice of reposting another Twitter user’s message. Using Twitter’s free and publicly available access point, also known as their application programming interface (API), he was able to track how frequently candidates’ messages were retweeted and, importantly, who retweeted them. He ultimately demonstrates that candidates who face financial disadvantages with respect to their opponent receive a higher percentage of votes when their messages are shared by influential Twitter users (i.e., those whose messages are highly retweeted).

Shmargad (2018) also finds that retweets from influential Twitter users do not appear to increase the vote percentages of richer candidates. This would seem to imply that Twitter can play an equalizing role in U.S. politics, since the platform disproportionately advantages financially weaker candidates. Shmargad and Zhang (2018) tackle this question head-on, investigating the role that Twitter plays at the level of the congressional race. Overall, across the 135 congressional races they analyze, financial equality between candidates was associated with equality at the voting booth. That is, when candidates in a race are closer in terms of the money they spend, they are also closer in the percentages of votes they receive. However, the authors also find there is substantial variation in the outcomes of financially asymmetric races: even in the most asymmetric races, the gap in vote percentages between richer and poorer candidates can be as low as 5% or can exceed 50%. In order to explain this variation, the authors look at a host of Twitter metrics of the competing candidates, including metrics about their accounts, the accounts of users who shared candidate messages, and characteristics of the social network of these users. They find that a single metric can help to predict whether a financially asymmetric race will be close: the average number of retweets that the poorer candidate’s retweeters receive. Moreover, poorer candidates who score high on this metric do better than their party’s candidate did in that district (or, in the case of a Senate race, that state) in the previous election year. Shmargad and Zhang thus provide strong evidence that influential Twitter users can help poorer candidates at the voting booth.

Social media platforms, and their respective APIs, are an invaluable tool for political scientists interested in studying the effects of social settings. However, while observational data allow us to know who in the network knows whom, they cannot distinguish homophily from contagion: That is to say, are individuals who see the same thing already alike in other consequential ways? Or is it the mutual exposure that leads their opinions to converge? As Klar and Shmargad (2017) point out, even when both individuals’ opinions and the information to which they have been exposed are discernible, there are myriad confounds that could be responsible both for the information that one sees and for the attitudes of the information-seer. To draw causal inferences about how social network characteristics shape political decisions, experimental methods are uniquely effective. The next sections consider the benefits of experimental design for understanding social context and the advantages that online tools provide in this space, which are particularly valuable for scholars in this area.

The Logic and Logistics of Experimental Methods

Experimental methods allow researchers to manipulate only the independent variable of interest, or treatment, in order to isolate its influence on a particular dependent variable of interest, or outcome. Critical to this process is that the treatment is randomly assigned within a single population, thereby ensuring that—probabilistically—the subsample of treated units is identical to the subsample of untreated units except for the treatment itself. This ensures that any differences in outcome can be attributed exclusively to the treatment.
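
To make this logic concrete, the following is a minimal sketch in R, using simulated data rather than any study discussed here, of random assignment followed by a difference-in-means estimate of the treatment effect:

```r
# Minimal sketch of random assignment and effect estimation (hypothetical data)
set.seed(42)

n <- 500                                   # number of subjects
subjects <- data.frame(id = 1:n)

# Random assignment: each subject has an equal chance of treatment or control
subjects$treated <- sample(c(0, 1), size = n, replace = TRUE)

# Simulated outcome: random noise plus a true treatment effect of 0.5
subjects$outcome <- rnorm(n) + 0.5 * subjects$treated

# Because assignment is random, the simple difference in means is an unbiased
# estimate of the average treatment effect
mean(subjects$outcome[subjects$treated == 1]) -
  mean(subjects$outcome[subjects$treated == 0])
```

Because the treated and untreated groups differ only by chance before treatment, the difference in mean outcomes recovers the treatment's causal effect without further adjustment.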

Experimental studies of social context might focus on a variety of possible treatments: the type of information that spreads through the network, traits of the people who compose it, or the structure of the social network itself (to raise just a few possibilities). In the field—that is, outside of a laboratory or survey setting—some scholars have successfully manipulated the type of information that spreads through a social network to observe its causal effects. For example, Nickerson (2008) conducted an impressive field experiment in Denver, Colorado, and Minneapolis, Minnesota. Paid canvassers delivered 486 door-to-door messages to households occupied by two residents during the weekend prior to the 2002 primary election. One resident in each household randomly received either a message encouraging him or her to vote (the “treatment condition”) or a message encouraging him or her to recycle (the “control condition”). Following the election, the researcher examined turnout rates among the residents who did not receive the message. Indeed, he found that turnout increased for residents whose cohabitant was encouraged to vote, suggesting that the decision to participate in politics is indeed subject to network effects. Sinclair (2012) administered a similar design, in which she sent get-out-the-vote postcards to a random selection of voters and then checked whether their housemates were more likely to vote. She found weak evidence that, for habitual voters who received this added push, housemates who seldom vote became more likely to participate.

Manipulating characteristics of the social network itself—that is, traits of the network participants or the structure of how they are connected to one another—is very difficult in face-to-face settings. To be clear, researchers can quite easily manipulate the average composition of groups—for example, Klar (2014) randomly assigns subjects to discuss politics with a group of fellow partisans, a group of ideologically diverse partisans, or no group at all. She finds that the composition of the group itself influences the ideological extremity of the subjects’ political viewpoints. This demonstrates the influence that an individual’s broad social setting can play in his or her decision-making but cannot provide any insight with respect to the flow of information through one’s network or whether the structure of that network plays a role. Furthermore, group studies assume that all members of a network receive the same information at the same point in time, which is not necessarily a realistic depiction of how political information makes its way through a social group. Group studies offer important conclusions regarding peer pressure and social cues, but only network studies can reveal how information travels from one person to another and with what effect.

Online Experiments of Social Settings

In an article written for the journal Experimental Psychology, underneath a section aptly titled “Why Experimenters Relish Internet-Based Experimenting,” Reips (2002) succinctly gives his answer: “speed, low cost, external validity, experimenting around the clock, a high degree of automation of the experiment (low maintenance, limited experimenter effects), and a wider sample” (p. 244). Indeed, this applies to the emerging world of possibilities that online experimentation provides for the study of social networks in political decision-making.

The two greatest logistical obstacles to social network experiments are arguably (a) tracking information flow over time and (b) manipulating the network itself (i.e., its structure or composition). The former is desirable because scholars might wish to observe the information to which individuals in the network experiment are exposed and to whom the information subsequently travels, how often, and to what effect. Before the Internet, one could envision an elaborate study using postal mail (as Milgram did in 1967), in which people are asked to mail letters to friends and acquaintances. However, realistically, the burden on the respondent makes this almost impossible today, not to mention that such a design comes with very little control for the experimenter.

Online networks, however, accommodate the experimenter’s need to control the information that spreads, the experimenter’s need to carefully track precisely when and by whom information is received, as well as the respondent’s desire for minimal burden of participation. In what is surely one of the most cited online experiments in political science, Bond et al. (2012) worked with researchers at Facebook to see how messages flow through the online social network to affect decisions to vote. On election day, a sample of approximately 60 million Facebook users randomly received a message in their newsfeed that encouraged them to share that they had voted and to view images of a selection of their friends who had taken the opportunity to post that they had voted as well. A smaller randomly selected set of users received the message without any images of their friends, while another small control group received no message at all. The researchers then examined voting records using the Facebook members’ names and found that the “social” message—the one featuring images of users’ friends—encouraged turnout among users who received them and even increased turnout among close friends of those who had received them. They conclude that the decision to vote effectively spread through the social network as a result of the social reminders. This online experiment thus allowed researchers to track both which users shared, and which users received, information about voting behavior.

When the goal is to manipulate characteristics of the social network itself, scholars encounter additional logistical hurdles. Even in the rare cases when researchers are able to work directly with a company like Facebook, it is typically not possible to manipulate pre-existing networks. It is unlikely that researchers can get permission to, for example, randomly assign individuals to “defriend” their existing contacts and connect to others. Klar and Shmargad (2017) got around this limitation by simply creating their own online social network and inviting research subjects to participate in it. These authors sought to investigate how the structure of a social network (i.e., the patterns of connectivity within it) might influence exposure to minority viewpoints and, ultimately, the influence that this has on political attitudes. The authors argue that policy debates usually consist of (at least) two opposing viewpoints and, moreover, that one perspective often dominates within a given social network. The authors theorize that certain network structures—in particular, those that allow information to transfer between different regions—might better allow for minority viewpoints to proliferate. Testing this theory is best done through an experiment: observational data would make it difficult for the authors to infer causality, since there are likely unobserved factors that are correlated with structural network features and also influence how information spreads.

As stated earlier, however, in practice it is not usually feasible to experimentally manipulate existing social networks, nor is it straightforward to observe who shares and receives information in a lab—especially over an extended period of time. An online study allows for both of these requirements to be met. Redlawsk and Pierce (2018; also see Pierce et al., 2016) make use of an online experimental platform (Anderson et al., 2019) that presents candidate information over time and allows experimenters to manipulate social responses to that information in the form of “social cues” (“likes,” “dislikes,” “shares”), although they do not manipulate networks themselves. Using this system, they are able to identify how first impressions of candidates influence evaluations and voting and how responses to candidate information may influence those impressions (Redlawsk & Pierce, 2018). In another study (Pierce et al., 2016), they examine how these social cues function as heuristics: voters use the reactions of other, usually unknown, voters in part to determine their own responses to candidates.

Klar and Shmargad (2017) directly manipulate networks, recruiting a representative sample of adult participants to take part in an online social network called Political Pulse. The authors send these participants an initial survey, which asks for their consent to participate in the experiment. Participants who provide consent are then asked several demographic questions and instructed to complete a social media profile, which consists of several questions about their political preferences. These participants are then assigned to one of two social network conditions that vary in structure, or a control (i.e., no network) group. In each of the social network conditions, information about environmental issues is emailed to a set of participants, and the information is allowed to flow within the networks for a period of eight days. Some of the information is sent to one participant in each network, while other information is sent to several participants in each network. This design allows the authors to investigate how the different social network conditions shape the inequality in spread between messages that are sent to one versus many participants.

The next sections describe, in detail, how this social network experiment was carried out, focusing on how the online social network Political Pulse was created, how the different social network conditions were operationalized, and how the authors were able to get information to spread through the social networks as well as keep track of which participants were exposed to what information and when.

Creating the Online Social Network

In order to simulate the experience of participating in an online social network, Shmargad created a website called Political Pulse using WordPress, a free website-creation platform (Klar & Shmargad, 2017). This website did not function like Facebook or Twitter, or any other popular social media platform, which would have required substantial coding experience and months of development work to create. Instead, the website consisted of a set of web pages, each containing a single piece of “content”—either text from a news article or an embedded video from YouTube with a transcript of the words contained in the video. Each web page was identified using a unique eight-digit number, which was included in the web page’s URL. To make Political Pulse seem more like an online social network, the right-hand side of each web page contained a list of six names that research participants were told were their social network connections. Participants could click on each of these names, which would direct them to another web page that included a fabricated biography of the connection (including fictional answers to the same questions that participants answered when creating their social media profiles in the initial survey). These names did not actually represent any of the participants and in fact were the same for each person who visited the web page. Rather, the names only served to make participants feel like they were part of an online social network; their actual network connections were operationalized only virtually, as detailed in the following section.

Operationalizing the Social Network Conditions

The social network conditions used in Klar and Shmargad (2017) were adapted from Centola’s (2010) social network experiments on the spread of health behaviors. The first social network condition, called a clustered lattice, consists of highly connected clusters of nodes that do not allow for information to readily transfer between different network regions. The second social network condition, called a random network, assigns links between nodes randomly, with two restrictions: (a) every node in the random network has exactly six connections to other nodes, the same number of connections as in the clustered lattice, and (b) no pair of nodes can share more than one connection, so that each node is connected to six unique nodes. To obtain the random network, connections in the clustered lattice were randomly rewired using an algorithm developed by Maslov and Sneppen (2002). This algorithm was implemented with the statistical software R, and the code used is available at the Harvard Dataverse. Each of the two network conditions consisted of exactly 144 nodes. The resulting 288 nodes were each identified by a unique eight-digit number. Consenting participants were then randomly assigned to one of these nodes, with a unique identification number, or to a separate non-network control condition.
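
As a rough illustration of how such conditions can be generated, the sketch below uses the igraph package and a simple ring lattice as a stand-in for the clustered lattice; it is not the authors' published code, which is available at the Harvard Dataverse:

```r
# Sketch of the two network conditions (illustrative only; not the authors' code)
library(igraph)
set.seed(1)

# "Clustered lattice" stand-in: a one-dimensional ring lattice in which each of
# the 144 nodes is tied to its three nearest neighbors on either side (degree 6)
lattice <- sample_smallworld(dim = 1, size = 144, nei = 3, p = 0)

# Random network: rewire the lattice while preserving every node's degree,
# in the spirit of Maslov and Sneppen (2002), breaking up local clustering
random_net <- rewire(lattice, with = keeping_degseq(loops = FALSE, niter = 10000))

# Both conditions keep exactly six connections per node, but clustering
# (and thus the ease of transfer between network regions) differs
all(degree(lattice) == 6)
all(degree(random_net) == 6)
transitivity(lattice)
transitivity(random_net)
```

The key design feature is that the two conditions hold each node's number of connections constant, so any differences in how information spreads can be attributed to network structure rather than to how well connected individual participants are.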

Getting Information to Spread

In order to get information to spread through the different social network conditions, a small number of participants in each condition were chosen to be “seed” nodes. On the first day of the study, these participants were sent an email containing a link to one of the various Political Pulse web pages. If participants clicked on the link they were sent, their social network connections were emailed the link on the following day. This procedure continued for a period of eight days, with participants receiving an email if at least one of their social network connections had clicked one of the links on the previous day. In order to avoid spamming participants, a maximum of one email was sent per day; this email summarized which social network connections visited which piece of content. The names of the social network connections featured in the email were taken from the list of names that appeared on the content web pages. The researchers manually sent out emails each day, which involved three distinct steps: (a) obtaining a list of the participants (i.e., node IDs) who visited each link on the previous day; (b) merging this list with a list of social network connections, in order to identify which participants would receive an email on the current day; and (c) emailing those participants a summary of which web pages their connections viewed on the previous day.

The first step was performed using Google Analytics, a website tracking and analytics tool. The second step was performed with the statistical software Stata. Once a list of node-content pairs was obtained from Google Analytics, each pair including both the eight-digit node ID of the participant and the eight-digit content ID of the web page he or she visited, this list was merged, in Stata, with a list of the social network connections. The end result was a list of node-content pairs, each including the node ID of a participant and the content ID pertaining to the link that would be featured in the email he or she would receive. The final step, emailing participants, was performed with Microsoft Excel using a “mail merge.” A mail merge is a way of sending multiple emails at once, each containing information specific to the receiver of the email. Each email included a list of links that a participant’s social network connections visited on the previous day. Beside each link was a list of names of the social network connections who had viewed each web page. Again, to preserve participant confidentiality, no real names were used. Rather, the participant’s social network connections were labeled with the same six names that were featured on the right-hand side of the content web pages they would visit. By directing participants to web pages, the researchers were able to track exactly which participant viewed which piece of content and when.
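
A rough sketch of this daily routine is shown below, with R substituted for the Stata and Excel steps purely for illustration; the file names and column names are hypothetical:

```r
# Sketch of the daily update (illustrative only; the original study used
# Google Analytics exports, Stata, and an Excel mail merge)

# (a) Yesterday's visits, e.g., exported from Google Analytics as a CSV:
#     one row per visit, with the visitor's node ID and the content ID visited
visits <- read.csv("visits_yesterday.csv")   # columns: node_id, content_id

# (b) The network's edge list: which node is connected to which neighbor
edges <- read.csv("network_edges.csv")       # columns: node_id, neighbor_id

# Every neighbor of yesterday's visitors should be notified today about the
# content their connections viewed
to_notify <- unique(merge(visits, edges, by = "node_id")[, c("neighbor_id", "content_id")])

# (c) Build one email body per recipient (at most one email per day),
#     listing the links his or her connections visited
bodies <- aggregate(
  content_id ~ neighbor_id,
  data = to_notify,
  FUN = function(ids) paste0("http://www.PoliticalPulse.com/", ids, collapse = "\n")
)
names(bodies) <- c("node_id", "email_body")

# 'bodies' can then feed a mail merge; in the actual study each link also
# carried the recipient's own node ID for tracking (see the URL structure below)
head(bodies)
```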

Tracking Exposure to Information

The URL links that were emailed to research participants had the following structure: http://www.PoliticalPulse.com/XXXXXXXX?id=YYYYYYYY. The first string of Xs represents the unique eight-digit content ID, which is specific to the text or video featured on the web page. The second string of Ys represents the unique eight-digit node ID, which is specific to the participant viewing the web page. The mail merge helped to ensure that each link emailed to a participant included his or her unique node ID. These two ID numbers, specific to the content and participant, respectively, appear in functionally different parts of the URL, separated by a question mark. Everything before the question mark is actually used to load the web page: in particular, the eight-digit content ID ensures that the web page with the appropriate content is loaded. Text featured after the question mark, on the other hand, does not change how the web page is loaded but rather is used solely for tracking purposes. In this case, the eight-digit node ID ensures that the researchers are able to know which research participant was visiting a particular web page. Google Analytics easily integrates with WordPress and was used by the researchers to track which of the links were visited throughout the period of study. Using Google Analytics as well as the specific URL structure described here, the researchers were able to know which participant visited which web page and when.
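
For illustration, the following short R sketch shows how such tracking links could be assembled and later parsed back into their content and node IDs; the eight-digit numbers are hypothetical:

```r
# Sketch of building and parsing Political Pulse-style tracking links
# (the IDs below are made up for illustration)

build_link <- function(content_id, node_id) {
  # Everything before the "?" determines which page loads; the "id" query
  # parameter is ignored by the page itself and used only for tracking
  sprintf("http://www.PoliticalPulse.com/%s?id=%s", content_id, node_id)
}

parse_link <- function(url) {
  # Split the URL at the "?" and recover the two eight-digit identifiers
  parts <- strsplit(url, "?", fixed = TRUE)[[1]]
  c(content_id = basename(parts[1]),        # text after the last "/"
    node_id    = sub("^id=", "", parts[2]))
}

link <- build_link("12345678", "87654321")
link             # "http://www.PoliticalPulse.com/12345678?id=87654321"
parse_link(link)
```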

Measuring Political Decision-Making

Online experiments are effective for investigating the influence of social context both on decisions to engage in politically relevant behaviors (e.g., Bond et al., 2012) and on attitudes about policies themselves (Klar & Shmargad, 2017). Indeed, online experiments provide many ways of measuring both political behaviors and attitudes. The latter are straightforwardly measured with surveys administered throughout a study or once it is completed. For example, Klar and Shmargad simply ask their research participants about their policy preferences using a standard survey administered to all of those who completed the study.

Of course, while decisions to engage with politics can also be measured through survey data (e.g., by asking participants “How likely are you to vote?”), behavioral measures—which require participants to take concrete, somewhat costly actions—can be more convincing. As such, behavioral measures tend to be better at revealing what people are actually likely to do, as studies comparing self-reported voting behavior to actual voting find (e.g., Granberg & Holmberg, 1991; Katosh & Traugott, 1981). Although people participating in online experiments may not be within the researcher’s eyesight, there are nevertheless numerous available techniques to observe participants’ decisions to act and engage.

Visiting Websites

As already discussed, Klar and Shmargad (2017) are able to know exactly who visits a particular web page by providing each research participant with a unique URL link. Using Google Analytics, the researchers can then track visits to each web page to find out which participant visited it, when, how often, and for how long.

Sharing Articles

Shmargad and Klar (2018) randomly assign research participants to one of three different online groups: one composed primarily of co-partisans, one composed primarily of opposing partisans, and one composed primarily of independents. After giving participants a chance to familiarize themselves with their group, the researchers provided them with a list of recent news articles and asked them to share one with their group. Each article had been chosen from the website AllSides.com, which rates articles on a 5-point scale with 1 being “very liberal,” 3 being “moderate,” and 5 being “very conservative.” The researchers were therefore able to observe how partisan information about an online audience shapes the extent to which the articles that research participants share are ideologically extreme or moderate.
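
One simple way to score such an outcome, sketched here with made-up data rather than the authors' own analysis, is to convert each shared article's slant rating into a measure of ideological extremity and compare it across audience conditions:

```r
# Sketch of scoring shared articles on an AllSides-style 1-5 slant scale
# (hypothetical data: 1 = very liberal, 3 = moderate, 5 = very conservative)
shares <- data.frame(
  participant = c("p1", "p2", "p3", "p4", "p5", "p6"),
  audience    = c("co-partisan", "co-partisan", "opposing",
                  "opposing", "independent", "independent"),
  slant       = c(1, 5, 3, 4, 2, 3)
)

# Extremity is the distance of the shared article from the moderate midpoint
shares$extremity <- abs(shares$slant - 3)

# Average extremity of shared articles by audience condition
aggregate(extremity ~ audience, data = shares, FUN = mean)
```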

Sharing Personal Information

Klar and Krupnikov (2016) expect that exposure to political disagreement can discourage people from sharing their own partisan views on social media. They invite research subjects to participate in a mock online social network and provide each subject with a sample news article that, as the researchers explain, could be a topic of discussion in the network. A random third of the subjects receive an article about partisan disagreement, a random third receive an article about partisan unity, and a random third receive a nonpolitical article. Participants then decide whether they would like to join the online social network and are asked to complete an online profile to post on the network. The authors find that exposure to partisan disagreement discourages individuals from participating in the social network and that those who chose to participate are discouraged from posting political information about themselves (e.g., their party identification and the candidate they supported in the most recent election).

Limitations and Concerns Regarding Online Experiments

This article presents online experiments as a tool for experimental researchers and highlights the new frontiers of this medium: studies of social interaction. Online experiments can be used to investigate some of the most relevant and timely questions regarding the influence of social networks on political decision-making. Online experiments, however, do pose particular concerns.

One perpetual issue is the suitability of Internet-based samples, both with respect to the mode they employ (online as opposed to face-to-face) and with respect to the frequent use of opt-in panels (as opposed to probability-based sampling). Scholars largely agree that online panels are useful subject pools for experimental studies. With respect to the mode, Clifford and Jerit (2014) find few differences between participants’ attentiveness and sensitivity to socially desirable questions online versus in the laboratory. Online subjects do report being more distracted and appear to use outside resources to help with things like knowledge tests, whereas laboratory subjects do not have that luxury. The authors nevertheless conclude that online studies are suitable for experiments. The fact that survey respondents attend closely to the treatments they receive, however, does come with some implications: Barabas and Jerit (2010), who compared a survey experiment with a similar natural experiment in the real world, find that experimental effects in the former are magnified relative to the latter.

The second concern that often arises with online experiments has to do with the nature of the sample itself. A nontrivial percentage of online experiments employ opt-in samples, as opposed to probability-based samples of the population. The debate over the drawbacks of various samples is beyond the scope of this particular article, but there is an abundance of useful work assessing the issue. Druckman and Kam (2011) argue that convenience samples—whether on- or off-line—do not inherently pose a problem for experiments, so long as the covariates of interest vary within the subject pool.

Others have examined specific online samples more directly, with a particular focus on Amazon’s online labor pool, Mechanical Turk. Berinsky, Huber, and Lenz (2012) find that Mechanical Turk samples are less representative of the national population than are probability-based samples or Internet panels but are more representative than in-person convenience samples. Again, as Druckman and Kam (2011) explain, this lack of representativeness might not pose a problem at all for experimental designs, so long as the Mechanical Turk respondents vary along dimensions that pertain to the study at hand. Consistent with this, Mullinix, Leeper, Druckman, and Freese (2015) find that online convenience samples produce experimental findings largely similar to those from nationally representative population-based samples. Cassese, Huddy, Hartman, Mason, and Weber (2013) similarly find that socially mediated Internet surveys—that is, convenience samples gathered by recruiting subjects through online social networks—can produce subject pools with sufficient diversity for many experimental designs. One note of caution regarding Mechanical Turk—and, perhaps, any pool of professional survey-takers—is that its members do appear to be more “savvy” with respect to the surveys themselves (Krupnikov & Levine, 2014), raising concerns about heightened demand effects (but see Mummolo & Peterson, 2019, who argue that demand effects are likely overstated).

Scholars in fields other than political science have similarly grappled with the question of how well Internet-based settings suit experimental designs, particularly those that are traditionally administered off-line. Arechar, Gächter, and Molleman (2018) find that online subjects are often harder to recruit for long-term studies due to higher rates of attrition, but, nevertheless, online experimental studies on human interaction—specifically, cooperation and punishment—successfully replicate what has been previously found in the lab. This review of the literature suggests that, indeed, online experiments are not only an efficient and effective medium for survey-based experimental studies but also provide an accessible platform for scholars interested in social influence and interaction.

Areas for Future Work

There exist many areas where online experiments can provide important insights. First, the spread of rumors, misinformation, and inaccurate news reports on social media is a growing topic of scholarly concern, given the influence that “fake news” may have had on voters’ decisions to engage (or not engage) and whom to support (or not support) in the 2016 U.S. presidential election (Allcott & Gentzkow, 2017). In a forum published in the journal Science, Lazer et al. (2018) call for scholarly inquiry into fake news, specifically with respect to its prevalence, its impact, and the types of interventions that could prevent it. Online experimental techniques can help to answer these questions. Simulated social networks, for example, allow researchers to observe the proliferation of information through a social network and, more importantly, the circumstances that can cause people to believe a particular piece of news or share it. Online experiments can facilitate random assignment within every component of the social network: its structure, the messages that travel in it, and features of the network’s participants themselves.

Online experiments are also a useful tool for studying the effects of incivility on the decisions people make. Although partisan disagreement has always been an important concern among political scientists, political incivility has taken on renewed relevance in an increasingly polarized society. Online experiments can allow researchers to take a closer look into how incivility spreads through social networks and the effects it has on people’s willingness to participate in political discussions. Klar and Krupnikov (2016), for example, argue that partisan disagreement can demobilize voters who are averse to incivility and bickering. A growing body of work is investigating the effects of political incivility (e.g., Mutz & Reeves, 2005; Sobieraj & Berry, 2011). With an online network experiment, researchers could manipulate the tone of messages spreading through a network to observe how varying rhetoric might influence participants’ decisions to engage or withdraw. Scholars could also investigate how varying levels of incivility might affect users’ willingness to share messages with their connections and their subsequent evaluations of relevant policies.

Online experimental researchers can also substantially broaden the types of political decisions that they typically measure. Google Analytics, for example, allows researchers to know whether and when subjects choose to visit specific websites. By assigning subjects to experimental social network conditions, researchers can test the types of treatments that cause subjects to decide whether and how to comment on articles, whether to participate in political conversations, whether to connect or disconnect from particular contacts, and so on. Scholars should consider online experiments as a tool for understanding important issues about how malleable and constantly changing social environments influence people’s political behaviors and attitudes. Creative research designs will be the main drivers of influential research on the political implications of sociality, and experimental methods stand to benefit greatly from the increasing prevalence of online tools.

References

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236.Find this resource:

Anderson, D. J., Redlawsk, D. P., & Lau, R. R. (2019). The Dynamic Process Tracing Environment (DPTE) as a tool for studying political communication. Political Communication, 36, 303–314.Find this resource:

Arechar, A. A., Gächter, S., & Molleman, L. (2018). Conducting interactive experiments online. Experimental Economics, 21(1), 99–131.Find this resource:

Barabas, J., & Jerit, J. (2010). Are survey experiments externally valid? American Political Science Review, 104(2), 226–242.Find this resource:

Berelson, B. R., Lazarsfeld, P. F., & McPhee, W. N. (1954). Voting: A study of opinion formation in a presidential campaign. Chicago, IL: University of Chicago Press.Find this resource:

Berg, S. (2004). Snowball sampling—I. In S. Kotz, C. B. Read, N. Balakrishnan, B. Vidakovic, & N. L. Johnson (Eds.), The encyclopedia of statistical science. New York, NY: John Wiley.Find this resource:

Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Evaluating online labor markets for experimental research: Amazon.com’s Mechanical Turk. Political Analysis, 20(3), 351–368.Find this resource:

Bertrand, M., Luttmer, E. F. P., & Mullainathan, S. (2000). Network effects and welfare cultures. The Quarterly Journal of Economics, 115(3), 1019–1055.Find this resource:

Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Marlow, C., Settle, J. E., & Fowler, J. H. (2012). A 61-million-person experiment in social influence and political mobilization. Nature, 489(7415), 295–298.Find this resource:

Burden, B., & Klofstad C. (2005). Affect and cognition in party identification. Political Psychology, 26(6), 869–886.Find this resource:

Burt, C. (1963). The use of electronic computers in psychological research. The British Journal of Statistical Psychology, 16(1), 118–125.Find this resource:

Casciaro, T. (1998). Seeing things clearly: Social structure, personality, and accuracy in social network perception. Social Networks, 20(1), 331–351.Find this resource:

Cassese, E. C., Huddy, L., Hartman, T. K., Mason, L., & Weber, C. (2013). Socially mediated Internet surveys: Recruiting participants for online experiments. PS: Political Science & Politics, 46(4), 775–784.Find this resource:

Centola, D. (2010). The spread of behavior in an online social network experiment. Science, 329(5996), 1194–1197.Find this resource:

Clifford, S., & Jerit, J. (2014). Is there a cost to convenience? An experimental comparison of data quality in laboratory and online studies. Journal of Experimental Political Science, 1(2), 120–131.Find this resource:

Couper, M. P., Conrad, F. G., & Tourangeau, R. (2007). Visual context effects in web surveys. Public Opinion Quarterly, 71(4), 623–634.Find this resource:

Druckman, J. D., Green, D. P., Kuklinski, J. H., & Lupia, A. (2006). The growth and development of experimental research in political science. American Political Science Review, 100(4), 627–635.Find this resource:

Druckman, J. D., & Kam, C. D. (2011). Students as experimental participants: A defense of the “narrow data base.” In J. D. Druckman, D. P. Green, J. H. Kuklinski, & A. Lupia (Eds.), Cambridge handbook of experimental political science (pp. 41–57). New York, NY: Cambridge University Press.Find this resource:

Dunaway, J. L., Searles, K., Sui, M., & Paul, N. (2018). News attention in a mobile era. Journal of Computer-Mediated Communication, 23(2), 107–124.Find this resource:

Eldersveld, S. (1956). Experimental propaganda techniques and voting behavior. American Political Science Review, 50(1956), 154–165.Find this resource:

Franco, A., Malhotra, N., & Simonovits, G. (2015). Underreporting in political science survey experiments: Comparing questionnaires to published results. Political Analysis, 23(2), 306–312.

Granberg, D., & Holmberg, S. (1991). Self-reported turnout and voter validation. American Journal of Political Science, 35(2), 448–459.

Huckfeldt, R., Beck, P. A., Dalton, R. J., & Levine, J. (1995). Political environments, cohesive social groups, and the communication of public opinion. American Journal of Political Science, 39(4), 1025–1054.

Huckfeldt, R., & Sprague, J. (1987). Networks in context: The social flow of political information. The American Political Science Review, 81(4), 1197–1216.

Huckfeldt, R., & Sprague, J. (1988). Choice, social structure, and political information: The informational coercion of minorities. American Journal of Political Science, 32(2), 467–482.

Iyengar, S. (2011). Laboratory experiments in political science. In J. N. Druckman, D. P. Green, J. H. Kuklinski, & A. Lupia (Eds.), Cambridge handbook of experimental political science (pp. 73–88). New York, NY: Cambridge University Press.

Katosh, J. P., & Traugott, M. W. (1981). The consequences of validated and self-reported voting measures. Public Opinion Quarterly, 45(4), 519–535.

Katz, N., Lazer, D., Arrow, H., & Contractor, N. (2004). Network theory and small groups. Small Group Research, 35(5), 307–332.

Klar, S. (2014). Partisanship in a social setting. American Journal of Political Science, 58(3), 687–704.

Klar, S., & Krupnikov, Y. (2016). Independent politics: How American disdain for parties leads to political inaction. New York, NY: Cambridge University Press.

Klar, S., & Shmargad, Y. (2017). The effect of network structure on preference formation. The Journal of Politics, 79(2), 717–721.

Klofstad, C. A., Sokhey, A. E., & McClurg, S. D. (2013). Disagreeing about disagreement: How conflict in social networks affects political behavior. American Journal of Political Science, 57(1), 120–134.

Krupnikov, Y., & Levine, A. S. (2014). Cross-sample comparisons and external validity. Journal of Experimental Political Science, 1(1), 59–80.

Kuo, A., Malhotra, N., & Mo, C. H. (2017). Social exclusion and political identity: The case of Asian American partisanship. Journal of Politics, 79(1), 17–32.

Lazarsfeld, P. F., Berelson, B., & Gaudet, H. (1944). The people’s choice. Oxford, U.K.: Duell, Sloan & Pearce.

Lazer, D. (2011). Networks in political science: Back to the future. PS: Political Science & Politics, 44(1), 61–68.

Lazer, D., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., . . . Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096.

Maslov, S., & Sneppen, K. (2002). Specificity and stability in topology of protein networks. Science, 296(5569), 910–913.

McClurg, S. D. (2003). Social networks and political participation: The role of social interaction in explaining political participation. Political Research Quarterly, 56(4), 448–465.

Milgram, S. (1967). The small-world problem. Psychology Today, 1(1), 61–67.

Mullinix, K. J., Leeper, T. J., Druckman, J. N., & Freese, J. (2015). The generalizability of survey experiments. Journal of Experimental Political Science, 2(2), 109–138.

Mummolo, J., & Peterson, E. (2019). Demand effects in survey experiments: An empirical assessment. American Political Science Review, 113(2), 517–529.

Mutz, D. C. (2002). The consequences of cross-cutting networks for political participation. American Journal of Political Science, 46(4), 838–855.

Mutz, D. C., & Mondak, J. (2006). The workplace as a context for cross-cutting political discourse. The Journal of Politics, 68(1), 140–155.

Mutz, D. C., & Reeves, B. (2005). The new videomalaise: Effects of televised incivility on political trust. American Political Science Review, 99(1), 1–15.

Nicholson, S. P. (2012). Polarizing cues. American Journal of Political Science, 56(1), 52–66.

Nickerson, D. (2008). Is voting contagious? Evidence from two field experiments. American Political Science Review, 102(1), 49–57.

Nir, L. (2005). Ambivalent social networks and their consequences for participation. International Journal of Public Opinion Research, 17(4), 422–442.

Pierce, D. P., Redlawsk, D. P., & Cohen, W. W. (2016). Social influences on online political information search and evaluation. Political Behavior, 39(3), 651–673.

Radford, J., Pilny, A., Reichelmann, A., Keegan, B., Welles, B. F., Joye, J., . . . Lazer, D. (2016). Volunteer science: An online laboratory for experiments in social psychology. Social Psychology Quarterly, 79(4), 376–396.

Redlawsk, D. (2002). Hot cognition or cool consideration? Testing the effects of motivated reasoning on political decision making. Journal of Politics, 64(4), 1021–1044.

Redlawsk, D. P., & Pierce, D. P. (2018). The effects of first impressions on subsequent information search and evaluation. In H. Lavine & C. S. Taber (Eds.), The feeling, thinking citizen: Essays in honor of Milton Lodge (pp. 151–170). New York, NY: Routledge.

Reips, U.-D. (2002). Standards for Internet-based experimenting. Experimental Psychology, 49(4), 243–256.

Robison, J. (2015). Who knows? Question format and political knowledge. International Journal of Public Opinion Research, 27(1), 1–21.

Shmargad, Y. (2018). Twitter influencers in the 2016 U.S. Congressional races. Journal of Political Marketing.

Shmargad, Y., & Klar, S. (2018). How partisan online environments shape encounters with political outgroups. Unpublished manuscript.

Shmargad, Y., & Zhang, L. (2018). Social media and the rise of dark horse candidates. Unpublished manuscript.

Sinclair, B. (2012). The social citizen: Peer networks and political behavior. Chicago, IL: University of Chicago Press.

Sniderman, P. M. (2011). The logic and design of the survey experiment. In J. N. Druckman, D. P. Green, J. H. Kuklinski, & A. Lupia (Eds.), Cambridge handbook of experimental political science (pp. 102–114). New York, NY: Cambridge University Press.

Sobieraj, S., & Berry, J. (2011). From incivility to outrage: Political discourse in blogs, radio, and cable news. Political Communication, 28(1), 19–41.

Sokhey, A. E., & Djupe, P. A. (2014). Name generation in interpersonal political network data: Results from a series of experiments. Social Networks, 36(1), 147–161.

Sokhey, A. E., & McClurg, S. D. (2012). Social networks and correct voting. The Journal of Politics, 74(3), 751–764.

Walsh, K. C. (2004). Talking about politics: Informal groups and social identity in American life. Chicago, IL: University of Chicago Press.

Wojcieszak, M. E., & Mutz, D. C. (2009). Online groups and political discourse: Do online discussion spaces facilitate exposure to political disagreement? Journal of Communication, 59(1), 40–56.