Campaign Evaluation in Health and Risk Messaging

  • Evan K. Perrault, Department of Communication, Purdue University

Summary

Due to their sheer scope in trying to reach large sections of a population, and the costs necessary to implement them, evaluation is vital at every stage of the health communication campaign process. No stage is more important than the formative evaluation stage. At the formative stage, campaign designers must determine if a campaign is even necessary, and if so, determine what the campaign’s focus needs to be. Clear, measurable, and realistically attainable objectives need to be a primary output of formative evaluation, as these objectives help to guide the creation of all future campaign efforts. The formative stage also includes pilot testing any messages and strategies with the target audience prior to full-scale implementation. Once the campaign is implemented, process evaluation should be performed to determine if the campaign is being implemented as planned (i.e., fidelity), and also to document the dose of campaign exposure. Identifying problem areas during process evaluation can ensure they get fixed prior to the completion of the campaign. Detailed process evaluation also allows for greater ease in replicating a successful campaign attempt in the future, but additionally can provide potential reasons for why a campaign was not successful. The last stage is outcome evaluation—determining if the objectives of the campaign were achieved. While it is the last stage of campaign evaluation, campaign designers need to ensure they have planned for it in the formative stages. If even just one of these stages of evaluation is minimized in campaign design, or relegated to an afterthought, developers need to realize that the ultimate effectiveness of their campaigns is likely to be minimized as well.

Subjects

  • Health and Risk Communication
  • Mass Communication
  • Media and Communication Policy

By definition, health communication campaigns are designed to generate effects for large populations of people, and span multiple channels of communication (Rogers & Storey, 1987). Because of their large reach, developing effective health communication campaigns is both extremely time- and resource-intensive. To ensure neither of these precious resources is squandered, it is vital that campaign designers incorporate evaluation into every stage of their campaigns.

The following entry introduces the reader to every stage of the campaign evaluation process and provides examples to help readers incorporate evaluation into their own campaign endeavors.

What is Evaluation?

Evaluation is generally understood to have two primary components: an empirical aspect (e.g., determining whether something happened or is happening) and a normative aspect (i.e., a conclusion about the value of something). This value aspect of evaluation is what separates evaluation research from more outcome-focused endeavors like basic science or clinical research (Fournier, 2005).

As a result, health communication campaign evaluation not only needs to determine if the campaign’s objectives have been reached (i.e., an outcomes focus), but also come to a conclusion about the overall worth of the campaign itself. Because effective campaign attempts are normally intertwined with numerous stakeholder groups, this additional focus on campaign value can help these stakeholder groups understand how their efforts could be improved in the future. Patton (2008) calls this stakeholder emphasis utilization-focused evaluation, where attention should be placed on “intended use by the intended users” (p. 37).

However, to determine if objectives have been achieved, or what could be done better the next time around, rigorous evaluation planning and execution must take place at every stage of the campaign process—with more work taking place at the front end of a campaign effort (Atkin & Freimuth, 2013).

Step 1—Formative Evaluation

Formative evaluation is likely the most important part of constructing any health communication campaign. Its importance is highlighted in the Centers for Disease Control and Prevention’s (CDC’s) own depiction of the health communication process, where, of the 10 stages described, formative research and evaluation encompass the first seven (Roper, 1993).

Campaign Necessity

To show commitment to their efforts, or in response to new threats emerging in society, leadership at the highest levels of organizations may see communication campaigns as a quick and easy tool for delivering information to a public they believe needs it. However, rarely are citizens just waiting for information to stream down to them from health communication campaign efforts. “‘If we print it, they will come,’ holds true only if you are printing money” (National Institutes of Health, 2002, p. 103).

Therefore, the first thing formative evaluation can help program designers determine is whether a communication campaign is even necessary (Valente, 2002). It is possible that the citizenry already has the knowledge you think they lack, holds the attitudes you wish they held, or already practices the behaviors you wish they did. Knowing this could save an organization considerable time and resources otherwise spent developing a campaign that was not needed in the first place.

Formative research can also help health communication campaign designers identify problem areas in societies where campaign efforts could potentially be useful.

Community Health Needs Assessments

One ready-made resource campaign designers can reference to determine whether campaign efforts are necessary, and to generate topic areas for future campaigns, is the community health needs assessment (CHNA). Since the inception of the Affordable Care Act, all tax-exempt hospitals have been required to conduct CHNAs every three years or be subject to a $50,000 penalty (Department of the Treasury, 2014). The purpose of these CHNAs is to identify key health needs and areas of improvement in the communities in which these hospitals operate. Data are usually collected and synthesized from multiple sources, including publicly available state- and county-level data, community meetings and forums, and interviews with healthcare leaders and patients about potential gaps in care. Conducting a CHNA also requires the hospital to develop an implementation plan to begin to remedy the problems the CHNA outlines. The law requires that CHNAs be made publicly accessible, which makes them a great resource for campaign developers to consult during the formative stage. These documents may also help campaign designers find strategic partners, as hospitals are likely to be more willing to partner with groups and organizations that share a common mission toward remedying the issues identified in their CHNAs.

Photovoice Technique

Photovoice is a technique through which campaign designers can enlist community members to identify the health needs that exist in their communities by giving underserved members cameras and asking them to document their communities’ strengths and weaknesses (Wang & Burris, 1997). The technique has been utilized in multiple regions and contexts—for example, with urban youth to identify problem areas in their neighborhoods (Strack, Magill, & McDonagh, 2004), to identify health needs in rural Kenyan villages (Kingery, Naanyu, Allen, & Patel, 2016), and to better understand the sexual and reproductive health challenges of adolescent women (Gill, Black, Dumont, & Fleming, 2016).

Campaign Focus

After determining whether a campaign needs to take place, and more generally what its topic should be, formative evaluation is also necessary to determine the focus or direction the campaign will take and which target audiences need priority (Atkin & Freimuth, 2013).

Audience K-A-Bs

Qualitative research techniques are some of the primary ways campaign designers get to know their target audiences’ core knowledge, attitudes, and behaviors (K-A-Bs) to better focus their campaign messages and efforts (Valente, 2002). This is because qualitative techniques (e.g., focus groups, in-depth interviews) can better capture some of the deeper, more nuanced meaning behind people’s attitudes and behaviors than traditional survey measures can (Hesse-Biber & Leavy, 2011). They can be useful when not much is known about people’s perceptions of novel threats (e.g., hookah use among adolescents—see Cornacchione et al., 2016), for sensitive topics (e.g., breast cancer causation—see Silk et al., 2006), and can be particularly helpful for understanding cultural beliefs that may be different from our own. For example, Low, Wong, Zulkifli, and Tan (2002) utilized focus groups to investigate Malaysian cultural differences in knowledge, attitudes, and behaviors regarding erectile dysfunction, and Au et al. (2008) utilized in-depth interviews to uncover Chinese cultural beliefs of cold and flu causation to build an educational intervention.

Audience Segmentation

A broad, one-size-fits-all approach to campaign construction is generally not going to be as successful as one that focuses on multiple, specialized audience segments (Atkin & Freimuth, 2013). “It is very unlikely that relevant behavior of everyone in the intended audience is equally influenced by each of the possible determinants of that behavior” (Slater, 1996, p. 269). Therefore, another priority of formative evaluation should be, where possible, to break an entire population into more manageable segments for which different message appeals and dissemination strategies might be necessary and more effective.

Luckily, many health communication theories and models exist just for this reason. For example, on the topic of developing messages about genetically modified foods, Silk, Weiner, and Parrott (2005) utilized the theory of reasoned action as an audience segmentation tool, finding four different clusters of individuals in society that might need different messages tailored to them. In developing a preconception campaign for women of childbearing age, Lynch et al. (2014) used Prochaska and DiClemente’s (1983) stages of change model to split their target audience into two primary groups: intenders (i.e., those in the contemplation, preparation, and action stages) and nonintenders (i.e., precontemplators), and developed different message strategies for each. For greater detail on the evolution and importance of health message tailoring, see Noar, Harrington, and Aldrich (2009).
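
To make the segmentation idea concrete, the minimal sketch below clusters simulated survey respondents on their K-A-B scores to suggest candidate segments. The data, the number of clusters, and the use of k-means are illustrative assumptions for this sketch, not techniques drawn from the studies above.

```python
# Illustrative k-means segmentation on simulated K-A-B survey scores.
# The data, the column meanings, and k = 4 are assumptions for this sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical 1-7 Likert scores: knowledge, attitude, behavior frequency
responses = rng.integers(1, 8, size=(200, 3)).astype(float)

scaled = StandardScaler().fit_transform(responses)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)

for s in range(4):
    members = responses[segments == s]
    print(f"Segment {s}: n={len(members)}, "
          f"mean K-A-B = {members.mean(axis=0).round(2)}")
```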

Setting Campaign Objectives

Only once campaign designers have conducted formative, pre-production research can they begin to formulate the objectives that will guide the campaign. Campaign objectives should be based on the evidence uncovered during formative evaluation and, at a minimum, should be specific, measurable, and realistically attainable (Wilson & Ogden, 2015).
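
One way to keep objectives specific, measurable, and attainable is to record each one in a structured form alongside its baseline and target values, so outcome evaluation has something concrete to check against. The sketch below is a hypothetical illustration; the field names and the example objective are assumptions, not a published standard.

```python
# A hypothetical structure for recording a specific, measurable,
# realistically attainable objective set during formative evaluation.
from dataclasses import dataclass

@dataclass
class Objective:
    description: str   # specific: what should change, for whom, by when
    metric: str        # measurable: the survey item or behavioral count used
    baseline: float    # value observed during formative evaluation
    target: float      # realistically attainable post-campaign value

    def achieved(self, observed: float) -> bool:
        """Check the post-campaign estimate against the stated target."""
        return observed >= self.target

flu_shot = Objective(
    description="Raise flu-shot uptake among first-year students by May",
    metric="self-reported vaccination (% of sample)",
    baseline=32.0,
    target=40.0,
)
print(flu_shot.achieved(43.5))  # True: post-campaign estimate met the target
```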

Identifying Message Characteristics and Concept Testing

The final stage of formative research is the creative process—determining what is going to go into the campaign messages, and how they will be distributed to the target audience. This stage of the campaign process is inherently cyclical (see Figure 1) and may go on for several rounds before the campaign is ultimately implemented.

Figure 1. Cyclical Nature of Campaign Message Development and Refinement

Understanding Audience Preferences

To help campaign creators design their messages, McGuire (2013) developed his now classic input-output framework. In essence, it is a clear, easy-to-follow pre-production checklist that message designers should utilize to determine which communication variables should be addressed in their campaigns. Simply put, it advocates that campaign designers should make sure to assess audience members’ source, message, and channel preferences in developing messages.

The source is the person or organization who delivers the message. Sources should be assessed for their credibility (i.e., does the target audience find them expert, trustworthy, attractive?). For example, in focus groups with prostate cancer patients held to help develop a new treatment decision-making website for a nonprofit, Silk et al. (2013) found that participants thought the mock-up of the website looked too much like a government website, “and a lot of people have an awful lot of mistrust in the government right now,” said one participant (p. 713). Focus group participants suggested that adding names or logos of partner organizations to the website (e.g., the American Cancer Society) could help increase the nonprofit’s initial credibility.

Message characteristics are related to what is actually said in the message (e.g., are recommendations seen as realistic, is the terminology/language consistent with the culture?). For example, in focus groups of African-American women to help develop condom use messages, Hood, Shook, and Belgrave (2016) found that the women wanted messages that were funny, catchy, and evoked positive emotions. They thought messages with these characteristics would make it easier to bring up the serious topic of condom use with others.

Finally, channel preferences deal with the means through which the message is delivered to the target audience (e.g., television, radio, interpersonally). Campaign designers will want to choose message channels that people regularly consume, and through which they would be willing to receive health-related information. For example, Hood et al. (2016) found that their participants were willing to listen to ads on Internet radio (e.g., Pandora). Even seemingly small channel characteristics can potentially lead to message failure. For example, in focus groups of African-American women regarding messaging surrounding mammography screening, Springston and Champion (2004) found that while the women thought a brochure was a proper way to distribute messaging, they thought the paper it was printed on (i.e., slick, shiny paper) had a clinical and impersonal feel. Instead, they said they would favor a coarser paper, as it conveys a more personal feeling.

However, even though target audience feedback is integral to the campaign creation process, the large base of communication theory and previous research should not be ignored (Atkin & Rice, 2013; Fishbein & Yzer, 2003). Target audiences may make suggestions that theory and prior research have already indicated might lead to poor outcomes. For example, when asked for advice in developing an adolescent anti-drug campaign, adolescents might say an effective strategy is to simply “scare us.” However, evidence from the fear appeal literature suggests that fear-only messages are not as successful as those that pair fear-evoking content with messages promoting high levels of efficacy (Witte & Allen, 2000). Campaign designers are therefore encouraged to seek target audience input as they develop campaigns, but always to balance that input with guidance from theory and the prior evidence that already exists in the research literature.

Developing and Testing Concepts

Message concepts are just that: simply designed, partially formulated ideas that are brought to target audiences for feedback. They do not need to be of highly produced, final-quality caliber; however, they should encompass many of the key elements you would like to see in the finished product (e.g., key phrases, slogans, pictures, headlines; Atkin & Freimuth, 2013). Oftentimes campaign designers bring numerous taglines to their target audiences and have them rank-order the options from most to least favorite. For example, someone developing a stop-texting-and-driving campaign might bring numerous campaign slogans to target audience members (e.g., “stay alive—don’t text and drive,” “stop before you text,” “you text, you die,” “wait to text”), ask which they like best and worst, and potentially ask for additional suggestions.
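
As a rough illustration of how such rank-order feedback might be summarized, the sketch below applies a simple Borda count to hypothetical rankings of the texting-and-driving slogans above. The rankings are fabricated, and a Borda count is only one of several reasonable aggregation rules.

```python
# Borda-count aggregation of hypothetical tagline rankings from concept testing.
from collections import defaultdict

# Each participant orders the slogans from most to least favorite (fabricated).
rankings = [
    ["stay alive—don't text and drive", "wait to text",
     "stop before you text", "you text, you die"],
    ["wait to text", "stay alive—don't text and drive",
     "you text, you die", "stop before you text"],
    ["stay alive—don't text and drive", "stop before you text",
     "wait to text", "you text, you die"],
]

scores = defaultdict(int)
for ranking in rankings:
    n = len(ranking)
    for position, slogan in enumerate(ranking):
        scores[slogan] += n - 1 - position  # top choice earns the most points

for slogan, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:2d}  {slogan}")
```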

Concept testing is also useful in assessing attention. A common question often asked of target audience members is, “Would you stop and read this message entirely? Why/why not?” Unless people will actually stop to read a message, it does not matter how good or persuasive a message is.

Focus Groups

One of the most common ways campaign designers conduct concept testing is through focus groups. Focus groups usually consist of 5–10 people in a room who discuss the concepts for around 60–90 minutes. The groups are led by a trained moderator who should follow a detailed moderator guide and be well informed on the issue being discussed, but who should not share personal opinions (Krueger & Casey, 2009).

Concept testing can save an organization both time and money because it identifies which messages and dissemination strategies will work best with target audiences (National Institutes of Health, 2002). This testing allows the campaign designer to determine the strengths and weaknesses of the various messages developed, especially by identifying potentially confusing terms or concepts. By nature, those developing campaign messages are usually highly educated and literate, and it is extremely difficult for someone with a large vocabulary to anticipate what someone with less education will be able to understand. For example, Silk et al. (2014) uncovered in focus group testing of environmental breast cancer risk-reduction messages that many participants did not understand, or could not even pronounce, the word “susceptibility,” which was present in the messages. Additionally, participants thought the messages were too text-dense and wanted more easy-to-read bulleted lists. Future iterations of the messages remedied these critiques and were concept tested again before the final concept recommendations were provided to a graphic design firm for production. Although the messages have now been produced, the agency behind them (i.e., the Breast Cancer and the Environment Research Program) is still seeking ways to improve the messages through its Breast Cancer and the Environment Communication Research Initiative (National Institutes of Health, 2015). This example highlights the fact that message design and revision is an ever-continuing process.

Focus group testing also can allow for the emergence of new ideas that can be useful in further refining messages prior to campaign implementation. For example, after extensive interviewing of target audience members to create low-literacy sensitive health messages, Seligman et al. (2007) conducted a second round of interviews with their draft message concepts and uncovered new, empowering quotations they were then able to use in their final messages.

Assessment of message recall is another good tool formative researchers can use at the end of focus groups. If audience members cannot remember the message recommendations after a 90-minute conversation, it is highly unlikely they will remember them in the even more media-saturated world outside the focus group setting. At the end of focus groups, moderators often have participants write down the main ideas or recommendations they can recall from the concepts they were exposed to during the last 90 minutes. If participants are unable to remember key elements you thought were important, or did not seem to understand the key recommendations the messages were trying to make, further refinement of the messages is needed.
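
A rough sketch of how such free-recall responses might be scored against a message’s key elements appears below. The key phrases, the sample response, and the exact-substring matching rule are all illustrative assumptions; in practice, recall data are usually coded by trained human coders.

```python
# Scoring a participant's free recall against a message's key elements.
# Key phrases, the response, and substring matching are illustrative only.
key_elements = {"wear a helmet", "every ride", "free fittings"}

def recall_score(written_response: str) -> float:
    """Return the fraction of key elements the participant reproduced."""
    text = written_response.lower()
    hits = sum(1 for element in key_elements if element in text)
    return hits / len(key_elements)

print(recall_score("They said to wear a helmet on every ride."))  # ~0.67
```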

Finally, concept testing in focus groups can also reveal things that audience members may find distasteful, or even offensive, which could lead to outright message rejection. For example, in a concept developed to inform mothers of the dangers of excessive radiation exposure for their daughters, mothers reacted negatively to the following headline, “You wouldn’t let your daughter ride without a helmet. Don’t let her get unnecessary radiation.” As one participant stated, “It is an accusatory statement saying, ‘You’re not making your kid wear a helmet so you’re not a good parent’” (Silk et al., 2014, p. 235). Participants were asked what kind of analogy they would find less accusatory, and many mentioned buckling a daughter into a seatbelt or car seat, analogies that were subsequently incorporated into the final messages.

Limitations of Concept Testing

Campaign designers need to be careful not to overgeneralize their results. Results should be treated as indicative, not definitive (Atkin & Freimuth, 2013). For example, just because four of 20 people in focus groups did not understand something in the messages does not mean that 20% of the population would find it difficult to understand. Such a finding should instead be interpreted as a warning sign that something in the message is causing confusion, that similar people in the population might struggle with it as well, and that it should therefore be fixed.
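
The statistical intuition here can be made explicit: with only 20 participants, the uncertainty around an observed proportion is wide. A minimal sketch, using a standard Wilson confidence interval:

```python
# Why 4 of 20 should not be read as "20% of the population": the 95%
# confidence interval around such a small sample is very wide.
from statsmodels.stats.proportion import proportion_confint

low, high = proportion_confint(count=4, nobs=20, method="wilson")
print(f"Observed 20%, but the 95% CI spans {low:.0%} to {high:.0%}")
# Roughly 8% to 42%: far too imprecise to treat as a population estimate.
```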

Additionally, while strong pretest results may generate more substantial buy-in from program organizers and stakeholders, they cannot guarantee final campaign success. Just because everyone liked your messages in a highly artificial focus group setting does not mean similar groups of individuals will necessarily attend to them and be influenced by them in the real world. After this formative stage, campaign success ultimately will rest on continued extensive planning, and proper campaign execution.

Step 2—Process Evaluation

Process evaluation, as its name implies, is designed to evaluate how well the campaign is proceeding. In other words, is the campaign being implemented the way it was designed? More broadly, it is also concerned with documenting every stage of the campaign’s creation and implementation from start to finish.

Campaigns, by nature, are lengthy projects that often see people affiliated with them come and go—and often with varying degrees of dedication to the campaign. It is not uncommon for health communication campaigns to span months or years; in general, the longer the campaign, the more important it is to conduct process evaluation. Yet despite campaigns’ length, and the considerable resources expended on them, there are very few studies on process evaluation, “because evaluators often treat programs and campaigns as black boxes, with an emphasis on whether they are effective rather than on the reasons for their success” (Valente, 2002, p. 73). If program implementation is not documented throughout the duration of the campaign, it will be extremely difficult to determine whether a campaign that failed to meet objectives was the result of a poorly designed campaign or of not implementing the campaign as designed (Dehar, Casswell, & Duignan, 1993).

Process evaluation is generally seen to perform four primary functions: to document program implementation, to make mid-program revisions, to help in performing outcome analysis, and to replicate the program in the future (Valente, 2002).

Documenting Program Implementation

Documenting program implementation is simply tallying the activities and processes that take place as the campaign is carried out (Issel, 2014). Oftentimes campaigns will have constructed a Gantt chart documenting at what times and locations various portions of the campaign should be implemented, and many funding agencies require campaigns to document that the activities outlined in their Gantt charts actually occurred (e.g., brochures were distributed at the farmers’ market the week of October 15; the billboard was put up January 10 and removed two months later). This documentation of adherence to the campaign’s schedule is called fidelity: is the campaign being implemented as planned?
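
A minimal sketch of fidelity tracking appears below: planned activities from a Gantt-style schedule are compared against what evaluators actually observed. The activities and dates are hypothetical, loosely echoing the examples above.

```python
# Comparing planned activities (from a Gantt-style schedule) against what
# fidelity checks actually observed. Activities and dates are hypothetical.
from datetime import date

planned = {
    ("farmers' market brochures", date(2023, 10, 15)),
    ("billboard posted", date(2023, 1, 10)),
    ("billboard removed", date(2023, 3, 10)),
}
observed = {
    ("farmers' market brochures", date(2023, 10, 15)),
    ("billboard posted", date(2023, 1, 17)),  # went up a week late
}

completed = planned & observed
fidelity = len(completed) / len(planned)
print(f"Fidelity: {fidelity:.0%}")  # 33% occurred exactly as planned
for activity, when in sorted(planned - observed, key=str):
    print(f"Follow up: {activity!r} planned for {when} not confirmed")
```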

Oftentimes, ensuring the fidelity of a campaign requires evaluators to naturalistically observe its implementation from the perspective of the target audience. For example, in a mental health campaign implemented by Silk et al. (2017) on the campus of Michigan State University, posters and table tents were distributed to residence hall staff to place in dorms and cafeterias. Instead of simply trusting that the messages would actually be distributed by residence hall staff, campaign associates regularly conducted fidelity checks during the campaign by walking around residence and dining halls to ensure not only that messages were posted, but that the proper messages were in the proper locations.

Observing those in charge of delivering campaign messages is another way implementation documentation can take place. In an educational campaign delivered by teachers to seventh graders to reduce bullying in schools (Meyer et al., 2004), a trained research assistant would randomly sit in on teachers’ classroom instruction to gauge whether the instructor delivered the curricular lessons as they were designed. The evaluators also gathered self-report questionnaires in which the instructors rated how well they thought they had adhered to the curriculum (Meyer et al., 2004).

Measuring the dose of campaign exposure received by target audiences is another element of implementation documentation. Dose can be measured in a few ways. If radio or television advertisements were purchased, evaluators can estimate how many people had the potential to be exposed to the messages during the campaign on the basis of viewer or listener ratings (Romer et al., 2009). Evaluators can also ask target audience members themselves how often they were exposed to campaign messages. For example, on a website that campaign messages directed target audience members to visit during a youth mental health campaign, Wright et al. (2006) placed a “pop-up” questionnaire on the home page asking visitors how they had been referred to the website (e.g., movie advertisement, newspaper). They also utilized the demand for minor media resources (i.e., brochures, posters) by schools and community agencies as a measure of campaign dose.
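
As an illustration of summarizing self-reported exposure, the sketch below tallies answers to a hypothetical referral question like the pop-up item Wright et al. (2006) used. The channel labels and counts are fabricated.

```python
# Tallying answers to a "How did you hear about this site?" question as a
# rough dose measure. Channel labels and counts are fabricated.
from collections import Counter

referrals = ["movie ad", "newspaper", "poster", "movie ad", "friend",
             "poster", "poster", "newspaper", "movie ad", "movie ad"]

dose_by_channel = Counter(referrals)
total = sum(dose_by_channel.values())
for channel, count in dose_by_channel.most_common():
    print(f"{channel:10s} {count:3d}  ({count / total:.0%} of referrals)")
```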

Allowing for Mid-Program Revisions

One of the most important reasons for conducting process evaluation is to determine whether changes need to be made during the course of the campaign to ensure a greater likelihood of success before the campaign ends. Despite extensive formative research, and even messages that appeared extremely well received in concept testing, one never knows how the audience will respond once a campaign is finally implemented in the “real world.” Campaigners do not want to wait until the campaign has concluded to realize that it had no effect on the target audience.

A simple way to determine whether the campaign is reaching those you thought it would is to perform quick “person on the street” intercept interviews while the campaign is being conducted. For example, the evaluator could show a person a message and ask whether they have seen it. Results of these interviews need to be turned around quickly so that findings can be used to fix message or dissemination strategies mid-campaign.

Another way to determine the effectiveness of a campaign while it is underway is to monitor whether target audience members are performing the advocated behaviors. For example, a campaign to increase condom usage on college campuses could track condom sales at the campus pharmacy to determine if the campaign is having any impact. If the campaign advocates that people follow something on Twitter or “like” something on Facebook, those metrics should be tracked during the campaign, not afterward. For example, Blake et al. (1987) monitored participation counts at organized fitness activities advocated by their campaign to determine whether their various channels of campaign distribution were effective at motivating people to action.
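
A minimal monitoring sketch along these lines appears below, comparing weekly sales during the campaign against a pre-campaign baseline. All figures are fabricated, and a real evaluation would also account for seasonality and other trends.

```python
# Mid-campaign monitoring: weekly condom sales versus the pre-campaign
# weekly average. All figures are fabricated for illustration.
baseline_weeks = [110, 98, 105, 101]   # sales in the weeks before launch
campaign_weeks = [104, 99, 102]        # sales observed so far

baseline_mean = sum(baseline_weeks) / len(baseline_weeks)
for week, sales in enumerate(campaign_weeks, start=1):
    change = (sales - baseline_mean) / baseline_mean
    flag = "  <- investigate messaging or placement" if change <= 0 else ""
    print(f"Week {week}: {sales} sold ({change:+.1%} vs. baseline){flag}")
```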

On the flip side, tracking campaign success during the campaign can ensure that there is an adequate supply of materials in case a campaign winds up being extremely successful (Valente, 2002). For example, if a campaign to promote bike helmet usage is efficacious, the designer will definitely want to ensure local stores have enough helmets on hand so patrons influenced by the campaign are not turned away empty-handed. Similarly, a campaign to promote mental health help seeking should ensure that there are an adequate number of trained counselors on-staff in the area in case the campaign leads to an influx of patients.

An Aid to Outcome Evaluation

Process evaluation can also provide helpful perspective at the end of a campaign, assisting evaluators in determining why a campaign succeeded—and even more importantly why it may have failed. Evaluators should document anything that could potentially affect the campaign’s effectiveness while the campaign is being implemented. Valente (2002) calls this part of process evaluation documenting the “information environment” (p. 79). For example, a major news story in October about the ineffectiveness of this year’s flu vaccine could have a large impact on the success of a campaign encouraging people to receive the flu shot. If such outside activities that may interfere with the campaign’s message are not documented, the outcome evaluation will show only that the campaign was not a success, not why.

Program Replication

Much like tasty recipes, people like to replicate things that worked well and avoid things that did not. One especially useful reason for conducting process evaluation is to ensure that if the campaign goes well, others can do exactly as the campaign designers did. A campaign manager should keep a written log of the entire campaign creation process. Campaign designers should document all the personnel and organizations involved in creating and disseminating the campaign, and also document all the iterations of message drafts that were used and subsequently revised. Odds are, this will not be the only campaign an organization develops. For example, if a really helpful graphic designer created the campaign’s materials, the campaign manager might want to use this person again in future campaigns. If one organizational partner did not pull his or her own weight, campaign organizers might want to seek new partners for future campaigns. Documenting all the steps taken in making the current campaign a reality should help accelerate the creation of any future campaigns targeted to similar audiences about similar topics.
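
One lightweight way to keep such a log is a structured record that can be appended to throughout the campaign. The sketch below is a hypothetical illustration; the fields and entries are assumptions, not a prescribed format.

```python
# A hypothetical structured campaign log that supports later replication:
# who did what, with which message version, and when.
import csv, io

fieldnames = ["date", "activity", "message_version", "personnel", "notes"]
entries = [
    {"date": "2023-01-10", "activity": "billboard posted",
     "message_version": "v3", "personnel": "ACME Outdoor (vendor)",
     "notes": "final concept after two rounds of focus-group testing"},
]

buffer = io.StringIO()  # in practice, append to a shared .csv file
writer = csv.DictWriter(buffer, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(entries)
print(buffer.getvalue())
```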

Step 3—Outcome Evaluation

Even though this section on outcome evaluation is near the end of the entry, it could easily come first. Campaign developers need to start thinking about, and planning for, outcome evaluation at the very beginning stages of a campaign.

One primary goal of conducting outcome evaluation is to determine whether the campaign’s objectives were met. Another key part of outcome evaluation is trying to determine how much impact the campaign had on the target audience. This is often easier said than done. However, if proper planning has been in place from the start via an effective campaign design, more valid conclusions about the success of the campaign can be drawn from the data.

Were Objectives Achieved?

To determine whether campaign objectives were achieved, measurable and attainable objectives need to have been delineated before the campaign was implemented. If campaign designers are looking for changes in the population as a result of the campaign (e.g., changes in knowledge, attitudes, or behaviors), it is important that baseline measurements be taken prior to the campaign against which to compare post-campaign results.

Measurement consistency (i.e., being able to make apples-to-apples comparisons from pre-campaign to post-campaign) is key to determining whether objectives were achieved. This is why large amounts of time and resources should be devoted, prior to implementing the campaign, to constructing valid and reliable measures of interest. If not enough care was taken before the campaign to develop measures that accurately assess pre-campaign K-A-Bs, even the best measures administered post-campaign cannot be validly compared with those from before the campaign to judge its overall success.
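
A minimal sketch of an apples-to-apples comparison appears below: the same attitude scale administered to independent pre- and post-campaign samples, compared with a t-test. The data are simulated, and a real evaluation would also need to consider sampling comparability and effect sizes, not just significance.

```python
# Pre/post comparison on the identical attitude scale, with simulated data
# for two independent samples of respondents.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(loc=3.2, scale=0.8, size=150)   # 1-5 scale, pre-campaign
post = rng.normal(loc=3.5, scale=0.8, size=150)  # same scale, post-campaign

t, p = stats.ttest_ind(post, pre)
print(f"Pre mean {pre.mean():.2f}, post mean {post.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```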

Can Other Possibilities Be Ruled Out?

One important claim that most campaign implementers would like to make at the end of any campaign is that the campaign “caused” an increase or decrease in X. However, much caution should be taken in ever using the term “cause” in connection with a campaign’s efforts. Given the highly collaborative nature of campaigns (e.g., reliance on outside groups to deliver messages), the length of time campaigns take to complete, and the saturated media environment, it would be nearly impossible to prove that a campaign alone caused an impact on the target audience. What campaign designers want to do, however, is reduce as many outside factors as possible that could have an impact on the target audience other than the campaign. Developers must do this through proper campaign design.

While an in-depth look at various study designs is outside the scope of this entry (see Issel & Handler, 2014, or Valente & Kwan, 2013, for a comprehensive look at the designs campaign evaluators can utilize), the main priority of implementing a rigorous study design is to minimize threats to validity (i.e., factors that can provide alternate explanations). However, even with the most rigorous study design, it is still possible for a campaign to produce unintended effects (which could be viewed as positive or negative), and these might or might not ultimately be related to the campaign’s overall effectiveness (see Salmon & Murray-Johnson, 2013, for a more detailed explanation of the differences between campaign effects and effectiveness).

At a bare minimum, it is always advocated to have a control condition (i.e., people who were not exposed to the campaign) that is as similar as possible to the group that is exposed. For example, in evaluating a social marketing initiative to improve healthy eating among preschoolers, Johnson et al. (2007) utilized a quasi-experimental design in which they implemented the curriculum at two Head Start centers (one urban and one rural) but also evaluated students at two similar control centers (one urban and one rural) that did not receive the curriculum. Those who received the specialized curriculum showed increased preferences for, and willingness to try, new foods compared to those who did not (i.e., the control condition). Without a control condition, or a repeated-measures time series design in which individual participants serve as their own controls, campaign designers have limited ways of knowing whether changes observed at the end of the campaign are because of the campaign or due to other extraneous factors.
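
When both intervention and control sites are measured before and after the campaign, one common way to estimate the campaign’s contribution is a difference-in-differences calculation, sketched below with fabricated means. This is a generic illustration, not the analysis Johnson et al. (2007) reported.

```python
# Generic difference-in-differences estimate with one intervention site and
# one control site, each measured pre and post. Means are fabricated.
intervention = {"pre": 2.9, "post": 3.6}  # e.g., willingness-to-try score
control = {"pre": 3.0, "post": 3.1}       # comparable site, no campaign

did = ((intervention["post"] - intervention["pre"])
       - (control["post"] - control["pre"]))
print(f"Difference-in-differences estimate: {did:+.2f}")
# +0.60: the change at the intervention site beyond the trend observed
# at the comparison site.
```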

Because of the reach that some mass media components of a campaign can have (e.g., radio signals can travel hundreds of miles), ideally evaluators would find a similar control site far from where the campaign was implemented so they can compare those who were exposed to the campaign with those who were not. However, due to budgetary and time restrictions, finding a suitable control site is sometimes not practical or even possible, especially if a campaign is national in scope (Campion et al., 1994; Valente & Saba, 1998). In this instance, evaluators might instead try to “create” a control by asking target audience members whether they recall seeing or hearing any campaign-related messages, and then separating individuals into campaign-exposed and non-exposed groups. Niederdeppe (2005) evaluated two possible campaign recall measures, confirmed recall versus aided recall, finding no added benefit from conducting the more rigorous confirmed message recall procedures in determining campaign exposure.
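
A minimal sketch of this recall-based approach appears below: simulated respondents are split by aided recall and their outcome scores compared. The caveat in the closing comment matters, because exposure groups formed this way are self-selected.

```python
# Splitting simulated respondents into exposed/non-exposed groups by aided
# recall and comparing the outcome measure between them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
recalled = rng.integers(0, 2, size=400).astype(bool)       # aided recall
outcome = rng.normal(3.0, 0.9, size=400) + 0.3 * recalled  # 1-5 scale

exposed, unexposed = outcome[recalled], outcome[~recalled]
t, p = stats.ttest_ind(exposed, unexposed)
print(f"Exposed n={exposed.size}, unexposed n={unexposed.size}, "
      f"t = {t:.2f}, p = {p:.3f}")
# Caveat: these groups are self-selected, so a difference may reflect who
# attends to health messages rather than the campaign's impact.
```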

Mistakes in Only Performing Evaluation at the End

When campaign developers think of evaluation, many likely think of it as occurring only at the end of the campaign—and then only if time and resources allow. While some evaluation is better than no evaluation, performing evaluation only at the end of a campaign is analogous to remembering to close the barn door after all of the animals have already escaped. Sure, a campaign designer learns something for the next time (if there is a next time), but it does not do much to help the campaign that just had significant time and resources dedicated to it. One example is an evaluation performed by Witte et al. (1998) of an HIV/AIDS prevention campaign along the trans-Africa highway in Kenya. In this case, campaign materials (e.g., posters, pamphlets) had already been created and distributed in the region. Results from focus groups revealed that participants thought many of the messages were missing explicit recommendations, and some offered unrealistic advice that men would outright reject (e.g., advocating abstinence when men stated that it is impossible to live without sex). All of this information would have been useful to gather in the formative evaluation stage, thereby saving the resources spent printing materials that the target audience did not find effective.

Table 1. Types and Uses of Evaluation

| Evaluation Type | When to Use | What It Shows | Why It Is Useful |
| --- | --- | --- | --- |
| Formative evaluation | During the development of a new campaign; when an existing campaign is being modified, used in a new setting, or aimed at a new population | Whether the proposed campaign elements are likely to be needed, understood, and accepted by the population you want to reach | Allows modifications to be made to the plan before full implementation begins; maximizes the likelihood that the campaign will succeed |
| Process evaluation | As soon as campaign implementation begins; during operation of an existing campaign | How well the campaign is working; the extent to which the campaign is being implemented as designed; whether the campaign is accessible and acceptable to its target population | Provides an early warning for any problems that may occur; allows campaigns to monitor how well their plans and activities are working |
| Outcome evaluation | After the campaign has concluded | The degree to which the campaign had an effect on the target population | Tells whether the program was effective in meeting its objectives |

Source: Adapted from CDC: National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention.

Importance of Evaluation

Campaign evaluation should not be something that is thought of as simply an “add-on” to a campaign. Evaluation needs to be integrated at all stages of campaign development. Without iterative formative evaluation, a campaign will likely distribute messages that are of limited use to a target audience. Without process evaluation, no one will know why a campaign may have failed to find effects, or be able to replicate a successful campaign. And, without outcome evaluation, it will be impossible to determine if the campaign was a success. If even just one of these stages of evaluation is minimized in campaign design, or relegated to an afterthought, developers need to realize that the ultimate effectiveness of their campaigns is likely to be minimized as well.

Further Reading

  • Cho, H., & Salmon, C. T. (2007). Unintended effects of health communication campaigns. Journal of Communication, 57(2), 293–317.
  • National Institutes of Health. (2002). Making health communication programs work: A planner’s guide. Bethesda, MD: National Institutes of Health.
  • Rice, R. E., & Atkin, C. K. (2013). Public communication campaigns (4th ed.). Thousand Oaks, CA: SAGE.
  • Valente, T. W. (2002). Evaluating health promotion programs. New York: Oxford University Press.
  • Witte, K., Meyer, G., & Martell, D. (2001). Effective health risk messages: A step-by-step guide. Thousand Oaks, CA: SAGE.

References

  • Atkin, C. K., & Freimuth, V. (2013). Guidelines for formative evaluation research in campaign design. In R. E. Rice & C. K. Atkin (Eds.), Public communication campaigns (4th ed., pp. 53–68). Thousand Oaks, CA: SAGE.
  • Atkin, C. K., & Rice, R. E. (2013). Theory and principles of public communication campaigns. In R. E. Rice & C. K. Atkin (Eds.), Public communication campaigns (4th ed., pp. 53–68). Thousand Oaks, CA: SAGE.
  • Au, T. K. F., Chan, C. K., Chan, T. K., Cheung, M. W., Ho, J. Y., & Ip, G. W. (2008). Folkbiology meets microbiology: A study of conceptual and behavioral change. Cognitive Psychology, 57, 1–19.
  • Blake, S. M., Jeffery, R. W., Finnegan, J. R., Crow, R. S., Pirie, P. L., Ringhofer, K. R., . . . & Mittelmark, M. B. (1987). Process evaluation of a community-based physical activity campaign: The Minnesota Heart Health Program experience. Health Education Research, 2, 115–121.
  • Campion, P., Owen, L., McNeill, A., & McGuire, C. (1994). Evaluation of a mass media campaign on smoking and pregnancy. Addiction, 89, 1245–1254.
  • Centers for Disease Control and Prevention. (n.d.). Types of evaluation. Retrieved from https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf.
  • Cornacchione, J., Wagoner, K. G., Wiseman, K. D., Kelley, D., Noar, S. M., Smith, M. H., & Sutfin, E. L. (2016). Adolescent and young adult perceptions of hookah and little cigars/cigarillos: Implications for risk messages. Journal of Health Communication, 21, 818–825.
  • Dehar, M. A., Casswell, S., & Duignan, P. (1993). Formative and process evaluation of health promotion and disease prevention programs. Evaluation Review, 17, 204–220.
  • Department of the Treasury. (2014). Internal Revenue Service, 26 CFR Parts 1, 53, and 602: Additional requirements for charitable hospitals; Community health needs assessments for charitable hospitals; Requirement of a section 4959 excise tax return and time for filing the return; Final rule. Federal Register, 79(250), 1–64.
  • Fishbein, M., & Yzer, M. C. (2003). Using theory to design effective health behavior interventions. Communication Theory, 13, 164–183.
  • Fournier, D. M. (2005). Evaluation. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 139–140). Thousand Oaks, CA: SAGE.
  • Gill, R., Black, A., Dumont, T., & Fleming, N. (2016). Photovoice: A strategy to better understand the reproductive and sexual health needs of young mothers. Journal of Pediatric and Adolescent Gynecology, 29, 467–475.
  • Hesse-Biber, S. N., & Leavy, P. (2011). The practice of qualitative research (2d ed.). Thousand Oaks, CA: SAGE.
  • Hood, K. B., Shook, N. J., & Belgrave, F. Z. (2016). “Jimmy cap before you tap”: Developing condom use messages for African American women. The Journal of Sex Research. Advance online publication.
  • Issel, L. M. (2014). Health program planning and evaluation: A practical, systematic approach for community health. Burlington, MA: Jones & Bartlett Learning.
  • Issel, L. M., & Handler, A. (2014). Choosing designs for effect evaluations. In L. M. Issel (Ed.), Health program planning and evaluation: A practical, systematic approach for community health (pp. 393–427). Burlington, MA: Jones & Bartlett Learning.
  • Johnson, S. L., Bellows, L., Beckstrom, L., & Anderson, J. (2007). Evaluation of a social marketing campaign targeting preschool children. American Journal of Health Behavior, 31, 44–55.
  • Kingery, F. P., Naanyu, V., Allen, W., & Patel, P. (2016). Photovoice in Kenya using a community-based participatory research method to identify health needs. Qualitative Health Research, 26, 92–104.
  • Krueger, R. A., & Casey, M. A. (2009). Focus groups: A practical guide for applied research. Thousand Oaks, CA: SAGE.
  • Low, W. Y., Wong, Y. L., Zulkifli, S. N., & Tan, H. M. (2002). Malaysian cultural differences in knowledge, attitudes, and practices related to erectile dysfunction: Focus group discussions. International Journal of Impotence Research, 14, 440–445.
  • Lynch, M., Squiers, L., Lewis, M. A., Moultrie, R., Kish-Doto, J., Boudewyns, V., . . . & Mitchell, E. W. (2014). Understanding women’s preconception health goals audience segmentation strategies for a preconception health campaign. Social Marketing Quarterly, 20, 148–164.
  • McGuire, W. J. (2013). McGuire’s classic input-output framework for constructing persuasive messages. In R. E. Rice & C. K. Atkin (Eds.), Public communication campaigns (4th ed., pp. 133–145). Thousand Oaks, CA: SAGE.
  • Meyer, G., Roberto, A. J., Boster, F. J., & Roberto, H. L. (2004). Assessing the Get Real about Violence® curriculum: Process and outcome evaluation results and implications. Health Communication, 16, 451–474.
  • National Institutes of Health. (2002). Making health communication programs work: A planner’s guide. Bethesda, MD: National Institutes of Health.
  • National Institutes of Health. (2015). Breast cancer and the environment communication research initiative. Retrieved from: http://grants.nih.gov/grants/guide/rfa-files/RFA-ES-15-015.html.
  • Niederdeppe, J. (2005). Assessing the validity of confirmed ad recall measures for public health communication campaign evaluation. Journal of Health Communication, 10, 635–650.
  • Noar, S. M., Harrington, N. G., & Aldrich, R. S. (2009). The role of message tailoring in the development of persuasive health communication messages. Annals of the International Communication Association, 33, 73–133.
  • Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: SAGE.
  • Prochaska, J. O., & DiClemente, C. C. (1983). Stages and processes of self-change of smoking: Toward an integrative model of change. Journal of Consulting and Clinical Psychology, 51, 390–395.
  • Rogers, E. M., & Storey, J. D. (1987). Communication campaigns. In C. Berger & S. Chaffee (Eds.), Handbook of communication science (pp. 817–846). Newbury Park, CA: SAGE.
  • Romer, D., Sznitman, S., DiClemente, R., Salazar, L. F., Vanable, P. A., Carey, M. P., . . . & Fortune, T. (2009). Mass media as an HIV-prevention strategy: Using culturally sensitive messages to reduce HIV-associated sexual behavior of at-risk African American youth. American Journal of Public Health, 99, 2150–2159.
  • Roper, W. L. (1993). Health communication takes on new dimensions at CDC. Public Health Reports, 108, 179–183.
  • Salmon, C. T., & Murray-Johnson, L. (2013). Communication campaign effectiveness and effects. In R. E. Rice & C. K. Atkin (Eds.), Public communication campaigns (4th ed., pp. 99–112). Thousand Oaks, CA: SAGE.
  • Seligman, H. K., Wallace, A. S., DeWalt, D. A., Schillinger, D., Arnold, C. L., Shilliday, B. B., . . . & Davis, T. C. (2007). Facilitating behavior change with low-literacy patient education materials. American Journal of Health Behavior, 31, S69–S78.
  • Silk, K. J., Bigsby, E., Volkman, J., Kingsley, C., Atkin, C., Ferrara, M., & Goins, L. A. (2006). Formative research on adolescent and adult perceptions of risk factors for breast cancer. Social Science & Medicine, 63, 3124–3136.
  • Silk, K. J., Perrault, E. K., Nazione, S., Pace, K., & Collins-Eaglin, J. (2017). Evaluation of a social norms approach to a suicide prevention campaign. Journal of Health Communication, 22, 135–142.
  • Silk, K. J., Perrault, E. K., Nazione, S., Pace, K., Hager, P., & Springer, S. (2013). Localized prostate cancer treatment decision-making information online: Improving its effectiveness and dissemination for nonprofit and government-supported organizations. Journal of Cancer Education, 28, 709–716.
  • Silk, K. J., Perrault, E. K., Neuberger, L., Rogers, A., Atkin, C., Barlow, J., & Duncan, D. M. (2014). Translating and testing breast cancer risk reduction messages for mothers of adolescent girls. Journal of Health Communication, 19, 226–243.
  • Silk, K. J., Weiner, J., & Parrott, R. L. (2005). Gene cuisine or frankenfood? The theory of reasoned action as an audience segmentation strategy for messages about genetically modified foods. Journal of Health Communication, 10, 751–767.
  • Slater, M. (1996). Theory and method in health audience segmentation. Journal of Health Communication, 1, 267–283.
  • Springston, J. K., & Champion, V. L. (2004). Public relations and cultural aesthetics: Designing health brochures. Public Relations Review, 30, 483–491.
  • Strack, R. W., Magill, C., & McDonagh, K. (2004). Engaging youth through photovoice. Health Promotion Practice, 5, 49–58.
  • Valente, T. W. (2002). Evaluating health promotion programs. New York: Oxford University Press.
  • Valente T. W., & Kwan, P. P. (2013). Evaluating communication campaigns. In R. E. Rice & C. K. Atkin (Eds.), Public communication campaigns (4th ed., pp. 83–97). Thousand Oaks, CA: SAGE.
  • Valente, T. W., & Saba, W. P. (1998). Mass media and interpersonal influence in the Bolivia National Reproductive Health Campaign. Communication Research, 25, 96–124.
  • Wang, C., & Burris, M. A. (1997). Photovoice: Concept, methodology, and use for participatory needs assessment. Health Education & Behavior, 24, 369–387.
  • Wilson, L. J., & Ogden, J. D. (2015). Strategic communications planning for effective public relations and marketing. Dubuque, IA: Kendall Hunt.
  • Witte, K., & Allen, M. (2000). A meta-analysis of fear appeals: Implications for effective public health campaigns. Health Education & Behavior, 27, 591–615.
  • Witte, K., Cameron, K. A., Lapinski, M. K., & Nzyuko, S. (1998). A theoretically based evaluation of HIV/AIDS prevention campaigns along the trans-Africa highway in Kenya. Journal of Health Communication, 3, 345–363.
  • Wright, A., McGorry, P. D., Harris, M. G., Jorm, A. F., & Pennell, K. (2006). Development and evaluation of a youth mental health community awareness campaign—The Compass Strategy. BMC Public Health, 6, 1–13.