
Surveys in Low- and Middle-Income Countries

  • Madeleine Short Fabic, U.S. Agency for International Development
  • Yoonjoung Choi, iSquared
  • Fredrick Makumbi, School of Public Health, Makerere University

Summary

Sexual and reproductive health (SRH) surveys around the world, especially in low- and middle-income countries, have been and continue to be the primary sources of data about individual-, community-, and population-level sexual and reproductive health. Beginning with the Knowledge, Attitudes, and Practices surveys of the late 1950s, SRH surveys have been crucial tools for informing public health programming, healthcare delivery, public policy, and more. Additionally, major demographic and health modeling and estimation efforts rely on SRH survey data, as have thousands of research studies. For more than half a century, surveys have met major SRH information needs, especially in low- and middle-income countries. And even as the world has achieved impressive information technology advances, increasing by orders of magnitude the depth and breadth of data collected and analyzed, the necessity and importance of surveys have not waned.

As of 2021, four major internationally comparable SRH survey platforms are operating in low- and middle-income countries—the Demographic and Health Surveys Program (DHS), the Multiple Indicator Cluster Surveys (MICS), the Population-Based HIV Impact Assessments (PHIA), and Performance Monitoring for Action (PMA). Among these platforms, DHS collects the widest range of data on population, health, and nutrition, followed by MICS. PHIA collects the most HIV-related data. And PMA collects family planning data most frequently. These population-based household surveys are rich data sources, collecting data to measure a wide range of SRH indicators—from contraceptive prevalence to HIV prevalence, from cervical cancer screening rates to skilled birth delivery rates, from age at menarche to age at first sex, and more.

As with other surveys, SRH surveys are imperfect; selection bias, recall bias, social desirability bias, interviewer bias, and misclassification bias and error can represent major concerns. Furthermore, thorny issues persist across the decades, including perpetual historical, measurement, and methodological concerns. To provide a few examples with regard to history, because the major survey programs have historically been led by donors and multilateral organizations based in the Global North, survey content and implementation have been closely connected with donor priorities, which may not align with local priorities. Regarding measurement, maternal mortality data are highly valued and best collected through complete vital registration systems, but many low- and middle-income countries do not have complete systems and therefore rely on estimates collected through household surveys and censuses. And regarding methods, because most surveys offer only a snapshot in time, with the primary purpose of monitoring key indicators using a representative sample, most analyses of survey data can only show correlation and association rather than causation. Opportunities abound for ongoing innovation to address potential biases and persistent thorny issues.

Finally, the SRH field has been and continues to be a global leader in survey development and implementation. If past is prelude, SRH surveys will be invaluable sources of knowledge for decades to come.

Subjects

  • Global Health
  • Sexual and Reproductive Health

Introduction

In 2020, an estimated 936 million women of reproductive age were contraceptive users worldwide, of whom 65 million were living in sub-Saharan Africa (United Nations Department of Economic and Social Affairs, Population Division, 2020). In 2019, 38 million people were living with HIV worldwide, more than half of whom were living in East and Southern Africa (UNAIDS, n.d.). And in 2017, nearly 300,000 women died due to maternal causes; 94% of those deaths occurred in low- and lower-middle-income countries (UNFPA, World Health Organization, UNICEF, World Bank Group, and the United Nations Population Division, 2019).

Ever wondered how these numbers are known, what data informed these estimates and others, and how these data were collected and why? Answering these questions requires an understanding of sexual and reproductive health (SRH) surveys, and this article provides an overview of that topic.

Surveys around the world, especially in low- and middle-income countries, have been and continue to be the primary source of data about individual-, community-, and population-level sexual and reproductive health. Beginning with the Knowledge, Attitudes, and Practices (KAP) surveys of the late 1950s, SRH surveys have been crucial tools for informing public health programming, healthcare delivery, public policy, and more. Most major demographic and health modeling and estimation efforts rely on data collected through surveys, and thousands of research studies have been conducted thanks to the data collected and made available through surveys.

For more than half a century, surveys have met major SRH information needs, especially in low- and middle-income countries. Even as the world has achieved impressive information technology advances, increasing by orders of magnitude the depth and breadth of data collected and analyzed, the necessity and importance of surveys have not waned. This article addresses three key topics:

1. a history of SRH surveys in low- and middle-income countries

2. current methodological strengths and limitations of SRH surveys

3. challenges, opportunities, and directions for the future

While there are many types of SRH surveys, this overview focuses predominantly on the most ubiquitous type, population-based household surveys. To a limited degree, the article also discusses health facility-based surveys and other SRH surveys, such as school-based surveys. Attention centers on the Global South. To anchor the overview, the article begins with a bit of history.

History: Surveys Over the Decades

Knowledge, Attitudes, and Practices (KAP) surveys sprang from the field of demography in the 1950s, with a focus on understanding fertility intentions, attitudes toward family planning, and contraceptive practices of married women of reproductive age. Such surveys quickly expanded. By 1969, more than 400 KAP surveys had been conducted worldwide, many in low- and middle-income countries (Cleland, 1973). A major interest in the early days was understanding whether perceived hostility toward family planning services was a reality. Soon enough, data from KAPs accumulated showing that overt hostility to organized family planning services was a myth. KAP surveys proved that stereotypes—many of them overtly sexist, racist, and classist—about gendered, cultural, and religious opposition to family planning were simply wrong (Hill, 1968; Box 1). Thanks to the frequency of KAP surveys, some of which were longitudinal, researchers also began to measure the fluidity of fertility intentions around the world. Time and again, researchers found that at the individual level, fertility intentions were flexible and changed in response to life events and economic downturns. At the aggregate level, however, fertility intentions were far more stable, a result of “a series of canceling errors” at the individual level (Goldberg et al., 1959). In essence, the KAP surveys were measuring a stable population-level phenomenon that had limited applicability to individual-level outcomes (Mukherjee, 1975).

Box 1. KAP Surveys Debunk Myths about Family Planning Opposition

It has been alleged that the traditionalism of the lower classes and peasants would lead them to oppose family planning services. These and many other such assertions made by the élite about the population as a whole have been disproved by the KAP-type surveys. Herewith are some examples of these elite diagnoses of their fellow countrymen, which the surveys have disproved:

1. that the Latin American male wants many children to prove his virility (disproved in Puerto Rico and Mexico)

2. that good Catholics want all the children God sends (a very small minority respond this way, regardless of the country being surveyed)

3. that in Asia, couples wish for many children in order to be cared for in their old age (rarely expressed in any survey)

4. that no [Muslim] male will use a condom (on the contrary, this is a widely used method)

5. that men want more children than their wives (on the contrary, in the countries surveyed, husband and wife tend to be in basic agreement about desired family size, but without knowing it)

6. that illiterate peasants have not the intelligence or the foresight to use modern methods of contraception (let no man underestimate the intelligence of the unlettered!)

Note. From Hill (1968).

Thanks to the confluence of multiple factors—intensified concern about the coming “population bomb,” expanded availability of modern forms of contraception like the birth control pill, focused interest from newly established national family planning programs, and increased funding from new donors like the U.S. Agency for International Development (USAID)—interest in family planning data for programming, evaluation, and advocacy grew throughout the 1960s and 1970s. As data demands surged, especially for monitoring national trends and making international comparisons, the limitations of KAP surveys became more apparent. Calls grew louder for standardized, rigorous methodology, sampling, and questionnaire design (Cleland, 1973).

In 1971, the International Statistical Institute, USAID, and the United Nations Fund for Population Activities (UNFPA) joined together around a common area of interest, the establishment of the World Fertility Survey (WFS). Funded from 1972 to 1984, the WFS provided technical assistance to 64 participating countries in their implementation of a standardized, population-based household survey designed to collect cross-nationally comparable data on population and family planning. Surveys generally included ever-married women ages 15–49, and samples ranged from 3,000 to 10,000 women per country (International Statistical Institute, 1985). Collected survey data allowed for the measurement of total fertility, child mortality, contraceptive prevalence, fertility and contraceptive intentions, and more.

Meanwhile, in the mid-1970s, USAID brought another population-based household survey to the fore, the Contraceptive Prevalence Survey (CPS). Similar to the WFS, the CPS was designed to collect information on fertility, contraceptive use, and attitudes toward fertility and family planning. Unlike WFS, however, the primary goal of the CPS was to produce information for the management and evaluation of family planning programs using a much shorter questionnaire with the aim of producing results “soon after interviewing is complete, when they can have the most impact on program operations” (Anderson & Cleland, 1984). Though smaller in scope than the WFS, the CPS generally included all women of reproductive age regardless of marital status. The CPS was implemented from 1977 to 1985, and the CPS project (also known as the Family Planning/Maternal Child Health Surveys or Reproductive Health Surveys) was implemented from 1975 to 2011 (Morris, 2000).

With the end of the WFS in 1984, USAID launched the Demographic and Health Surveys Program (DHS) designed to provide technical assistance to partner countries to conduct high-quality, nationally representative health surveys that collected cross-nationally comparable data. The DHS Program took lessons learned from the WFS and CPS experiences to develop a robust survey program, including a rigorous methodology and sampling design, a standardized and tested core questionnaire, and a mandate to work with local institutions to strengthen local capacity. To date, the DHS Program has provided technical assistance to more than 400 surveys in 90 countries, making it the world’s largest and longest enduring national household survey program.

Like the CPS, the DHS Program generally provides data for all women of reproductive age. Today, nearly all surveys also include male respondents. In the early years of the DHS Program, the sample size of each survey was typically about 5,000 households. As a result of increasing demand for subnational-level estimates, the sample size of most DHS Program surveys has increased. For example, standard DHS surveys implemented between 2007 and 2011 had a median sample size of 11,000 households, while those implemented between 2017 and 2021 had a median sample size of 15,000 households.1 The decades have brought numerous other changes to the DHS Program in terms of survey scope and complexity (Short Fabic et al., 2012; Figure 1). New technologies have also changed survey implementation. By 2005, the DHS Program was experimenting with computer-assisted personal interviewing (CAPI) in place of data entry from paper questionnaires. Nearly all DHS over the past decade have been conducted via tablet (i.e., CAPI), and this trend away from paper-based implementation is mirrored in other household and facility-based surveys.

Figure 1. DHS adaptations over time. Blue font represents country-specific adaptation and black font represents DHS Program-wide adaptation. Timeline was created by the authors using information from the DHS Program’s survey characteristics search (ICF, n.d.).

Ten years after the establishment of the DHS Program, the United Nations established its own population-based household survey platform, UNICEF’s Multiple Indicator Cluster Surveys (MICS), designed to respond to the World Summit for Children’s call to “measure progress towards an internationally agreed set of mid-decade goals” (UNICEF, n.d.). The first round of MICS was conducted around 1995. By 2015, more than 280 MICS had been conducted in approximately 100 countries. The MICS have a number of similarities with the DHS but differentiate themselves by their focus on children. For example, MICS include questions designed to measure early childhood development, learning, and child labor. Unlike DHS, MICS have not measured adult mortality or collected health-related biomarkers.

More recently, in the mid-2010s, the President’s Emergency Plan for AIDS Relief (PEPFAR) and the Bill and Melinda Gates Foundation (BMGF) established their own respective household survey programs. PEPFAR’s Population-Based HIV Impact Assessments (PHIA) focus on HIV behaviors and outcomes, including HIV incidence, in 15 PEPFAR-supported countries (ICAP, n.d.). Meanwhile, BMGF’s Performance Monitoring for Action (PMA) surveys (formerly known as Performance Monitoring and Accountability 2020 surveys) initially focused on family planning attitudes and practices in 11 countries in the Global South that had made commitments to the Family Planning 2020 agenda (Johns Hopkins University, n.d.-a). Akin to the CPS of years past, PMA prioritizes collecting a limited set of key family planning data meant to quickly provide information to drive programming. Such data are also intended for monitoring progress toward achieving national goals set through the Family Planning 2020 (FP2020) initiative.

These four platforms—DHS, MICS, PHIA, and PMA—represent the current major internationally comparable SRH-focused household surveys in low- and middle-income countries. Among these platforms, DHS collects the widest range of data on population, health, and nutrition, followed by MICS. PHIA collects the most HIV-related data. And PMA collects family planning data most frequently.

Of course, many countries have their own national SRH-focused surveys that are not designed to be internationally comparable (United Nations Department of Economic and Social Affairs Population Division, 2021). For example, China’s One-per-Thousand Population Fertility Sample Survey of 1982 long held the record for being the world’s largest sample survey (State Family Planning Commission, n.d.) and set the stage for many SRH surveys China subsequently implemented. Additionally, in the 1990s, the League of Arab States led the Pan Arab Project for Child Development (PAPCHILD) survey program to collect information on reproductive and child health in nine countries in the Middle East and North Africa (Anonymous, 1994a, 1994b, 1995; Global Health Data Exchange, n.d.).

Myriad SRH-related research and evaluation surveys are underway around the world, including the Global Early Adolescent Study, which “seeks to better understand how gender socialization in early adolescence occurs around the world, and how it shapes health and wellness for individuals and their communities” (Johns Hopkins University and World Health Organization, n.d.). There are also several large-scale, nationally representative school-based and facility-based surveys that measure SRH. For example, the World Health Organization and U.S. Centers for Disease Control and Prevention’s Global School-Based Student Health Survey (GSHS) collects data from school-based adolescents ages 13–17 on a range of issues, including sexual behaviors that contribute to HIV, other sexually transmitted infections, and unintended pregnancy (World Health Organization and U.S. Centers for Disease Control and Prevention, n.d.). Begun in 2003 and implemented in more than 100 countries, the GSHS builds from lessons and methods developed in the United States through decades of adolescent school-based research studies (Brener et al., 2013). When it comes to facility-based SRH surveys, the DHS Program’s Service Provision Assessment (SPA) and World Health Organization’s Service Availability and Readiness Assessment (SARA), which evolved in 2021 to become the Harmonized Health Facility Assessment (HHFA), are the two most widely used standardized surveys measuring indicators related to SRH service delivery (World Health Organization, n.d.). For ease of reference, this article refers to the HHFA as its predecessor, SARA.

The SPA began in 1997 and the SARA began in 2010, having evolved from the Service Availability Mapping tool that WHO developed in 2004. The SPA is a nationally representative survey of formal health facilities that provides a comprehensive, cross-sectional overview of the availability of services, the capacity to provide quality services, and the quality of services and care provided to clients. Meanwhile, the SARA provides data on availability of services and capacity to provide quality services; its questions are harmonized with a subset of the SPA’s. Both SPA and SARA evolved from the Population Council’s Situation Analysis (SA) studies, which began in 1989 with a focus on family planning and reproductive health services. SAs were motivated, at least in part, by DHS findings indicating that supply-side weaknesses were important drivers of low contraceptive prevalence (Lindelow & Wagstaff, 2013). Building on ongoing supply-side questions, USAID introduced SPAs with the aim of expanding data collection to explore a broader set of service delivery issues and health areas, including and especially child health. Both SPAs and SARAs are implemented in a variety of low- and middle-income countries, though SARAs are far more widespread. The PMA surveys also include a facility component in which health facilities, pharmacies, and retail outlets providing contraception in the sample catchment areas are surveyed in order to understand service availability and client satisfaction.

During the COVID-19 pandemic, and specifically developed in the context of the pandemic, the World Health Organization began implementing health facility assessments through telephone interviews in order to understand continuity of health services, including SRH services (World Health Organization, 2020). Such surveys are much faster and far less expensive than in-person surveys that require a facility visit. However, they are also far less comprehensive and more prone to biases. At the household level during the pandemic, the PMA surveys and MICS began implementing computer-assisted telephone interview (CATI) surveys (i.e., telephone interviews with data entry using applications on tablets or phones). PMA’s surveys span four focal countries to understand COVID-related knowledge, attitudes, and practices, as well as the impact of the pandemic on contraceptive access and use (PMA, 2021). The new MICS Pulse surveys are implemented in three countries and assess a range of issues, from children’s nutritional status to COVID risk perception (UNICEF, n.d.-a). While CATI surveys are not new—some KAP surveys as early as the 1950s were phone-based—their widespread application in response to the pandemic may offer another avenue of accepted and supported SRH data collection well beyond the COVID crisis.

Current Methodological Strengths and Limitations

As the history of sexual and reproductive health (SRH) surveys illustrates, during the 1950s and 1960s, fertility and family planning knowledge gaps drove the development and implementation of health surveys in high- and low-income countries alike. Over the intervening years, surveys in low- and middle-income countries had to take on the added mandate of providing fertility, mortality, and other indicator estimates due to ongoing weaknesses in civil and vital registration systems and inadequate routine health information management systems. This reliance on household surveys for key fertility and mortality estimates, coupled with increasing demands for more health-related data, has kept many surveys especially long and complex (Table 1). Today, key household surveys collect data to measure a wide range of SRH indicators—from contraceptive prevalence to HIV prevalence, from cervical cancer screening rates to skilled birth delivery rates, from age at menarche to age at first sex, and more. All four major surveys make their data publicly accessible in standardized form ready for analysis in common statistical software.

Table 1. Characteristics of Household Surveys That Cover SRH Topics

Respondents
  • DHS: women (15–49); men (15–59)
  • MICS: women (15–49); men (15–49); mothers of children 0–17
  • PHIA: women (15–64); men (15–64); adolescents (12–14)
  • PMA: women (15–49)

Average sample size
  • DHS: 15,000 households
  • MICS: 6,000 households
  • PHIA: 11,500 households
  • PMA: 6,000 households2

Frequency
  • DHS: 4–5 years
  • MICS: 3–4 years
  • PHIA: unknown; first round just completed
  • PMA: yearly

Sampling methodology
  • DHS: two-stage sample design: (1) selection of census areas proportional to population size; (2) selection of a random sample of households within the enumeration area (de facto population: slept in the household the night before)
  • MICS: three-stage sample design: (1) selection of census areas proportional to population size; (2) selection of a random sample of households within the enumeration area; (3) selection of a random individual within the selected household (de jure population)
  • PHIA: two-stage sample design, as for DHS (de facto population)
  • PMA: two-stage sample design, as for DHS (de facto population)

Questionnaire sections
  • DHS: five questionnaires (household, women’s, men’s, biomarkers, fieldworker’s). Women’s questionnaire: respondent’s background; reproduction; contraception; pregnancy and postnatal care; child immunization; child health and nutrition; marriage and sexual activity; fertility preferences; husband’s background and women’s work; HIV/AIDS; other health issues. Optional modules: accident and injury; adult and maternal mortality; chronic disease; disability; domestic violence; female genital cutting; fistula; food insecurity experience; newborn care; out-of-pocket health expenditures; supplemental module on maternal health. Source: The DHS Program (2020).
  • MICS: six questionnaires (household, women’s, men’s, children <5, children age 5–17). Women’s questionnaire: respondent’s background; mass media and ICT; fertility and birth history; desire for last birth; maternal and newborn health; postnatal health checks; contraception; unmet need for family planning; female genital mutilation; attitudes toward domestic violence; victimization; marriage/union; adult functioning; sexual behavior; HIV/AIDS; maternal mortality; tobacco and alcohol use; life satisfaction. Optional modules: water quality testing; vaccination records. Source: UNICEF (2020).
  • PHIA: three questionnaires (household, adult, adolescent). Adult questionnaire: respondent’s background; marriage; reproduction; children; male circumcision; female circumcision and traditional body modification; sexual activity; HIV testing; attitudes toward HIV disclosure; HIV status, care, and treatment; tuberculosis and other health issues; gender norms; alcohol use/nonprescription drug use; violence. Source: ICAP (n.d.).
  • PMA: two questionnaires (household, female). Female questionnaire: respondent’s background; reproduction, pregnancy, and fertility preferences; contraception; sexual activity; women and girls’ empowerment. Source: Zimmerman et al. (2017).

Biomarkers
  • DHS: yes
  • MICS: no
  • PHIA: yes
  • PMA: not in the core survey, but collected in select, independent special topic surveys

GIS data
  • DHS: yes
  • MICS: yes
  • PHIA: yes
  • PMA: yes

Note. PMA surveys also include a facility component, not described in this table.

To ensure that the collected data are nationally and subnationally representative, all major household survey programs use a stratified multistage sampling strategy. Key sample size determinations are made based on data needs, including desired precision of estimates and administrative levels of interest, which determine the number of sampling strata. Thereafter, census enumeration areas are randomly selected proportional to population size. The survey team (generally the National Statistics Institute) conducts a household mapping and listing exercise to identify all residential structures in the selected enumeration areas. After the mapping and listing exercise, households within the selected enumeration areas are randomly selected for inclusion. Survey implementation begins and interview teams visit selected households. After interviewers obtain informed consent, they administer the household questionnaire to identify all household members eligible for the individual questionnaire and to collect key household background data.
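To make the two-stage logic concrete, the following is a minimal Python sketch of systematic probability-proportional-to-size (PPS) cluster selection followed by a fixed take of households per cluster, along with the resulting design weight. All names and numbers are illustrative, not drawn from any actual survey.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical sampling frame: census enumeration areas (EAs) with household counts
frame = [{"ea": f"EA-{i:03d}", "hh": random.randint(50, 400)} for i in range(200)]
TOTAL = sum(ea["hh"] for ea in frame)

def pps_systematic(frame, n_clusters):
    """Systematic PPS selection: walk a cumulative size scale with a fixed step."""
    step = TOTAL / n_clusters
    start = random.uniform(0, step)
    points = [start + k * step for k in range(n_clusters)]
    selected, cum, i = [], 0, 0
    for ea in frame:
        cum += ea["hh"]
        while i < len(points) and points[i] <= cum:
            selected.append(ea)
            i += 1
    return selected

N_CLUSTERS, TAKE = 25, 20  # 25 EAs; 20 households sampled per EA after listing
for ea in pps_systematic(frame, N_CLUSTERS):
    p_stage1 = N_CLUSTERS * ea["hh"] / TOTAL   # P(EA selected) under PPS
    p_stage2 = min(TAKE, ea["hh"]) / ea["hh"]  # P(household selected | EA)
    weight = 1 / (p_stage1 * p_stage2)         # household design weight
    # With a fixed take, PPS is approximately self-weighting: weights are constant
    print(ea["ea"], round(weight, 1))
```

In an actual survey the frame comes from the most recent census, the within-cluster take follows the fresh household listing described above, and weights are further adjusted for nonresponse.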

In order to collect data that are comparable across countries and over time, standard core questionnaires are used alongside survey manuals that describe the rationale for inclusion of certain questions or sections. The survey manuals generally cover topics such as sampling, field staff training, interview procedures, and quality control, ensuring that similar questionnaire and survey procedures are followed in each country. Survey questionnaires can be tailored to meet local data needs by adding or deleting questions based on relevance to a particular country context. To accommodate requests for information on special topics while retaining some level of comparability, the Demographic and Health Surveys (DHS) Program and, to a more limited degree, the Multiple Indicator Cluster Surveys (MICS) offer optional questionnaire modules on a range of topics. Additionally, to meet the need for prevalence data on myriad health conditions, including HIV, anemia, malaria, and more, DHS also frequently makes use of biomarker testing. Every Population-Based HIV Impact Assessment (PHIA) also includes complex HIV-related biomarker testing. While these approaches have enabled the large survey platforms to meet data needs for multiple program areas, they have the potential disadvantage of producing lengthy questionnaires and complex biomarker collection and testing approaches that are daunting for respondents and interviewers alike. To date, there has been no solid evidence that respondent or interviewer fatigue negatively affects data quality; however, as questionnaires lengthen, this possibility is of increasing concern.

To counter this concern, surveys have explored different methodological approaches to reduce respondent and interviewer burden. For example, due to its wide scope, the 2017 Benin DHS included a split-sample design in which half of households were eligible for women’s height and weight measurements, anemia testing, malaria testing, and domestic violence questions, and the other half were eligible for noncommunicable disease questions, blood pressure measurements, and men’s interviews (Institut National de la Statistique et de l’Analyse Économique [INSAE] and ICF, 2019). Another approach DHS has explored is the continuous survey, which both Peru and Senegal have implemented. The continuous DHS is implemented on an ongoing basis; in Senegal, specific data elements are collected at alternating intervals in order to reduce the burden on respondents and interviewers. The approach is also designed to strengthen local institutional capacity and provide more regular and frequent data for program monitoring. Both Peru and Senegal have had major successes in their continuous approaches; however, neither has been successful at constraining survey expansion, including sample size expansion (Becker et al., 2019). Performance Monitoring for Action (PMA) has taken a different approach: Surveys on non-SRH topics (such as nutrition and schistosomiasis) have been conducted between the annual family planning surveys using independently sampled households within the same enumeration areas (Johns Hopkins University, n.d.-b).
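As a small illustration of the split-sample idea, the sketch below randomly allocates listed households in each cluster to one of two questionnaire arms. The module groupings mirror the Benin example, but the assignment mechanics shown here are simplified and hypothetical; in practice, assignment is fixed during sample design, not improvised in the field.

```python
import random

random.seed(7)

# Hypothetical listed households, grouped by enumeration area (EA)
clusters = {f"EA-{c:02d}": [f"HH-{c:02d}-{h:03d}" for h in range(1, 21)]
            for c in range(1, 4)}

ARM_A = "height/weight, anemia and malaria testing, domestic violence module"
ARM_B = "noncommunicable disease questions, blood pressure, men's interviews"

assignment = {}
for ea, households in clusters.items():
    shuffled = random.sample(households, len(households))
    half = len(shuffled) // 2
    # Balance arms within each cluster so both halves remain representative
    assignment.update({hh: ARM_A for hh in shuffled[:half]})
    assignment.update({hh: ARM_B for hh in shuffled[half:]})

print(assignment["HH-01-001"])
```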

These experiences, coupled with the historic experience of the Contraceptive Prevalence Survey (CPS), which started relatively lean in scope and ended quite sizable, highlight the reality that when stakeholders have the opportunity to collect data, they use that opportunity to collect as much as possible, in the process expanding both survey content and sample size. This reality creates major challenges for both budgeting and implementation. Increasing survey costs and complexity threaten survey implementation and quality.

In the context of SRH data, the expansion of survey size and scope has advantages and disadvantages. First, such expansion allows for more complex and nuanced analyses, including analyses that highlight the interconnected nature of SRH with other elements of individual, community, and national-level health, education, environment, and wealth. Second, sample size growth can allow for more granular estimates at subregional levels and can improve the precision both of indicators estimated in a survey and of model-based estimates using survey data. For example, the DHS Program offers spatial models for a range of key indicators, allowing users to identify geographic variation in SRH behaviors and outcomes and geographies that would likely benefit from increased programmatic focus (Burgert-Brucker et al., 2018). On the flip side, expansion of survey size and scope can limit data collection opportunities. First, the increased cost associated with increased survey complexity means that funding negotiations in low- and middle-income countries between partner governments and donors can be protracted, leading to survey delays. Second, the expansion in survey scope typically benefits the broader health space rather than SRH data collection directly; that is, as questionnaire space is increasingly occupied by non-SRH topics, the “real estate” for SRH questions is either unchanged or diminished. For example, the DHS Program’s core questionnaire has fewer family planning-specific questions today than it did in the mid-1990s (Fabic & Choi, 2015).
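To put numbers on the precision advantage noted above, a back-of-the-envelope calculation with the standard error formula for a proportion, inflated by a design effect to reflect clustering, is sketched below; the prevalence, design effect, and sample sizes are all illustrative.

```python
import math

def ci_half_width(p, n, deff=2.0, z=1.96):
    """Approximate 95% CI half-width for a proportion under a clustered design."""
    return z * math.sqrt(deff * p * (1 - p) / n)

p = 0.30  # illustrative prevalence, e.g., contraceptive use of 30%
for n in (1_000, 5_000, 15_000):
    print(f"n = {n:>6,} women: 30% ± {100 * ci_half_width(p, n):.1f} points")
```

The same arithmetic explains the pull toward larger samples: roughly ±1 percentage point nationally at n = 15,000, but once that sample is split across, say, 15 subnational strata, each stratum is back to roughly ±4 points.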

Cross-sectional household surveys are rich sources of data, but as with other surveys, they also have numerous data quality issues. As exemplified in Box 2, selection bias, recall bias, social desirability bias, interviewer bias, and misclassification bias and error can represent major concerns. With most SRH surveys in low- and middle-income countries having very high response rates, nonresponse bias is less of a concern, though it too can negatively impact survey representativeness. These biases can be reduced through a variety of efforts, including effective survey and questionnaire design, extensive interviewer training and supervision, integrated use of technologies like computer-assisted self-interviewing for sensitive questions, and analytic techniques to incorporate various effects into research study design.

Box 2. Examples of Common Types of Bias in SRH Household Survey

Selection bias: DHS collects many prenatal and postnatal care indicators only for last-born children. Because fewer births of non-last-born children are delivered in a health facility, biased conclusions about pre- and postnatal care may result from this selection bias (Rutstein, 2014).

Recall bias: SRH household surveys rely on self-reported events, some occurring many years prior. Recent research indicates that for some maternal and newborn health interventions, the longer the recall period, the less likely women are to report maternal complications (Zimmerman et al., 2019). In addition, for some types of interventions, such as those that occurred during the intrapartum or immediate postnatal period (within an hour of birth), many women are simply unable to accurately recall the event, regardless of the duration of the recall period (McCarthy et al., 2020).

Social desirability bias: Research indicates that women ages 15–19 are less likely to report marriages and first births before the age of 15 as compared with women from the same birth cohort asked 5 years later at ages 20–24 (Neal & Hosegood, 2015).

Interviewer bias: The background and demeanor of the person who is conducting the interview can impact an individual’s reports of various behaviors, attitudes, and experiences, especially those that carry social stigma. Researchers analyzed DHS data to examine the interviewer’s effect on women’s reports of abortion, finding that interviewer effects accounted for “between 0.2% and 50% of the variance in the odds of a woman reporting ever having an abortion, after controlling for women’s demographic characteristics” (Leone et al., 2021).

Misclassification bias/error: This bias may be intentional or unintentional (i.e., error). An example of intentional misclassification occurs when an interviewer or interviewee chooses to backdate a child’s birthdate so that questions about young children’s health are omitted, shortening the interview. Such bias may also be unintentional, as when a birthdate is unknown and a respondent’s age is displaced (Pullum, 2006).

Non-response bias: While major SRH surveys in low- and middle-income countries have very high response rates at the household and individual levels, non-response bias may influence representativeness, especially with respect to certain characteristics. For example, a study of the impact of refusals on HIV prevalence estimates in Malawi found that “variation in the estimated HIV prevalence across different estimators is due largely to those who already know their HIV test results” (Adegboye et al., 2020).
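Several of the biases in Box 2 are addressed at the analysis stage through weighting. One common device for nonresponse is the weighting-class adjustment sketched below: respondents' design weights are inflated by the inverse of the response rate within their class. The classes, weights, and data are hypothetical.

```python
from collections import defaultdict

# Hypothetical sampled individuals: (adjustment class, responded?, design weight)
sample = [
    ("urban", True, 1.2), ("urban", True, 1.2), ("urban", False, 1.2), ("urban", True, 1.2),
    ("rural", True, 2.5), ("rural", True, 2.5), ("rural", False, 2.5), ("rural", False, 2.5),
]

w_sampled = defaultdict(float)    # total weight fielded, by class
w_responded = defaultdict(float)  # total weight of respondents, by class
for cls, responded, w in sample:
    w_sampled[cls] += w
    if responded:
        w_responded[cls] += w

# Inflate respondents' weights so each class keeps its fielded weight total
adjustment = {cls: w_sampled[cls] / w_responded[cls] for cls in w_sampled}
final = [(cls, w * adjustment[cls]) for cls, responded, w in sample if responded]

print(adjustment)  # classes with lower response carry larger adjustments
```

This correction removes bias only to the extent that respondents and nonrespondents are alike within a class, which is exactly why the Malawi HIV example in Box 2 is troubling: refusal there was related to the outcome itself.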

Turning to facility-based surveys, the Service Provision Assessment (SPA) consists of four survey instruments, which are adapted to local context—the facility inventory, provider questionnaire, observation protocol, and exit interview questionnaire. The facility inventory includes the core World Health Organization’s Service Availability and Readiness Assessment (SARA) questionnaire and is used to assess facilities’ capacity to provide basic services. Meanwhile, the provider interview questionnaire is used to assess human resources, the observation protocol is used to assess provider adherence to provision of care standards, and the exit interview questionnaire is used to assess client satisfaction and experience. The four instruments together provide comprehensive data on service availability and quality of care.

The sampling design of facility-based surveys generally depends on three factors: the level and type of disaggregation required, the type of health services to be assessed, and the level of precision desired. One of the main challenges to sample design is the difficulty of obtaining a good-quality sampling frame: an accurate accounting of the number, location, and type of health facilities in a country. Such information is required to ensure a representative sample. Many countries, however, do not keep this information accessible or up to date. Recognizing this limitation, a number of low- and middle-income countries over the past decade have increased investment in developing and maintaining master facility lists. Such lists generally include all public healthcare facilities in a country, and some also include private facilities (World Health Organization, 2017).
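Where a reasonably complete master facility list exists, drawing the kind of stratified facility sample described here is straightforward. The sketch below stratifies by region and facility type and includes hospitals with certainty, a common design choice; the list, strata, and takes are hypothetical.

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical master facility list entries: (facility_id, region, facility type)
mfl = [(f"F-{i:04d}", random.choice(["North", "South", "East"]),
        random.choice(["hospital", "health_center", "clinic"])) for i in range(300)]

TAKE_ALL = {"hospital"}  # referral-level facilities often sampled with certainty
PER_STRATUM = 8          # illustrative take for the remaining strata

strata = defaultdict(list)
for fac in mfl:
    strata[(fac[1], fac[2])].append(fac)  # stratify by region x type

sample = []
for (_region, ftype), members in strata.items():
    if ftype in TAKE_ALL:
        sample.extend(members)  # certainty stratum
    else:
        sample.extend(random.sample(members, min(PER_STRATUM, len(members))))

print(len(sample), "facilities sampled")
```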

When it comes to data collection and use, facility surveys offer unique insight into SRH quality of care. While household surveys or routine health management information systems collect information on what services were provided or received, only facility-based surveys collect comprehensive information on the quality of those services. For example, a 2018 analysis of SPA data across 10 low- and middle-income countries offered empirical evidence that at the primary level, quality of care for sick-child care, family planning, and antenatal care was low in all study countries; postnatal family planning counseling was especially poor (Macarayan et al., 2018). Another example can be found in the 2018–2019 Afghanistan SPA (Ministry of Public Health/Afghanistan and ICF, 2019), which provided evidence to better answer a major question raised from the 2015 Afghanistan DHS (Central Statistics Organization (CSO), Ministry of Public Health (MoPH), and ICF, 2017): Despite increased investment, why didn’t maternal mortality improve in recent years?

The Afghanistan SPA showed that only 15% of all first-visit antenatal care clients were assessed for all four key client history factors, only one in four women reported that the provider counseled them on pregnancy risk factors, and only 35% of facilities offering normal delivery services performed all seven emergency obstetric and neonatal care (EmONC) functions (Ministry of Public Health/Afghanistan and ICF, 2019). The SPA also found huge differences in EmONC between public and private facilities, with 80% of public facilities offering comprehensive EmONC compared with only 13% of private facilities. These and other data on service availability and quality allowed program managers and policymakers to identify service provision weaknesses that need to be addressed in order to improve maternal health outcomes in Afghanistan. In essence, the DHS helped to identify the problem (health outcome)—here, high levels of maternal mortality—while the SPA helped to identify the cause (health outputs)—here, limited availability of comprehensive and good-quality maternal healthcare services.

Although quality of care measurement is a major benefit of facility-based surveys, it is also a major stumbling block. First, there is no common definition of quality or global set of key quality indicators. Without such definition and agreement on key indicators, the ability of international SRH-focused facility survey programs to garner sufficient interest and funding for widespread implementation remains more notional than real. Second, survey teams are limited by time and sample size in their ability to adequately capture quality of care through direct observation of key SRH-related events such as labor and delivery; this limits the utility of facility-based surveys for measuring quality of care for SRH cases that present to health facilities relatively infrequently (e.g., management of obstetric complications). To measure provider knowledge and competencies for managing such cases, clinical vignettes have been used (Ministry of Health and Population/Malawi, 2019), but the interpretation of results is challenging as it must take into consideration the “know-do” gap (Mohanan et al., 2015). Third, observation and client exit interviews often offer conflicting estimates. For example, an analysis of data from five SPAs found that data collected through client exit interviews systematically overestimated counseling on contraceptive side effects as compared with direct observation, a discrepancy indicative of courtesy bias (Choi, 2018). Similarly, an analysis of self-reported client satisfaction comparing estimates derived from facility-based surveys to those derived from household surveys also found systematically higher levels of satisfaction reported at the facility level, indicating potential courtesy bias in facility-based surveys (Glick, 2007). These findings highlight the importance of robust questionnaire design and interviewer training to minimize courtesy bias, especially since client exit interview data remain invaluable for understanding and improving client-centered care (Giessler et al., 2021; Holt et al., 2019; Srivastava et al., 2017).

Finally, although most health facility surveys’ primary purpose is monitoring service readiness and quality of care, there is strong interest in using population-based and health facility surveys jointly in order to understand the impact of the service environment on service use and even health outcomes at the population level. However, methodological limitations for linking two sample surveys present another set of analytic challenges (Skiles et al., 2013), and even when the linkage is possible, as in PMA (Choi et al., 2019), analytical challenges remain for a variety of reasons, including temporality of data collection and potentially flawed assumptions about geographic proximity and service utilization (Peters et al., 2020). Taken together, these challenges limit the ability to draw conclusions about the relationship between service availability, service quality, and service utilization.
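A bare-bones version of the proximity linkage such analyses attempt is sketched below: each survey cluster is assigned its nearest facility by great-circle distance. The coordinates are hypothetical, and the sketch embeds exactly the assumptions the paragraph warns about: nearest does not mean used, and published cluster coordinates are typically displaced for confidentiality.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

clusters = [("EA-001", 0.315, 32.581), ("EA-002", 0.402, 32.660)]  # hypothetical
facilities = [("F-01", 0.320, 32.575), ("F-02", 0.390, 32.700)]    # hypothetical

for ea, lat, lon in clusters:
    # Naive linkage rule: assign the geographically nearest facility
    fid, flat, flon = min(facilities, key=lambda f: haversine_km(lat, lon, f[1], f[2]))
    print(f"{ea} -> {fid} ({haversine_km(lat, lon, flat, flon):.1f} km)")
```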

These challenges also represent ongoing opportunities for methodological and theoretical grounding of key public health and healthcare measurement, which can and should drive survey refinement and, where needed, survey development.

Challenges, Opportunities, and Directions for the Future

As this article has highlighted, sexual and reproductive health (SRH) surveys in low- and middle-income countries have evolved over the years to meet ongoing, new, and emerging data needs. SRH surveys will, no doubt, continue to evolve into the future. Despite the consistency of change, a few thorny issues persist, including perpetual measurement and methodological concerns. First things first, however, is the thorn of history.

Thorny History

The major survey programs have historically been led by donors and multilateral organizations based in the Global North. As a result, survey content and implementation have been closely connected with donor priorities. The iterative evolution of SRH surveys—with surveys in the 2020s having been built on the methods and questionnaires of the 1970s and before, plus the continued technical direction and financial support from researchers and donors in the Global North—has meant that some SRH areas, such as family planning and maternal health, are well represented in survey questionnaires. Meanwhile, other SRH areas, such as sexual function, infertility, miscarriage, and induced abortion, are not. It is difficult to parse whether data collection choices are the result of undue influence of the past and of the Global North both past and present, or whether they represent present-day priorities shared by both the Global North and Global South.

Regardless of the answer to that question, as efforts to decolonize global health gain traction, it is increasingly imperative that major SRH survey programs implemented in the Global South divest authority from the Global North. This ongoing restructuring of power dynamics will have long-term benefits and near-term challenges. For example, by removing donors and other Global North actors from survey content decision making, surveys will likely be better positioned to meet local information needs in support of the populations being surveyed. On the flip side, focusing on local data needs will likely mean that surveys will lose some degree of international and temporal comparability. Additionally, if low- and middle-income countries refuse to share survey methods or do not allow their data to be openly accessible, local and global stakeholders alike will lose the opportunity to assess data quality, as well as the ability to analyze the data collected to answer key questions and solve public health problems. Ongoing engagement will be required from multiple corners to advocate for data openness and transparency around the world, including but not limited to low- and middle-income countries. Finally, the sheer cost of implementing a comprehensive population-based household survey of good quality could impede local financial ownership of SRH survey programs, prolonging reliance on the Global North. Efforts are required to contain survey costs—largely driven by survey scope and sample size—which will mean reducing the amount of data collected and collecting it from fewer households.

Thorny Measurement

Turning to other thorny issues, if history is prelude, then surveys will continue to face ongoing measurement challenges, including and beyond the “quality of care” issues described earlier. In the family planning field, for example, the great aspiration, the “holy grail” of measurement, is “demand for contraception.” In other words, how many individuals (typically women of reproductive age) want to use contraception? How many individuals want to use contraception but are not using it? Where are these individuals concentrated? Relatedly, why are they not using contraception? The proxy indicator constructed from 15 household survey questions, “unmet need for family planning,” begins to get at these issues, but in an indirect way that is only applicable at the aggregate, population level. With individual-level fertility intentions fluctuating over time based on myriad internal and external factors, accurately capturing demand for fertility regulation, let alone demand for contraception, remains a major challenge. Moreover, with many individuals in low- and middle-income countries fatalistic in their fertility intentions and/or aware of only a limited number of contraceptive options, teasing apart demand for a concept—being able to control one’s own fertility—and demand for a product—contraceptives and/or a specific contraceptive method—is even more challenging (Fabic, 2022). Add to that the ill-suited nature of household surveys for answering “why” questions, and opportunities abound for qualitative and mixed-methods research to deeply understand demand for contraception. Opportunities also abound for development of new indicators that aim to better capture key family planning concepts (Speizer et al., 2022). New measures have been put forward, like “alternative contraceptive prevalence” (Fabic & Becker, 2017), “contraceptive autonomy” (Senderowicz, 2020), and “method satisfaction” (Rominski & Stephenson, 2019), although none has cracked the “demand” nut.
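To give a flavor of why "unmet need" is indirect, here is a deliberately oversimplified sketch of the classification logic. The actual survey-based algorithm combines the roughly 15 questions noted above plus many special cases (pregnant and postpartum women classified by the wantedness of the current or last pregnancy, infecundity criteria, and so on); the field names are invented for illustration.

```python
def has_unmet_need(w):
    """Grossly simplified unmet-need classification, for illustration only."""
    if w["using_contraception"]:
        return False  # users are conventionally counted as having met need
    if not (w["married_or_sexually_active"] and w["fecund"]):
        return False  # not considered exposed to the risk of pregnancy
    # Exposed non-users who want to stop or postpone childbearing
    return w["wants_no_more"] or w["wants_to_wait_2yrs"]

woman = {"using_contraception": False, "married_or_sexually_active": True,
         "fecund": True, "wants_no_more": False, "wants_to_wait_2yrs": True}
print(has_unmet_need(woman))  # True: an exposed non-user who wants to wait
```

Note how every input is itself a survey response subject to the biases discussed earlier, and nothing in the logic asks the woman whether she actually wants contraception, which is precisely the indicator's indirectness.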

For maternal health, a key aspiration is measurement of maternal mortality. All mortality data are best collected through complete vital registration systems, but many low- and middle-income countries do not have such systems and therefore rely on estimates collected through household surveys and censuses. Maternal mortality is a relatively rare event, which makes it challenging to adequately capture through surveys. Indicator estimates generally have wide confidence intervals and can vary dramatically based on questionnaire structure, interviewer training and supervision, sampling frame, and women’s recall, among other factors (Ahmed et al., 2014). Over time, as more maternal mortality estimates accrue from various sources, including multiple household surveys, understanding trends in a given country becomes even more challenging. One solution is to invest in vital registration systems; however, this type of investment goes well beyond public health. Civil and vital registration systems are systems of good governance. To make such systems effective, civilians must have faith in their local government, which means that multisectoral approaches to expand and improve civil and vital registration systems are required (Suthar et al., 2019).
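A quick calculation shows why rarity translates into wide intervals. The mortality level, number of births, and the simple binomial approximation below are all illustrative simplifications; real survey estimates rely on sibling histories and carry additional design effects.

```python
import math

deaths_per_100k = 300   # illustrative maternal mortality ratio
births = 20_000         # illustrative live births captured by a survey

p = deaths_per_100k / 100_000       # ~60 maternal deaths expected in sample
se = math.sqrt(p * (1 - p) / births)
half = 1.96 * se * 100_000
print(f"300 per 100,000, 95% CI roughly {300 - half:.0f} to {300 + half:.0f}")
```

Even with 20,000 births observed, the interval spans roughly ±25% of the estimate, which is why successive survey-based estimates in the same country can appear to disagree without any real change in mortality.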

Within SRH more broadly, direct measurement of induced abortion remains yet another thorny challenge. Induced abortion is consistently underreported in household surveys, including in contexts where abortion is legal (Jones & Kost, 2007; Leone et al., 2021). As a result, indirect estimation methods are oftentimes employed (Sedgh et al., 2012). Such estimates, however, have their own limitations; oftentimes they are nonrepresentative and do not allow for more complex demographic analyses. Increased focus on mitigating interviewer effects through interviewer selection, training, and supervision could improve self-reporting (Leone et al., 2021), as could increased use of audio computer-assisted self-interview (ACASI) (Lindberg & Scott, 2018) and cognitive testing and refinement of related survey questions. In this vein, the Demographic and Health Surveys (DHS) Program introduced the Fieldworker Questionnaire in 2015 to begin to better assess interviewer effects (ICF International, n.d.-a), and in the late 2010s, Performance Monitoring for Action (PMA) began testing alternative abortion-related questions (Bell et al., 2019, 2020).

Thorny Respondent Eligibility Parameters

Another measurement and methodological issue is respondent age. Interest continues to grow in understanding SRH-related knowledge, attitudes, and behaviors of young adolescents, ages 10–14 (Igras et al., 2014). However, when it comes to modifying the four major SRH household surveys to better address adolescence, it remains unclear whether advocates and program managers want to gather data directly from 10- to 14-year-olds or whether they are comfortable with data collected from slightly older adolescents (ages 15–19) about experiences in early adolescence. Adding to the complications is that aside from a handful of indicators, such as child marriage and adolescent birth rate, there are very few commonly agreed upon SRH indicators for early adolescence. Nevertheless, the call for more data has resulted in recent household survey changes. For example, in response to data needs for the Sustainable Development Goals Indicator 3.7.2, adolescent birth rate, the DHS Program reports age-specific adolescent fertility for ages 10–14 using methods honed in the late 2010s (Pullum et al., 2018). Looking to the future, more widespread use of school-based survey platforms and data, such as the Global School-Based Student Health Survey (GSHS), will be imperative for better understanding early adolescence. Study platforms, like the Global Early Adolescent Study, can continue to develop and test key SRH indicators applicable to early adolescence.

On the other end of the spectrum, interest in understanding the SRH-related experiences of older individuals, especially as related to gender-based violence, has been growing (HelpAge International, n.d.). Calls to increase household survey age limits have also become louder as the need to monitor noncommunicable diseases in low- and middle-income countries has grown. Over time, household surveys will likely continue expanding their age ranges to include older respondents, as the Population-Based HIV Impact Assessment (PHIA) has already done and as the DHS Program has already incorporated on a country-by-country basis. This expansion will have cost implications and will increase survey complexity. It will also offer opportunities to expand knowledge of SRH across the life course, deepening the understanding of issues like menopause, age-related sexual dysfunction, prevalence of reproductive tract infections in older age, and more. Even with such expansion, disaggregation by older age groups will be limited due to the relatively small numbers of older members in sampled households.

Thorny Methods

The cross-sectional nature of surveys is another persistent thorn. Most surveys offer only a snapshot in time, with the primary purpose of monitoring key indicators using a representative sample. As such, most analyses of survey data can only show correlation and association rather than causation. To elaborate, because the outcome and exposure variables are measured at the same time, it is difficult to use surveys to establish causal relationships. For analyses teasing apart bidirectional relationships, such as the relationship between a woman’s education level and her use of contraception, cross-sectional surveys are inadequate. The DHS Program has mitigated this challenge by including retrospective data collection on birth and pregnancy histories as well as contraceptive histories. Another solution to this challenge is to move to longitudinal data collection, where survey respondents are followed over time. Demographic surveillance sites offer these types of data, as do research-based cohort studies. In 2019, PMA surveys moved into this space, setting up a panel design to enable “improved measurement of changing contraceptive use dynamics and causal factors” (Johns Hopkins University, n.d.-b); a sketch of the kind of transition analysis panel data enable follows this paragraph. In the late 2010s, the DHS Program also began conducting mixed methods research to follow up with survey respondents with qualitative interviews on fertility, contraception, and a host of other issues (Staveteig, 2016, 2017; Staveteig et al., 2018). As previously mentioned, Multiple Indicator Cluster Surveys (MICS) is using phone numbers previously collected from household survey respondents to follow up with a small set of COVID-related questions via telephone surveys (UNICEF, n.d.-a). Looking to the future, surveys may more often adopt this approach, especially as cell phone ownership becomes more ubiquitous among both men and women. The act of observation can, however, influence the observed (the Hawthorne, or observer, effect), which may limit the representativeness of any given cohort over time. Additionally, if surveys demand more of participants, the ethical calculus may change such that it may be necessary to offer some form of participant remuneration.
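As a hint of what panel data add, the sketch below tabulates contraceptive-status transitions between two waves for the same women, exactly the kind of dynamics a single cross-sectional survey cannot observe directly. The women and statuses are hypothetical.

```python
from collections import Counter

# Hypothetical two-wave panel: woman_id -> contraceptive status at each wave
wave1 = {"W001": "injectable", "W002": "none", "W003": "pill", "W004": "injectable"}
wave2 = {"W001": "injectable", "W002": "implant", "W003": "none", "W004": "none"}

transitions = Counter((wave1[w], wave2[w]) for w in wave1)
for (before, after), n in sorted(transitions.items()):
    if before == after:
        label = "continued"
    elif before == "none":
        label = "adopted"
    elif after == "none":
        label = "discontinued"
    else:
        label = "switched"
    print(f"{before:>10} -> {after:<10} n={n} ({label})")
```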

On the subject of remuneration, survey participant financial incentives may become a factor in SRH household survey design if the issue plaguing SRH surveys in high-income countries—decreasing response rates (Guo et al., 2016; Meyer et al., 2015)—takes hold in low- and middle-income countries. While response rates across all major household surveys in low- and middle-income countries remain high, will they continue to do so as incomes rise? If not, surveys will need to mitigate the challenge through modifications to survey planning and implementation, enhanced interviewer training and supervision, robust analytic techniques, and additional or new participant incentives.

Additional Opportunities for Innovation

Other changes will also create the need for innovation. For example, recognizing the need for more data, including data at lower administrative units, household surveys must adapt to the following: (a) improve population representativeness in the context of rapid and temporary migration as well as rapid urbanization and slum growth; (b) minimize respondent burden; (c) expand opportunities for effective modeling to provide data at lower administrative levels, like counties; and (d) minimize the financial cost of expanding surveys (USAID, 2018). Technological advances will also create new opportunities. For example, new biomarker technologies could allow for rapid testing of a variety of SRH issues beyond HIV and pregnancy status, including, for example, biomarkers related to maternal health and preterm birth (World Health Organization, 2019). Age measurement may be improved using machine learning image analysis of pictures taken during interviews (Helleringer et al., 2019). Electronic data collection tools, like phones and tablets, coupled with expanded 4G and 5G network access, will allow for enhanced data quality monitoring and reporting, providing opportunities for more real-time monitoring as well as opportunities to use timestamps to identify questionnaire areas that are challenging for interviewers and respondents (Kreuter et al., 2010).
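For example, per-question durations can be derived from CAPI timestamps along the following lines; the paradata layout and field names are hypothetical, since each platform stores paradata differently.

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical CAPI paradata: (interview_id, question_id, time question was shown)
paradata = [
    ("INT-1", "Q101", "2021-05-03T09:00:05"), ("INT-1", "Q102", "2021-05-03T09:00:40"),
    ("INT-1", "Q103", "2021-05-03T09:03:55"), ("INT-2", "Q101", "2021-05-03T10:12:00"),
    ("INT-2", "Q102", "2021-05-03T10:12:30"), ("INT-2", "Q103", "2021-05-03T10:15:10"),
]

events = defaultdict(list)
for iid, qid, ts in paradata:
    events[iid].append((datetime.fromisoformat(ts), qid))

durations = defaultdict(list)  # seconds on each question (time until next screen)
for seq in events.values():
    seq.sort()
    for (t0, qid), (t1, _) in zip(seq, seq[1:]):
        durations[qid].append((t1 - t0).total_seconds())

for qid in sorted(durations):  # unusually long medians flag questions to review
    print(qid, f"median {median(durations[qid]):.0f}s")
```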

Expansion of social media and other electronic communication forums will increase opportunities for online research surveys, such as Facebook's Data for Good, to reach a wider, increasingly diverse set of respondents (Facebook, n.d.). These new and expanded data sources could complement existing surveys or compete with them, depending on how new surveys are rolled out, perceived, and received. Demands for more data have already resulted in the expansion of DHS and MICS and led to the more recent development and implementation of the PHIA and PMA surveys. If resources for SRH data remain constrained, such expansion and development threaten to create competition for limited resources, which could result in survey delays and confusion over survey indicators, and ultimately could have the perverse effect of diminishing data use for informed decision making.

These types of concerns have been raised over the years with regard to the role of surveys in the broader health data landscape. A number of experts have questioned whether an overreliance on surveys has led to limited investment in and use of routine health information systems (Wagenaar et al., 2016). Given the proliferation in recent years of DHIS2, the world's largest health management information system platform, coupled with increased investment in and focus on improving the quality of health management information system data, this critique holds little water. Nevertheless, questions about the costs and benefits of survey expansion will persist well into the future.

Additional Opportunities for Capacity Strengthening

Even if such perceived challenges never materialize, the proliferation of data does not ensure its widespread use. SRH survey programs need to facilitate effective interpretation and use of the data they collect in order to encourage evidence-informed decision making. Technological advances make virtual learning opportunities more readily accessible to key stakeholders worldwide, and open-source tools make data visualizations easier to create and more visually striking. Advances in data visualization approaches have also expanded opportunities to convey SRH survey findings. The recent use of chord and Sankey diagrams, for example, has helped program managers better understand the dynamics of contraceptive switching and discontinuation (Choi, 2019; Finnegan et al., 2019; iSquared, 2019; Population Reference Bureau, 2019; Figure 2).

Figure 2. Sankey diagram visualizing 12-month contraceptive continuation, switching, and discontinuation, Ethiopia 2016 DHS. Source: Interactive Visualization of Contraceptive Dynamics: Twelve-month discontinuation rates from Demographic and Health Surveys (iSquared, 2019).
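Such diagrams are straightforward to produce with open-source plotting libraries. The sketch below builds a simple Sankey diagram of 12-month contraceptive dynamics with the plotly package; the flow values are illustrative placeholders, not the Ethiopia 2016 DHS estimates shown in Figure 2.

```python
# A minimal sketch of a Sankey diagram of 12-month contraceptive
# continuation, switching, and discontinuation. Values are
# illustrative only. Requires: pip install plotly
import plotly.graph_objects as go

labels = ["Pill (start)", "Injectable (start)",
          "Continued", "Switched method", "Discontinued"]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20, thickness=15),
    link=dict(
        source=[0, 0, 0, 1, 1, 1],       # starting method (index into labels)
        target=[2, 3, 4, 2, 3, 4],       # status at 12 months
        value=[55, 15, 30, 60, 10, 30],  # illustrative percentages
    ),
))
fig.update_layout(title_text="Illustrative 12-month contraceptive dynamics")
fig.show()
```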

Capacity strengthening for data use has expanded analytic opportunities and increased data use, but the need for further strengthening continues, especially as increasingly available and accessible survey and nonsurvey data (e.g., health management information systems data and nontraditional sources such as big data from other sectors) allow for increasingly innovative analyses. Data use, however, is but one of many areas of capacity strengthening that SRH survey programs must continue to support. Capacity strengthening is also a thorny area: technical assistance has been a component of survey programs since at least the early 1980s. Has capacity been strengthened? Is technical assistance still required?

The answer to both questions is yes: capacity has been strengthened, and technical assistance in many low- and middle-income countries is still invited and welcomed. Over the years, however, the types of capacity needs have changed, as have the approaches. For example, the initial focus of the DHS Program's capacity-strengthening approach was individual-level, on-the-job training coupled with material support for implementing organizations. Over the years, the Program added other types of activities, such as postdoctoral training, short-term fellowships, and faculty fellowships, to support individual development and expand DHS data use (Wang et al., 2021). The Program also developed e-learning courses, codified the DHS curriculum, and even created a Key Indicators Survey as a "do-it-yourself" survey package to facilitate widespread collection, analysis, and use of household health survey data. Since the 2010s, attention has focused on strengthening the capacity of institutions rather than individuals, guided by rigorous capacity assessments for identifying organizational needs, directing limited resources, and monitoring change. As of 2021, most countries can plan and implement effective, good-quality SRH surveys, but assistance is still required in several key areas, such as sampling design, data processing, and sample weighting. As long as "international development" remains a field, technical assistance for various survey processes should be expected to continue.
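Sample weighting, one of the areas just noted, offers a concrete example of why such assistance matters. The sketch below applies DHS-style individual sample weights, which DHS recode files store in the variable v005 with six implied decimal places, to estimate an indicator such as modern contraceptive prevalence; the data frame itself is a toy example, not a real recode file.

```python
# A minimal sketch of applying DHS-style sample weights when
# estimating modern contraceptive prevalence. The records are
# illustrative; v005 follows the DHS convention of six implied
# decimal places (divide by 1,000,000 to recover the weight).
import numpy as np
import pandas as pd

women = pd.DataFrame({
    "using_modern_method": [1, 0, 0, 1, 1, 0],  # hypothetical responses
    "v005": [1200000, 800000, 950000, 1100000, 700000, 1300000],
})

women["weight"] = women["v005"] / 1_000_000

# Each response counts in proportion to the number of women in the
# population that the respondent's interview represents
prevalence = np.average(women["using_modern_method"], weights=women["weight"])
print(f"Weighted modern contraceptive prevalence: {prevalence:.1%}")
```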

Conclusion

The sexual and reproductive health (SRH) field has been a global leader in survey development and implementation since its earliest advances in the 1950s. In low- and middle-income countries, the SRH-specific surveys of the early years have expanded into broad, multitopic surveys, and this trend is expected to continue. At the risk of stating a cliché, an abundance of public health data collection, modeling, and research rests on the shoulders of the SRH field, including, and especially, the field of family planning. These data have been transformed into information, which has supported public health and health care advocacy, budgeting, policies, and programming worldwide (Nolan et al., 2017).

Importantly, SRH and broader public health knowledge derived from surveys is made possible only by the openness and willingness of the millions of individuals who have participated in these surveys, answering in-depth, personal, probing questions about their lives and lived experiences. Each of these respondents is owed a debt of gratitude. Deep appreciation also belongs to the collective: the committed communities of experts, funders, implementers, and advocates who for more than six decades have made SRH surveys a force for good. Thanks to their hard work and to the willingness and openness of individual participants, one can estimate how many contraceptive users there are in the world, how many people are living with HIV, and how many mothers have died of maternal complications, among other statistics. One can identify public health and healthcare problems, identify communities in need, monitor and evaluate the impact of public health work, and ultimately drive better resource, programming, and policy decisions.

As this article illustrates, SRH surveys in low- and middle-income countries are complex, imperfect, and durable. They have also proven to be invaluable sources of knowledge historically, in the present day, and for the foreseeable future.

Acknowledgement

This article was produced and prepared independently by the authors. The contents of this manuscript are the authors' sole responsibility and do not necessarily reflect the views of the U.S. Agency for International Development, the U.S. Government, or any other organization with which the authors are affiliated.

Further Reading

  • Anderson, J. E., & Cleland, J. G. (1984). The world fertility survey and contraceptive prevalence surveys: A comparison of substantive results. Studies in Family Planning, 15(1), 1–13.
  • Arnold, F., & Khan, S. M. (2018). Perspectives and implications of the improving coverage measurement core group's validation studies for household surveys. Journal of Global Health, 8(1), 010606.
  • Choi, Y.-J., Fabic, M. S., & Adetunji, J. (2016). Measuring access to family planning: Conceptual frameworks and DHS data. Studies in Family Planning, 47(2), 145–161.
  • Cleland, J. (1973). A critique of KAP studies and some suggestions for their improvement. Studies in Family Planning, 4(2), 42–47.
  • Khan, S., & Hancioglu, A. (2019). Multiple indicator cluster surveys: Delivering robust data on children and women across the globe. Studies in Family Planning, 50(3), 279–286.
  • Lindelow, M., & Wagstaff, A. (2013). Health facility surveys: An introduction. World Bank Policy Research Working Papers.
  • Morris, L. (2000). History and current status of reproductive health surveys at CDC. American Journal of Preventive Medicine, 19(Suppl. 1), 31–34.
  • Porter, L., Bello, G., Nkambule, R., & Justman, J. (2021). HIV general population surveys: Shedding light on the status of HIV epidemics and informing future actions. JAIDS Journal of Acquired Immune Deficiency Syndromes, 87(Suppl.), S2–S5.
  • Short Fabic, M., Choi, Y.-J., & Bird, S. (2012). A systematic review of demographic and health surveys: Data availability and utilization for research. Bulletin of the World Health Organization, 90(8), 604–612.
  • Zimmerman, L., Olson, H., Tsui, A., & Radloff, S. (2017). PMA2020: Rapid turnaround survey data to monitor family planning service and practice in ten countries. Studies in Family Planning, 48(3), 293–303.

References

  • Adegboye, O. A., Fujii, T., & Leung, D. H. Y. (2020). Refusal bias in HIV data from the demographic and health surveys: Evaluation, critique and recommendations. Statistical Methods in Medical Research, 29(3), 811–826.
  • Ahmed, S., Li, Q., Scrafford, C., & Pullum, T. W. (2014). An assessment of DHS maternal mortality data and estimates (DHS Methodological Reports No. 13). ICF International.
  • Anderson, J. E., & Cleland, J. G. (1984). The world fertility survey and contraceptive prevalence surveys: A comparison of substantive results. Studies in Family Planning, 15(1), 1–13.
  • Anonymous. (1994a). Algeria 1992: Results from the PAPCHILD survey. Studies in Family Planning, 25(3), 191–195.
  • Anonymous. (1994b). Syria 1993: Results from the PAPCHILD survey: Pan Arab project of child development. Studies in Family Planning, 25(4), 248–252.
  • Anonymous. (1995). Sudan 1992/93: Results from the PAPCHILD survey. Studies in Family Planning, 26(2), 116–120.
  • Becker, S., Mbacké, C., & Padis, O. (2019). Evaluation of the continuous demographic and health survey in Senegal 2012–2017. USAID, UNICEF.
  • Bell, S. O., OlaOlorun, F., Shankar, M., Ahmad, D., Guiella, G., Omoluabi, E., Khanna, A., Kouakou Hyacinthe, A., & Moreau, C. (2019). Measurement of abortion safety using community-based surveys: Findings from three countries. PLOS ONE, 14(11), e0223146.
  • Bell, S. O., Shankar, M., Omoluabi, E., Khanna, A., Kouakou Hyacinthe, A., OlaOlorun, F., Ahmad, D., Guiella, G., Ahmed, S., & Moreau, C. (2020). Social network-based measurement of abortion incidence: Promising findings from population-based surveys in Nigeria, Cote d’Ivoire, and Rajasthan, India. Population Health Metrics, 18(1), 28.
  • Brener, N. D., Kann, L., Shanklin, S., Kinchen, S., Eaton, D. K., Hawkins, J., & Flint, K. H. (2013). Methodology of the youth risk behavior surveillance system. MMWR Recommendations and Reports, 62, 1–23.
  • Burgert-Brucker, C. R., Dontamsetti, T., & Gething, P. W. (2018). The DHS program’s modeled surfaces spatial datasets. Studies in Family Planning, 49(1), 87–92.
  • Central Statistics Organization (CSO), Ministry of Public Health (MoPH), and ICF. (2017). Afghanistan demographic and health survey 2015. Central Statistics Organization.
  • Choi, Y. (2018). Estimates of side effects counseling in family planning using three data sources: Implications for monitoring and survey design. Studies in Family Planning, 49(1), 23–39.
  • Choi, Y., Safi, S., Nobili, J., PMA2020 Principal Investigators Group, & Radloff, S. (2019). Levels, trends, and patterns of contraceptive method availability: Comparative analyses in eight sub-Saharan African countries (Performance Monitoring and Accountability 2020 Methodological Report No. 5). Bill & Melinda Gates Institute for Population and Reproductive Health, Johns Hopkins University Bloomberg School of Public Health.
  • Cleland, J. (1973). A critique of KAP studies and some suggestions for their improvement. Studies in Family Planning, 4(2), 42–47.
  • Fabic, M. S. (2022). What do we demand? Responding to the call for precision and definitional agreement in family planning’s “demand” and “need” jargon. Global Health: Science and Practice, 10(1).
  • Fabic, M. S., & Becker, S. (2017). Measuring contraceptive prevalence among women who are at risk of pregnancy. Contraception, 96(3), 183–188.
  • Fabic, M. S., & Choi, Y. (2015). Capturing family planning data through population-based household surveys: A systematic review of world fertility survey, contraceptive prevalence survey, reproductive health survey, demographic and health survey, and PMA2020 survey questionnaires [Poster presentation]. Annual Meeting of the Population Association of America 2015, Washington, DC.
  • Facebook. (n.d.). Data for good.
  • Finnegan, A., Sao, S., & Huchko, M. J. (2019). Using a chord diagram to visualize dynamics in contraceptive use: Bringing data into practice. Global Health, Science and Practice, 7(4), 598–605.
  • Giessler, K., Seefeld, A., Montagu, D., Phillips, B., Mwangi, J., Munson, M., Green, C., Opot, J., & Golub, G. (2021). Perspectives on implementing a quality improvement collaborative to improve person-centered care for maternal and reproductive health in Kenya. International Journal for Quality in Health Care, 32(10), 671–676.
  • Glick, P. (2007). Are client satisfaction surveys useful? Evidence from matched facility and household data in Madagascar (Cornell Food and Nutrition Policy Program Working Paper No. 226). Cornell University.
  • Global Health Data Exchange. (n.d.). Survey series and systems: Pan Arab project for Child development. Institute for Health Metrics and Evaluation.
  • Goldberg, D., Sharp, H., & Freedman, R. (1959). The stability and reliability of expected family size data. The Milbank Memorial Fund Quarterly, 37(4), 369–385.
  • Guo, Y., Kopec, J. A., Cibere, J., Li, L. C., & Goldsmith, C. H. (2016). Population survey features and response rates: A randomized experiment. American Journal of Public Health, 106(8), 1422–1426.
  • Helleringer, S., You, C., Fleury, L., Douillot, L., Diouf, I., Ndiaye, C. T., Delaunay, V., & Vidal, R. (2019). Improving age measurement in low- and middle-income countries through computer vision: A test in Senegal. Demographic Research, 40, 219–260.
  • HelpAge International. (n.d.). The inclusion agenda: Older people, displacement, and gender-based violence [HelpAge.org].
  • Hill, R. (1968). Research on human fertility. International Social Science Journal, 20(2), 226–262.
  • Holt, K., Zavala, I., Quintero, X., Hessler, D., & Langer, A. (2019). Development and validation of the client-reported quality of contraceptive counseling scale to measure quality and fulfillment of rights in family planning programs. Studies in Family Planning, 50(2), 137–158.
  • ICAP. (n.d.). PHIA project: Guiding the global health response. Columbia University, Mailman School of Public Health.
  • ICF. (n.d.). The DHS program: Survey search. USAID.
  • ICF International. (n.d.-a). DHS model questionnaire Phase 7 (English, French). USAID.
  • ICF International. (n.d.-b). The DHS program, methodology, survey characteristics search. USAID.
  • Igras, S. M., Macieira, M., Murphy, E., & Lundgren, R. (2014). Investing in very young adolescents’ sexual and reproductive health. Global Public Health, 9(5), 555–569.
  • Institut National de la Statistique et de l’Analyse Économique (INSAE) & ICF. (2019). Enquête Démographique et de Santé au Bénin, 2017–2018 [Benin Demographic and Health Survey, 2017–2018]. INSAE & ICF.
  • International Statistical Institute. (1985). The world fertility survey: Final report. International Statistical Institute.
  • Johns Hopkins University. (n.d.-a). Performance monitoring for action.
  • Johns Hopkins University. (n.d.-b). PMA survey methodology.
  • Johns Hopkins University and World Health Organization. (n.d.). Global early adolescence study.
  • Jones, R. K., & Kost, K. (2007). Underreporting of induced and spontaneous abortion in the United States: An analysis of the 2002 national survey of family growth. Studies in Family Planning, 38(3), 187–197.
  • Kreuter, F., Couper, M., & Lyberg, L. (2010). The use of paradata to monitor and manage survey data collection. JSM Proceedings of the Survey Research Methods Section, American Statistical Association, JSM2010, 282–296 (Session 151).
  • Leone, T., Sochas, L., & Coast, E. (2021). Depends who’s asking: Interviewer effects in demographic and health surveys abortion data. Demography, 58(1), 811–826.
  • Lindberg, L., & Scott, R. H. (2018). Effect of ACASI on reporting of abortion and other pregnancy outcomes in the US national survey of family growth. Studies in Family Planning, 49(3), 259–278.
  • Lindelow, M., & Wagstaff, A. (2013). Health facility surveys: An introduction. World Bank Policy Research Working Papers.
  • Macarayan, E. K., Gage, A. D., Doubova, S. V., Guanais, F., Lemango, E. F., Ndiaye, Y., Waiswa, P., & Kruk, M. E. (2018). Assessment of quality of primary care with facility surveys: A descriptive analysis in ten low-income and middle-income countries. The Lancet Global Health, 6(11), e1176–e1185.
  • McCarthy, K. J., Blanc, A. K., Warren, C., Bajracharya, A., & Bellows, B. (2020). Validating women’s reports of antenatal and postnatal care received in Bangladesh, Cambodia and Kenya. BMJ Global Health, 5, e002133.
  • Meyer, B. D., Mok, W. K. C., & Sullivan, J. X. (2015). Household surveys in crisis. Journal of Economic Perspectives, 29(4), 199–226.
  • Ministry of Health and Population/Malawi. (2019). Malawi harmonized health facility assessment 2018–2019 report. MoH.
  • Ministry of Public Health/Afghanistan and ICF. (2019). Afghanistan service provision assessment 2018–19. MoPH and ICF.
  • Mohanan, M., Vera-Hernández, M., Das, V., Giardili, S., Goldhaber-Fiebert, J. D., Rabin, T. L., Raj, S. S., Schwartz, J. I., & Seth, A. (2015). The know-do gap in quality of health care for childhood diarrhea and pneumonia in rural India. JAMA Pediatrics, 169(4), 349–357.
  • Morris, L. (2000). History and current status of reproductive health surveys at CDC. American Journal of Preventive Medicine, 19(Suppl. 1), 31–34.
  • Mukherjee, B. N. (1975). Reliability estimates of some survey data on family planning. Population Studies, 29(1), 127–142.
  • Neal, S. E., & Hosegood, V. (2015). How reliable are reports of early adolescent reproductive and sexual health events in demographic and health surveys? International Perspectives on Sexual and Reproductive Health, 41(4), 210–217.
  • Nolan, L. B., Lucas, R., Choi, Y., Fabic, M. S., & Adetunji, J. A. (2017). The contribution of demographic and health survey data to population and health policymaking: Evidence from three developing countries. African Population Studies, 31(1), 3394–3407.
  • Peters, M. A., Mohan, D., Naphini, P., Carter, A., & Marx, M. A. (2020). Linking household surveys and facility assessments: A comparison of geospatial methods using nationally representative data from Malawi. Population Health Metrics, 18(1), 30.
  • PMA. (2021). COVID-19 and PMA.
  • Population Reference Bureau. (2019). Choices and challenges: Dynamics of contraceptive use.
  • Pullum, T. W. (2006). An assessment of age and date reporting in the DHS surveys, 1985–2003 (DHS Methodological Reports No. 5). USAID.
  • Pullum, T. W., Croft, T., & MacQuarrie, K. L. D. (2018). Methods to estimate under-15 fertility using demographic and health surveys data (DHS Methodological Reports No. 23). USAID.
  • Rominski, S. D., & Stephenson, R. (2019). Toward a new definition of unmet need for contraception. Studies in Family Planning, 50(2), 195–198.
  • Rutstein, S. O. (2014). Potential bias and selectivity in analyses of children born in the past five years using DHS data (DHS Methodological Reports No. 14).
  • Sedgh, G., Singh, S. S., Shah, I. H., Ahman, E., Henshaw, S. K., & Bankole, A. (2012). Induced abortion: Incidence and trends worldwide from 1995 to 2008. Lancet, 379(9816), 625–632.
  • Senderowicz, L. (2020). Contraceptive autonomy: Conceptions and measurement of a novel family planning indicator. Studies in Family Planning, 51(2), 161–176.
  • Short Fabic, M., Choi, Y.-J., & Bird, S. (2012). A systematic review of demographic and health surveys: Data availability and utilization for research. Bulletin of the World Health Organization, 90(8), 604–612.
  • Skiles, M. P., Burgert, C. R., Curtis, S. L., & Spencer, J. (2013). Geographically linking population and facility surveys: Methodological considerations. Population Health Metrics, 11(1), 14.
  • Speizer, I. S., Bremner, J., & Farid, S. (2022). Language and measurement of contraceptive need and making these indicators more meaningful for measuring fertility intentions of women and girls. Global Health: Science and Practice, 10(1), 1–8.
  • State Family Planning Commission. (n.d.). China one-per-thousand population fertility sample survey 1982. Global Health Data Exchange.
  • Staveteig, S. (2016). Understanding unmet need in Ghana: Results from a follow-up study to the 2014 Ghana demographic and health survey (DHS Qualitative Research Studies No. 20).
  • Staveteig, S., Shrestha, N., Gurung, S., & Kampa, K. T. (2018). Barriers to family planning use in Eastern Nepal: Results from a mixed methods study (DHS Qualitative Research Studies No. 21). USAID.
  • Suthar, A. B., Khalifa, A., Yin, S., Wenz, K., Ma Fat, D., Mills, S. L., Nichols, E., AbouZahr, C., & Mrkic, S. (2019). Evaluation of approaches to strengthen civil registration and vital statistics systems: A systematic review and synthesis of policies in 25 countries. PLOS Medicine, 16(9), e1002929.
  • The DHS Program. (2020). DHS-8 core women’s questionnaire. USAID.
  • UNAIDS. (n.d.). AIDS info.
  • UNFPA, World Health Organization, UNICEF, World Bank Group, & United Nations Population Division. (2019). Trends in maternal mortality 2000 to 2017: Estimates by WHO, UNICEF, UNFPA, World Bank Group and the United Nations Population Division. United Nations.
  • UNICEF. (n.d.-a). MICS plus.
  • UNICEF. (n.d.-b). Multiple indicator cluster surveys. Retrieved March 11, 2022.
  • United Nations Department of Economic and Social Affairs Population Division. (2020). World family planning 2020 highlights: Accelerating action to ensure universal access to family planning (ST/ESA/SER.A/450).
  • United Nations Department of Economic and Social Affairs Population Division. (2021). World contraceptive use 2021.
  • Wagenaar, B. H., Sherr, K., Fernandes, Q., & Wagenaar, A. C. (2016). Using routine health information systems for well-designed health evaluations in low- and middle-income countries. Health Policy and Planning, 31(1), 129–135.
  • Wang, W., Assaf, S., Pullum, T., & Kishor, S. (2021). The demographic and health surveys faculty fellows program: Successes, challenges, and lessons learned. Global Health, Science and Practice, 9(2), 390–398.
  • World Health Organization. (2017). Master facility list resource package: Guidance for countries wanting to strengthen their master facility list.
  • World Health Organization. (n.d.). Harmonized health facility assessment.
  • World Health Organization and US Centers for Disease Control and Prevention. (n.d.). Global school-based student health survey.
  • Zimmerman, L. A., Shiferaw, S., Seme, A., Yi, Y., Grove, J., Mershon, C. H., & Ahmed, S. (2019). Evaluating consistency of recall of maternal and newborn care complications and intervention coverage using PMA panel data in SNNPR, Ethiopia. PLOS ONE, 14(5), e0216612.
  • Zimmerman, L., Olson, H., Tsui, A., & Radloff, S. (2017). PMA2020: Rapid turnaround survey data to monitor family planning service and practice in ten countries. Studies in Family Planning, 48(3), 293–303.

Notes

  • 1. Based on authors’ calculation, using sample sizes from The DHS Program, Survey Search (ICF, n.d.-b).

  • 2. Based on authors’ calculation, using only the latest national-level surveys available to the public across six countries.