Benefit Transfer for Ecosystem Services
Summary and Keywords
Benefit transfer is the projection of benefits estimated at one place and time to another time at the same place or to a new place. Thus, benefit transfer includes the adaptation of an original study to a new policy application at the same location or to a different location. The appeal of a benefit transfer is that it can be cost effective, in both money and time. Using previous studies, analysts can select existing results to construct a transferred value for the amenity influenced by the policy change. Benefit transfer practices are not unique to valuing ecosystem services and are generally applicable to a variety of changes in ecosystem services. An ideal benefit transfer will scale value estimates to both the ecosystem services and the preferences of those who hold values. The article outlines the steps in a benefit transfer, types of transfers, accuracy of transferred values, and challenges when conducting ecosystem transfers, and ends with recommendations for the implementation of benefit transfers to support decision-making.
Benefit transfer has a long history in applied environmental research and policy design. Presidential Executive Order 12866 (1993) requires federal agencies to design “cost-effective” regulations and assess “costs and benefits” of these regulations based “on the best reasonably obtainable scientific, technical, economic, and other information.” In practice, the costs of government actions are relatively tractable to calculate, while the benefits can be more difficult to tabulate. In addition, government analysts often face tight time constraints and limited budgets, which preclude opportunities to conduct original valuation studies to support estimation of nonmarket benefits. Thus, analysts typically resort to benefit transfers, a process where existing value estimates are used as a proxy for an original valuation study.
This issue is particularly germane for the estimation of ecosystem service values, where markets often do not exist to value these services. Common approaches to estimating ecosystem values, whether they are related to use or passive-use motivations, require surveys to document recreational decisions to estimate travel-cost models or to administer stated-preference questions, where respondents’ choices in response to survey questions provide the basis for econometric models to estimate ecosystem values (see Champ, Boyle, & Brown, 2017). The time and cost of original value estimation to support benefit-cost analyses reflect the need for extensive data collection efforts.
An alternative approach is to apply values calculated elsewhere to the policy under consideration. This is a benefit transfer, the use of benefits from one place and time as data to estimate the benefits of a proposed action at another place or time.1 More formally, a benefit transfer occurs when an estimated value, based on one or more original studies (study sites), is transferred to support a new policy decision (referred to as the policy site).2 Thus, benefit transfers can be implemented through time and space; the key feature is that study site value(s) are used to estimate a value for a policy that is different from the original policy objective.
To understand benefit transfer heuristically, think of it as an exercise in prediction, only the truth is never revealed, unlike predicting exchange rates or gasoline prices where, if we wait until tomorrow, we can observe the true outcome. With ecosystems, we only have estimates of the value that the public places on any ecosystem service. Thus, benefit transfers seek to use existing primary-study predictions of values elsewhere to make predictions for the current site and policy. Given that prediction is already a challenging endeavor with observed data (Silver, 2012), using existing estimates to make a further prediction is even more challenging. Thus, to provide meaningful insights, benefit transfer needs to be judiciously applied. Benefit transfer applications in the context of ecosystem services are even more difficult because of the challenging issues involved in the identification and quantification of ecosystem services (Bingham et al., 1995; Boyd & Banzhaf, 2007). Adhering to best-practice guidance for the transfer process is highly relevant for analysts who conduct ecosystem service benefit transfers to support decision-making.
The first guidance for the conduct of benefit transfers was set forth by Boyle and Bergstrom (1992) and Desvousges, Naughton, and Parsons (1992), who suggested that one should identify existing, relevant studies and review them for quality and applicability, which includes having study site conditions match the policy conditions to be valued. This entails having the baseline and extent of change be similar and the affected populations be similar, to serve as a proxy to control for preferences that are not observable. Bennett (2006) provides a set of more detailed criteria to assist with establishing the validity of a benefit transfer:
1. The biophysical conditions at the study and policy sites should be similar.
2. The scale of the change in the environmental amenity should be similar at the study and policy sites.
3. The socioeconomic characteristics of the population at the policy site should be similar to the population at the study site.
4. The setting in which the valuation was conducted at the study site should be similar at the policy site.
5. The valuation at the study site should have been conducted in a scientifically suitable manner.
Additional guidance is presented by Boyle, Kuminoff, Parmeter, and Pope (2009), who provide more structure on the theoretical and econometric conditions sufficient for a valid benefit transfer, while Rosenberger and Loomis (2017) summarize the state of the art in benefit transfer. A broad discussion of benefit transfer is provided in Johnston, Rolfe, Rosenberger, and Brouwer (2015a). A fundamental insight is that study sites and policy sites need not be identical or similar, but that the collective information at the study sites must be sufficient that value predictions can be calibrated to policy site conditions.
This article is an overview of the large literature on benefit transfer to provide guidance to establish the content validity of ecosystem service benefit transfers. Content validity is established by following a prescribed set of procedures to support implementation of any empirical method (Bishop & Boyle, 2017; Carmines & Zeller, 1979). This article outlines such procedures.
A Historical Perspective on the Development of Benefit Transfer
Benefit transfer as a valuation technique gained prominence with the publication of a special issue of Water Resources Research (1992, Vol. 28, No. 3). This issue was motivated by the expanding use of benefit transfers by the U.S. Environmental Protection Agency (EPA) to conduct regulatory impact assessments (Wheeler, 2015). Nearly all the benefit transfers conducted by the EPA in that time frame were simple transfers of an average value from a study site or an average of values from several study sites, and it was obvious that scientific insight was needed to advance the credibility of benefit transfer value estimates.
This occurred at the same time that researchers were seriously asking how accurate study site value estimates were. Much of the criticism of original value estimates arose from the ongoing debate about the credibility of value estimates from stated-preference studies (Diamond & Hausman, 1994). Others were asking whether travel-cost methods were providing noise or systematic signals across studies (Smith & Kaoru, 1990). The current literature has put many of these concerns to rest or has helped to formulate a research agenda to address them (Carson & Groves, 2007; Champ et al., 2017; Johnston et al., 2017).
On the application side, the special issue of Water Resources Research, and the research this issue stimulated, laid the foundation for a codified set of benefit transfer guidelines for economic analyses conducted by the U.S. Environmental Protection Agency (2000, pp. 86–87):
1. Describing the policy case
2. Identifying existing, relevant studies
3. Reviewing available studies for quality and applicability:
(i) Basic commodities must be essentially equivalent
(ii) Baseline and extent of change should be similar
(iii) Affected populations should be similar
4. Transfer the benefit estimates
5. Address uncertainty
The Office of Management and Budget (OMB) subsequently developed similar guidelines for benefit transfers (U.S. Office of Management and Budget, 2003). A notable difference between the EPA and OMB guidelines is that the OMB guidelines advocated that “you should transfer the entire demand function (referred to as benefit function transfer) rather than adopting a single point estimate (referred to as benefit point transfer)” (p. 25).
The U.S. Environmental Protection Agency (2014) subsequently revised their guidelines with more generic recommendations:
1. Describe the policy case
2. Select study cases
3. Transfer values
4. Report the results
Some of the elements from the previous guidance are included in the text that explains each of these four steps. This guidance, while not extensive, is based on insights from the peer-reviewed literature and helped to provide consistency across transfers and credibility to benefit transfers conducted by the EPA and others who used benefit transfer estimates to support decision-making.
A continuing challenge has been that the literature on the relative strengths and weaknesses of benefit transfers has been growing faster than that guidance has been updated (Johnston et al., 2015a). Applications to ecosystem valuation are specifically challenging where any policy action may affect many services, which requires special consideration for the conduct of the multiple benefit transfers needed to support decision-making to maintain or enhance ecosystem services.
How Benefit Transfers Work
The benefit transfer literature is progressing toward commonly accepted guidance for the conduct of a benefit transfer. Here we work from the 10 steps listed in Johnston, Rolfe, Rosenberger, and Brouwer (2015b), which provide enough coverage to be generally applicable to almost any benefit transfer:
1. Define the context
2. Establish the need for a benefit transfer
3. Define the policy, the amenity to be valued, and the affected population
4. Quantify policy options and changes to the amenity
5. Gather and evaluate valuation data/evidence
6. Select the benefit transfer method
7. Design and implement the transfer
8. Aggregate values over space, time, and population
9. Conduct sensitivity analysis and test reliability
10. Report results
The first and third steps are fundamental to any benefit transfer and are virtually identical to what would be undertaken if a primary study were to be conducted. This involves identifying the policy change to be valued, the consequences (changes in ecosystem services here), and the affected population(s). The complexity when valuing ecosystem services, whether in an original study or a benefit transfer, is that multiple services may be affected and the changes in different services may affect different populations (Burkhard, Kroll, Nedkov, & Müller, 2012). For example, protection of a coastal wetland may affect use services, such as recreational fishing and waterfowl hunting, where each may have a different constituent population. In addition, each of these affected populations may hold passive use values for the protection of the wetland area over and above their use values. Further, these populations may not be distinct; i.e., some individuals may participate in both recreational fishing and waterfowl hunting in the protected marshes.
This leads to a second, related issue: careful delineation of service changes and affected populations is required to avoid double counting of benefits when transferred estimates are aggregated for benefit and cost assessment. Double counting may be more of an issue for benefit transfer values for ecosystem services than for original studies. That is, original studies can be designed such that double counting is avoided, but a benefit transfer must rely on estimated values as is. For example, if original studies estimate total values for wetland protection, one study applied to anglers and the other to hunters, the people who participate in both activities likely include their values for the other activity in their responses to valuation questions, and, ex post, it would be difficult to purge estimates of these components of value to avoid double counting when computing the full value of the change in ecosystem services from the wetland protection action. Thus, careful delineation of the policy change, impacted ecosystem services, and affected populations is critical to set up how an analyst undertakes the subsequent steps in a benefit transfer.
The second step involves the decision of whether or not to go forward with a benefit transfer. A priori, a decision can be made to proceed with a benefit transfer if funds and time are not available to conduct original studies for the affected ecosystem services. This alone is not sufficient. For example, when completing the fifth step it may be found that there is not sufficient existing evidence to support a benefit transfer for all affected services. Thus, while the steps are numbered, they do not imply a sequence where the completion of one step provides clear sailing to later steps; rather, recursive consideration of earlier decisions may be necessary.
The fourth step requires the economic analyst to work with ecologists and other natural scientists to carefully review the affected ecosystem services to identify their baseline or current conditions and magnitudes of change as a result of the policy change. This requires careful and thoughtful interactions between disciplines; it is crucial to obtain the correct magnitudes of changes in the ecosystem as this is the foundation of the valuation exercise to translate these changes into services that people value. This is a step that would occur in an original study or in a benefit transfer. Once again, the design of an original study can be customized to value these defined changes in services, while a benefit transfer requires searching the empirical literature for studies that have valued the identified changes, which is the fifth step.
The fifth step is where the analysis unique to benefit transfers truly begins. This step requires identifying and gathering all empirical studies that are pertinent to the ecosystem services identified in the first and third steps. This is perhaps the most important step, as it is this data collection process that will support value predictions. This step requires investigator decisions, which include deciding what studies provide values relevant to the benefit transfer and, similarly, what value estimates in the selected studies should be used to support the benefit transfer prediction (many studies report two or more value estimates). It is often the case that the value estimates are not reported in a common unit, e.g., per person versus per household values, one-time payments versus continuous payments, etc. The investigator must make choices on how to convert the estimates to a common unit. Decisions must also be made on what covariates will be collected from the selected studies to support value predictions. It is common for these decisions to be made by a research team that carefully discusses the choices. Having set a framework for study selection and data collection, the typical process is to have two people independently identify and code studies; this helps avoid errors in the data collection process. When the two individuals disagree on data selection or coding, the analyst will typically review the arguments and choose the appropriate selection or coding for the data.
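The unit conversions described in this step can be sketched in code. A minimal example, in which the household size, discount rate, and time horizon are all hypothetical analyst assumptions rather than values from any study:

```python
# Hypothetical helpers for standardizing value estimates to a common
# unit (one-time, per-household payments). The household size, discount
# rate, and horizon are analyst assumptions, not values from any study.

def per_person_to_per_household(value, household_size=2.5):
    """Scale a per-person value to a per-household value."""
    return value * household_size

def annual_to_one_time(annual_value, rate=0.03, years=20):
    """Present value of a constant annual payment over a fixed horizon."""
    return annual_value * (1 - (1 + rate) ** -years) / rate

# Example: a study reports $12 per person per year over 20 years.
one_time_household = annual_to_one_time(per_person_to_per_household(12.0))
print(round(one_time_household, 2))  # → 446.32
```

Reporting the conversion rules alongside the converted estimates keeps the standardization transparent and reproducible.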
Here is where the benefit transfer process can be recursive back to the first and third steps. It is possible that the data collection process indicates that there is insufficient evidence in the literature to support a benefit transfer for all ecosystem services affected by the desired action. Another consideration here is the credibility of the value estimates. One way to look at data quality is to ask if the selected empirical studies provide evidence of value estimate validity. This is challenging because there are three types of validity—content, construct, and criterion—and multiple approaches for investigating each type of validity (Bishop & Boyle, 2017). No study can address all elements of validity, but some have also proposed that tests of scope and adding up should be relevant when considering whether value estimates are included in the data for benefit transfer predictions (for an ecosystem valuation application, see Morse-Jones et al., 2012). However, these tests impose investigator assumptions about preferences that go beyond the basic axioms of choice, which may or may not be appropriate (Haab, Interis, Petrolia, & Whitehead, 2013). Thus, if a collection of estimates passes scope and adding-up tests, this is evidence in favor of validity, but failure to pass these tests cannot be interpreted as unquestioned evidence that the value estimates are invalid. The bottom line is that consideration should be given to the overall quality of the value estimates; currently no consensus exists in the literature on how to assess quality.
The sixth step asks the analyst to decide what type of benefit transfer to conduct. There are two basic types of benefit transfer: value and function. Function transfers can be parsed into a function estimated from an individual study (Loomis, Kent, Strange, Fausch, & Covich, 2000) or a meta-analysis of value estimates from multiple studies (Woodward & Wui, 2001). It is usually assumed that a function transfer is more accurate than a value transfer (Kaul, Boyle, Kuminoff, Parmeter, & Pope, 2013). The reason for this is that a function allows the value prediction to be calibrated to policy site conditions for the ecosystem service. A value transfer is more restrictive in that a single study site value with some adjustment (e.g., for differences in income) must match the policy site conditions. With a meta-analysis function, for example, no single study site may match the policy site, but the study sites, collectively, can provide variation, which allows predictions calibrated to the policy site. Given that ecosystem valuation often involves valuing more than one service, it may be the case that the analyst will need to consider using more than one benefit transfer approach (Troy & Wilson, 2006).
The seventh and eighth steps are the actual implementation of the benefit transfer. Based on the decisions in the fifth and sixth steps, value estimates for the policy site are predicted. As discussed in the preceding paragraph, these predictions can be transfers of a single value, perhaps a study site estimate of the mean value adjusted for inflation and income, or functions to predict calibrated estimates for a policy site. If a meta-analysis function is used, the analyst must estimate the function before proceeding with prediction. While there have been many meta-analyses conducted in the nonmarket valuation literature to summarize bodies of empirical studies, little attention has been given to the use of meta-analysis for prediction; Boyle and Wooldridge (2017) have started to explore the foundations for such predictions. It is often desirable in benefit transfers, and perhaps imperative in ecosystem service applications, to augment the data from existing studies with spatial data on ecosystems, landscapes, and people using GIS and census databases. This information may be useful in value predictions as well as in the aggregation of benefit transfer predictions to compute societal benefits and costs (Brander et al., 2012).
The ninth step of the benefit transfer involves ensuring that the results are meaningful, which requires that the benefit transfer predictions be subjected to sensitivity analyses or robustness checks that are often related to data limitations and analyst assumptions. Nelson and Kennedy (2009, p. 372) proposed 10 best practice standards for the conduct of meta-analyses. The eighth practice is to “report results for sensitivity analyses of the final model, including results from use of different estimation methods, deletion of outliers, homogeneous subset regressions, different functional forms or panel methods, and specification of moderator variables. Report the statistical significance and substantive significance of the regression coefficients … (and) assess the fragility of the final results. Consider the implications for policy analysis of fragile versus robust estimates.” A common approach is to use a leave-one-out analysis. For example, Londoño and Johnston (2012) drop study site value estimates and subsequently predict them with the remaining value estimates to generate prediction errors. Boyle, Parmeter, Boehlert, and Paterson (2013) provide guidelines on vertical and horizontal robustness of meta-analytic function transfer. Further, even though scope and adding-up are discussed in the context of original valuation studies in the literature, particularly stated preference studies, such tests of validity can be applied to value predictions from meta-analyses.
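A leave-one-out check of this kind can be sketched as follows. The value estimates below are fabricated for illustration, and each held-out value is predicted by the mean of the remaining estimates; a real application would typically predict held-out values from an estimated meta-function instead:

```python
# Leave-one-out convergent validity sketch: each estimate is predicted
# by the mean of the remaining estimates, and percentage transfer
# errors are summarized. Estimates are illustrative, not from a study.

wtp_estimates = [18.0, 24.0, 31.0, 22.0, 27.0, 35.0]

errors = []
for i, held_out in enumerate(wtp_estimates):
    rest = wtp_estimates[:i] + wtp_estimates[i + 1:]
    prediction = sum(rest) / len(rest)
    errors.append(abs(prediction - held_out) / held_out * 100)

mean_abs_error = sum(errors) / len(errors)
print(f"Mean absolute transfer error: {mean_abs_error:.1f}%")  # → 23.5%
```

Reporting the full distribution of errors, not just the mean, gives decision-makers a sense of how fragile the transfer predictions are.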
The tenth and final step of a benefit transfer is to report results. This might seem trivial to list as a formal step of a benefit transfer, but clear and careful documentation is crucial for the scientific credibility of benefit transfer predictions to support decision-making. Reporting should document all steps in the analysis, not just outcomes, and report analyst assumptions and decisions that were necessary to complete the benefit transfer.
Data Selection—What You Use Matters
Perhaps the most crucial component of a benefit transfer is the selection of existing value studies and the subsequent data compilation from these studies. The first step is an extensive literature review that includes all empirical studies in the gray and peer-reviewed literature. Some have argued that omitting gray literature studies creates a publication bias, while others have claimed that the peer-review process acts as a form of quality control (see, e.g., Rosenberger & Stanley, 2006). While much of this discussion has occurred in the context of estimating meta-analysis functions, a broad search of the literature is appropriate for all benefit transfers as the data are the foundation of the transfer.
It is important to recognize that selection is more complex than whether a study is or is not published in the peer-reviewed literature. Selection starts with the decision to conduct an original empirical study, which can be a function of where there are policy questions seeking valuation information to support decision-making, the availability of funding, and the implicit interests of the investigator. The latter is most relevant for methodological studies, which comprise most of the original valuation studies in the environmental and resource economics literature. Selection continues with the value estimates an investigator chooses to report and then with the levels of selection in compiling the metadata (study and value selection from available studies). Thus, a broad search of the literature provides the best opportunity for spatial resource representation of the ecosystem values in the benefit transfer data and, therefore, the ability to calibrate predictions to policy site conditions.
This broad search is particularly important because the valuation literature is limited in both the number of available empirical studies and the total number of observations reported in these studies. For example, Woodward and Wui (2001) identified 39 wetland valuation studies that provided 65 observations. The more care that is taken to ensure that value estimates are aligned with the policy and value definitions identified in the first and third steps, and the more careful the analyst is in requiring that value estimates be measured in a common unit, the smaller will be the sample of studies and value estimates obtained from those selected studies.
A secondary challenge is the ability to identify a consistent set of covariates across original studies to use to adjust value estimates and to explain variation in value estimates across studies. This is challenging because there is not a consistent protocol for reporting the results of nonmarket valuation studies.
Some investigators have started to augment study data with GIS, census, and other types of spatial data (Johnston, Besedin, & Stapler, 2016; Siriwardena, Boyle, Holmes, & Wiseman, 2016). This is useful for two reasons. First, these spatial data can provide a consistent set of covariates across studies. Second, these data are helpful in characterizing the spatial dimensions of ecosystems for value prediction and aggregation to societal estimates of benefits and costs (Brander et al., 2012).
Types of Benefit Transfer
As the sixth step makes clear, one must choose the method with which to conduct the benefit transfer—either a value or function transfer. For ecosystem valuation applications where multiple services are affected by an action, one or both approaches might be used. A value transfer uses a single value from a study site or perhaps an average of means from multiple study sites. A function transfer uses an estimated valuation function at a study site to compute a transfer estimate that is calibrated to policy site conditions. A function transfer could be an estimated preference function from a single study site or a meta-analysis of results across multiple study sites. Two types of meta-analyses will be discussed: a traditional meta-analysis and a preference function.
Value transfers apply a single statistic (usually an average from one or more study sites) to the policy site. For example, in the case of a single study site, aggregate benefits for a comparable change in an ecosystem service at the policy site may be predicted by $B_p = N_p \cdot \overline{WTP}_s$, where $N_p$ is the number of people at the policy site and $\overline{WTP}_s$ is the mean willingness to pay for a change in the ecosystem service from study site $s$. An alternative is to average the means from $S$ different study sites, in which case the benefit transfer estimate is $B_p = N_p \cdot \frac{1}{S} \sum_{s=1}^{S} \overline{WTP}_s$. The EPA used a variant of this approach to estimate the annualized benefit of reduced mortality due to limits on particulate matter imposed by the Clean Air Act (U.S. Environmental Protection Agency, 1999).
A more general approach allows customization of value transfer predictions. For example, the researcher may want to adjust for costs of living, inflation, income differences, or other sources of monetary variation that can be accounted for between the policy and study sites. In this case the benefit transfer value estimate is $B_p = N_p \cdot g(\overline{WTP}_s; X)$, where $g(\cdot)$ represents the adjustment mechanism used to calibrate the study site value to the policy site and $X$ is the set of factors used to make the adjustment.
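One common form of the adjustment function is an income adjustment with an assumed income elasticity of WTP. A minimal sketch, in which the elasticity and all dollar figures are illustrative assumptions:

```python
# Unit value transfer with a simple income adjustment, one common form
# of the adjustment function g. The income elasticity of WTP (epsilon)
# and all dollar figures are analyst assumptions for illustration.

def adjusted_value_transfer(wtp_study, income_policy, income_study, epsilon=0.5):
    """Calibrate a study-site mean WTP to policy-site income levels."""
    return wtp_study * (income_policy / income_study) ** epsilon

wtp_s = 40.0        # mean WTP at the study site ($/household)
n_policy = 50_000   # households affected at the policy site
wtp_p = adjusted_value_transfer(wtp_s, income_policy=65_000, income_study=52_000)
aggregate_benefit = n_policy * wtp_p
print(f"Adjusted WTP: {wtp_p:.2f}; aggregate benefit: {aggregate_benefit:,.0f}")
# → Adjusted WTP: 44.72; aggregate benefit: 2,236,068
```

Sensitivity analysis over the assumed elasticity is advisable, since this single parameter drives the entire adjustment.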
This approach to benefit estimation is appealing for its simplicity, but value transfers are quite limited in the ability to account for differences between the study sites and the policy site. These may be biophysical differences or differences in the affected populations who value the change in the ecosystem services. Thus, value transfers require that a single study or a small number of studies closely align with conditions at the policy site, which is unlikely to be met in many transfer applications.
There are several types of function transfer. The simplest type uses an econometric model, such as a travel cost model, to predict a calibrated value for a new policy as a function of variables describing characteristics of, and people at, the policy site. In this case you might think of a function transfer as the transfer of the econometric parameter estimates from the study site to the policy site rather than the transfer of an estimated value. The more prominent function transfer is to estimate a meta-equation using the results from many studies and then use the estimated function to predict values at the policy site. This can be a reduced form specification that lets the data tell the story or a preference function that has a utility theoretic specification. Compared to a value transfer, function transfers provide greater opportunities to calibrate value predictions to conditions at the policy site.
General Function Transfer
This approach takes a function estimated at a study site and transfers the function to the policy site. This could be, for example, a function of preference parameters estimated in a random-utility-model (RUM) travel-cost model or a function estimated from responses to a stated-preference choice experiment (Parsons, 2017; Holmes, Adamowicz, & Carlsson, 2017). More specifically, assume that estimated willingness to pay is a function of consumer characteristics at the study site, $C_s$, study-site-specific characteristics, $Q_s$, and the ecosystem services studied, $E_s$, along with a vector of parameters estimated in the original study, $\hat{\beta}$. Then, rather than transferring an estimated willingness to pay, the estimated function is transferred, where the characteristics of the study site are replaced with the characteristics of the policy site to make the prediction:

$$\widehat{WTP}_p = f(C_p, Q_p, E_p; \hat{\beta})$$
Here the function could be a demand or utility function, the parameters of which are estimated from an original study. A prominent environmental function transfer was undertaken by the U.S. Department of Agriculture to predict the water quality and wildlife habitat benefits associated with “environmentally friendly” farming practices subsidized by the Conservation Reserve Program (U.S. Department of Agriculture, 2005).
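In code, a function transfer amounts to evaluating the study-site parameter estimates at policy-site characteristics. The linear WTP specification, variable names, and all numbers below are illustrative assumptions, not results from any study:

```python
# Function transfer sketch: parameter estimates from an original
# study-site model applied to policy-site characteristics. The linear
# specification and all numbers are fabricated for illustration.

beta_hat = {"intercept": 5.0, "income_10k": 0.8, "quality": 3.2, "age": -0.1}

def predict_wtp(characteristics, params):
    """Evaluate the transferred WTP function at new characteristics."""
    return params["intercept"] + sum(
        params[k] * v for k, v in characteristics.items()
    )

policy_site = {"income_10k": 6.5, "quality": 4.0, "age": 45.0}
print(round(predict_wtp(policy_site, beta_hat), 2))  # → 18.5
```

The transferred object is the parameter vector, not a value, which is what allows the prediction to be calibrated to policy-site conditions.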
A meta-function integrates findings from multiple primary studies of a common amenity.3 Some examples of these types of studies in the literature that can potentially support benefit transfers for ecosystem services are:
1. Coastal recreation (Ghermandi & Nunes, 2013)
2. Mangroves (Brander et al., 2012)
3. Rare, threatened, and endangered species (Richardson & Loomis, 2009)
An attractive feature of meta-analysis is the ability to control for features that are fixed for any given study but vary across studies (Nelson & Kennedy, 2009). This data richness is what provides a meta-function with the best opportunity to calibrate value predictions to policy site conditions.
When estimating a meta-function the dependent variable is a vector of value estimates, typically estimates of mean willingness-to-pay (MWTP), drawn from the selected primary studies. In the metadata, a single study could provide multiple observations of MWTP. The independent variables in a meta-function characterize the attributes of the resource valued and characteristics of the people whose preferences were quantified. The standard meta-function allows for within-study correlation across the different observations of estimated values (Bateman & Jones, 2003; Johnston et al., 2005). For each observation included in the metadata, the left-hand variable is a reported estimate of welfare, denoted $W_{is}$, for value estimate $i$ reported in study $s$. To assess the impact of specific biophysical variables on welfare, the matrix $Q_{is}$ includes site-specific characteristics, while the matrix $C_{is}$ includes demographic characteristics at the study site, and $E_{is}$ represents the ecosystem service affected by the policy change. Similarly, the methodological approaches used to arrive at estimate $i$ in study $s$ are denoted $M_{is}$. The general specification of the meta-function is:

$$W_{is} = \alpha + M_{is}\gamma + Q_{is}\delta + C_{is}\theta + E_{is}\phi + u_s + \varepsilon_{is},$$

where $\gamma$, $\delta$, $\theta$, and $\phi$ are vectors of parameters to be estimated to discern the impact that research methods, site characteristics, demographics, and policy changes have on estimated values, respectively; $u_s$ captures systematic study-level effects; and $\varepsilon_{is}$ is a standard observation-specific error with constant variance. It is through the inclusion of the biophysical characteristics (captured through $Q_{is}$), personal characteristics (captured through $C_{is}$), and ecosystem service values (captured through $E_{is}$) that the meta-function enables prediction of value estimates customized to the policy site. The common assumption is that $E[u_s] = 0$ and $E[\varepsilon_{is}] = 0$. Clustering by study is standard in the meta-regression literature, though alternative clustering strategies may be deployed (clustering by author or region, for example).
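A minimal sketch of estimating such a meta-function by ordinary least squares and using it for prediction is shown below. The metadata are fabricated and noise-free so that the generating coefficients are exactly recoverable; a real application would use actual study estimates and cluster standard errors by study:

```python
# Meta-regression sketch with fabricated, noise-free metadata generated
# from w = 2 + 4*SP + 5*quality + 1*income. A real application would
# use reported estimates and cluster standard errors by study.

import numpy as np

# Columns: [constant, stated-preference dummy (M), site quality (Q),
#           median income in $10k (C)]
X = np.array([
    [1, 1, 3.0, 5.5],
    [1, 0, 2.5, 4.8],
    [1, 1, 4.0, 6.1],
    [1, 0, 3.5, 5.0],
    [1, 1, 2.0, 4.5],
    [1, 0, 4.5, 6.4],
])
w = np.array([26.5, 19.3, 32.1, 24.5, 20.5, 30.9])  # reported WTP estimates

coef, *_ = np.linalg.lstsq(X, w, rcond=None)

# Predict WTP at hypothetical policy-site conditions (SP, Q=3.8, C=5.9)
x_policy = np.array([1, 1, 3.8, 5.9])
print(round(float(x_policy @ coef), 2))  # → 30.9
```

It is this use of covariates from many studies that lets the prediction be calibrated to conditions no single study site matches exactly.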
The importance of including methodological characteristics in the meta-function is not clear (Stapler & Johnston, 2009). One purpose may be to discern use, nonuse, and total values that may be specific to the valuation methods deployed. Revealed-preference methods are only capable of measuring use values, while stated-preference methods can measure all three of these value categories (see Johnston, Besedin, & Ranson, 2006, for a more thorough discussion of this issue).
A more specific type of meta-function transfer is referred to as a preference-function transfer. Unlike a traditional meta-function, which is typically a reduced-form specification that is not aligned with any theoretical structure of preferences, preference functions are a special type of meta-analysis where the function satisfies theoretical properties imposed by the analyst.
Preference-function transfers seek to estimate structural preference parameters based on results from study sites providing utility (or demand) parameter estimates that can be used to estimate the preference function, which provides predictions of policy site values (Smith, van Houtven, & Pattanayak, 2000; Zanderson, Termansen, & Jensen, 2007). The estimated preference function is combined with information on site quality, income, and other policy-site information to predict a policy site value consistent with the assumed specification of preferences and calibrated to policy site conditions. It is the focus on recovering demand or utility parameters to estimate the function that makes the transfer consistent with economic theory. This is unlike a traditional meta-function, which observes a relationship between estimated values and study characteristics but does not impose any structure on the estimated function.
An advantage of a preference-function transfer is that the analyst can impose conditions to ensure that value predictions do not exceed income. Yet, theoretical consistency comes at a cost. The preference function that is transferred is typically based on a set of studies that provide the requisite demand or utility parameters, which tends to provide a much smaller sample than is observed in a traditional meta-analysis. In contrast, however, preference calibration guarantees not only that the transfer process is consistent with economic theory but that apples and apples are being combined as a basis for prediction, whereas many traditional meta-analyses combine apples, oranges, and other fruit in the estimation, which makes prediction for benefit transfers suspect.4
Assessing the Accuracy/Sensitivity of Benefit Transfer
As with any empirical method, it is important to ask how accurate the method is. To quantify accuracy, researchers conduct investigations of convergent validity where benefit transfer errors are computed as the difference between a benefit transfer prediction for a policy site and a value estimated from an original study at the policy site (Rosenberger & Stanley, 2006). The percentage transfer error (PTE) between the transfer estimate, V_T, and the known estimate, V_P, is a common measure of transfer accuracy:

PTE = 100 × |V_T − V_P| / V_P.

Small transfer errors imply accurate benefit transfers.
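As a minimal sketch (the dollar figures are hypothetical), the PTE computation for a single transfer is simply:

```python
def percentage_transfer_error(transferred, original):
    """Percentage transfer error: 100 * |V_T - V_P| / V_P, where V_T is the
    transferred prediction and V_P is the original policy-site estimate."""
    return 100.0 * abs(transferred - original) / original

# A transferred MWTP of $54 against an original policy-site estimate of $45:
pte = percentage_transfer_error(54.0, 45.0)  # → 20.0
```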
Among other things, reliability hinges on a properly specified transfer function. To see why, suppose a researcher uses a specific parametric travel cost function to estimate the value at the policy site. Consider Equation (1), the general valuation function, estimated, say, from an original travel cost study. The transferred prediction depends on the explicit functional form assumed for the travel cost function. If the transfer function is misspecified or the coefficient estimates are biased, then the transfer estimates will also be biased, compromising accuracy. Note that all estimates contain some bias, and the purpose of investigating validity is to ask if the bias is of such magnitude that it would compromise the use of benefit transfer predictions to support decision-making. There is no bright line to say when an estimate is or is not sufficiently valid (i.e., contains an acceptable amount of bias), so it is important to carefully consider what is known about benefit transfer accuracy and take that information into consideration in any real-world application.
Kaul et al. (2013) investigated the magnitude of benefit transfer prediction errors from 31 convergent validity studies that provided 1,071 transfer error observations. They found that the average error was 172%, which was reduced to 42% when data outliers were removed. Even with data outliers removed, 42% likely overestimates the error for a well-designed and properly conducted benefit transfer prediction. The convergent validity studies that have been undertaken are those where original values happened to be available to support the comparison. None of the studies restricted transfer error computations to conditions under which a transfer would actually be conducted and would have minimal error. From a research perspective, this is not an issue, as investigators wish to probe the limits of transfer applicability. Unfortunately for the practitioner, limits of applicability have yet to be identified; we can simply say that an average prediction error of 42% is an overestimate. The glass-half-full side of the discussion is that Murphy, Allen, Stevens, and Weatherhead (2005) found that original studies contain a median overestimation error of 35%, and the median without outliers for benefit transfer predictions in Kaul et al.’s (2013) study is 33%. Thus, benefit transfer predictions appear to be comparable to what would be observed from an original stated-preference study at a policy site.
Given that benefit transfer predictions contain error (true of any estimation procedure), the study site data is not a random draw from a population of study sites (as discussed above), and the appropriate specification of transfer functions is unknown, it is important to conduct robustness analyses to assess how sensitive the transfer predictions are to changes in the underlying structure of the data and the transfer function. This can take many different forms, but Boyle et al. (2013) provide one avenue to assess sensitivity based on the vertical and horizontal structure of data, which they term due diligence.5 Vertical robustness amounts to checking how sensitive benefit transfer predictions are if a variable is omitted, for example, dropping income from a function transfer. Horizontal robustness captures the sensitivity of benefit transfer predictions if an observation or set of observations (e.g., a study site that provides multiple observations) is omitted from the metadata.
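One way to operationalize these checks is a simple leave-one-out exercise. The sketch below, using invented data and an ordinary-least-squares transfer function (both assumptions for illustration), drops one study at a time (horizontal) and one regressor at a time (vertical) and re-predicts the policy-site value:

```python
import numpy as np

def fit_ols(D, y):
    # Least-squares coefficients of the transfer function.
    return np.linalg.lstsq(D, y, rcond=None)[0]

def horizontal_robustness(D, y, study, x_policy):
    """Re-predict the policy-site value leaving out one study at a time."""
    out = {}
    for j in np.unique(study):
        keep = study != j
        out[int(j)] = float(x_policy @ fit_ols(D[keep], y[keep]))
    return out

def vertical_robustness(D, y, x_policy):
    """Re-predict the policy-site value dropping one regressor at a time."""
    out = {}
    for k in range(1, D.shape[1]):                    # column 0 is the intercept
        cols = [c for c in range(D.shape[1]) if c != k]
        out[k] = float(x_policy[cols] @ fit_ols(D[:, cols], y))
    return out

# Invented metadata: 10 studies with 3 value estimates each.
rng = np.random.default_rng(0)
study = np.repeat(np.arange(10), 3)
D = np.column_stack([np.ones(30), rng.normal(size=(30, 2))])
y = D @ np.array([10.0, 2.0, -1.0]) + rng.normal(0.0, 0.5, 30)
x_policy = np.array([1.0, 0.5, 0.2])

h = horizontal_robustness(D, y, study, x_policy)
v = vertical_robustness(D, y, x_policy)
spread = max(h.values()) - min(h.values())  # small spread => horizontally robust
```

The spread of the leave-one-study-out predictions, and the shift in the prediction when a variable such as income is dropped, are the quantities an analyst would inspect in the due-diligence process described above.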
While robustness is commonly thought of normatively as being good, Boyle et al. (2013) suggest that this is not always the correct interpretation. Due diligence is designed to identify observations, studies, variables, and assumptions that lead to similar (robust) insights. Once the sensitivity analyses have been undertaken across the data, then the importance of robustness can be assessed. For example, a transfer could be horizontally robust simply because it reflects the general collective insights of scientists and represents a stylized fact. Lack of robustness—fragility—could reflect an area where the literature has not yet evolved rather than a transfer estimate that is poor. Further, the sensitive results may be those that best characterize policy site conditions. It is incumbent upon the analyst to identify the variables, studies, and assumptions that are most important for the specific transfer. It is certainly possible that a transfer function may be considered fragile, but the estimates leading to the fragility may be those that are most appropriate for a specific policy site context. Thus, Boyle et al. (2013) stress that due diligence represents a process for organizing and thinking about how study site data, function estimation, and various modeling assumptions influence benefit transfers and making the best choice according to the insights from the robustness analyses vis-à-vis policy site conditions.
Navrud and Brouwer (2007) suggest a categorization of benefit transfer errors. For a transfer error less than 20%, they suggest this is a very good fit between the policy and study sites. Transfer errors within 50% are good fits, while transfer errors within 100% are poor fits between the study and policy sites. Transfer errors larger than 100% are very poor fits, and it is recommended that the primary study data be discarded and only a meta-analysis be used to estimate benefits. These error bounds in benefit transfer predictions need to be viewed with caution, however, as the analyst must also remember to consider the errors that exist in original value estimates. For example, the 20% mentioned by Navrud and Brouwer (2007) is not magical, but ad hoc. In fact, from study to study, and from policy to policy, the level of acceptable error can vary considerably. This can be driven by several considerations such as the impact of the decision, large versus small benefits or costs, the magnitude of net benefits (e.g., errors may need more careful consideration when the net benefit margin is small), or the decision-making context (e.g., more consideration may be needed when many people are affected by a public decision). The following section proposes a more general framework for considering benefit transfer errors in decision-making.
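Navrud and Brouwer's categories can be expressed as a small lookup; the function name and the handling of errors exactly at the 20%, 50%, and 100% boundaries are our assumptions:

```python
def transfer_fit_category(pte):
    """Rough fit categories for a percentage transfer error,
    following Navrud and Brouwer's (2007) suggested bounds."""
    if pte < 20:
        return "very good"
    if pte <= 50:
        return "good"
    if pte <= 100:
        return "poor"
    return "very poor"

# e.g., transfer_fit_category(15) -> "very good"
#       transfer_fit_category(120) -> "very poor"
```

As the surrounding discussion stresses, such bounds are ad hoc; a function like this is a convenient label, not a substitute for application-specific judgment about acceptable error.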
Mitigating Bias in Transfer Predictions
While it is impossible to eliminate all biases in transfer predictions, Boyle et al. (2009) proposed a set of conditions that, when satisfied, will ensure that benefit transfer predictions will be econometrically consistent and common sources of bias minimized. These conditions are referred to as the “4S” conditions:
Separability implies that any conditions unobserved at the study and policy sites must enter the transfer function additively separably from the explanatory variables used to calibrate predictions to policy site conditions, eliminating omitted-variable biases. Specification means that the transfer function is correctly specified in terms of explanatory variables and the functional form of the prediction equation. Sorting implies that individuals do not sort between the study and policy sites based on characteristics that are germane to the policy valuation questions and unobserved by the analyst. Selection requires that the observations used to estimate the transfer function be representative of the population of potential values that might apply to the policy site.
These four conditions, when they hold, are sufficient to ensure that a benefit transfer (be it a value or function transfer) will be consistent in the sense that as more data are obtained, the estimated benefit of a proposed policy will converge to the true benefit. These conditions should not be taken to imply that each one must be satisfied completely and simultaneously. In fact, meeting all four conditions is likely to be impossible in practice. Rather, these conditions provide a basis for an analyst to evaluate the potential credibility of a benefit transfer. When any of these conditions are likely to be violated, this should motivate relevant robustness checks to investigate how benefit transfer predictions might be impacted. The conditions Nelson and Kennedy (2009) suggest for evaluating any meta-function can augment the four Ss and help to identify potential robustness checks that should be implemented.
Challenges in Using Benefit Transfer to Value Ecosystem Services
We suggest that the greatest challenge to analysts attempting to conduct a benefit transfer is finding values that match the identified ecosystem services to be valued. Mapping changes in ecosystems to services valued by humans is a challenge in and of itself, as has been discussed by numerous authors (e.g., Bingham et al., 1995; Boyd & Banzhaf, 2007; Heal et al., 2005; Howarth & Farber, 2002; Turner, Morse-Jones, & Fisher, 2010). If service endpoints have been appropriately identified, original studies can then be designed that map values directly to the changes in services provided. This is not the case for benefit transfers, as the analyst must use existing value estimates that may not match perfectly with the ecosystem services impacted by a policy or other actions. Benefit transfers face a further challenge when valuing changes in ecosystem services—the fact that any action can affect multiple ecosystem services implies that a benefit transfer may include more than one benefit transfer analysis. However, benefit transfers can only go as far as there are services that have been valued in the empirical literature. Even when there are empirical studies available, values or the units of measurement may not be reported consistently across studies or in units that match the units needed at the policy site. These are not insignificant challenges.
One way to appreciate this issue is to consider several meta-analyses of nonmarket values, given that the challenges faced in constructing metadata are the same as those faced in conducting a benefit transfer to value ecosystem services. The variables that ultimately serve as explanatory variables in meta-equations demonstrate the limited domain of values an analyst may have to work with. Here we discuss three studies that have compiled metadata for various ecosystem services. These studies were selected to broadly demonstrate the types of data available to support benefit transfer predictions of ecosystem values.
“Economic Valuation of Regulating Services Provided by Wetlands in Agricultural Landscapes: A Meta-Analysis” (Brander et al., 2013)
This meta-analysis investigates values for ecosystem services provided by wetlands in agricultural landscapes focusing on flood control, water supply, and nutrient recycling services. It focuses on studies conducted in the United States and developing countries using a variety of nonmarket valuation methods and includes:
1. 400 wetland valuation studies
2. 38 that provide estimates of values for the services
3. 66 observations
The ecosystem service variables include:
1. If a constructed wetland valued
2. If water supply valued
3. If water quality valued
4. Study site wetland area (hectares)
5. Wetland abundance within 50 km (hectares)
This study shows that only a small proportion of the existing empirical studies (10%) provided usable value estimates, and there were a small number of usable value observations (66). This study allows values for changes in wetland area—site specific and in the greater area of the site—conditioned on whether the wetland is constructed or not and provides a value for whether water supply or water quality is valued. Values could not be predicted, however, for changes in water supply or changes in water quality, which may be the primary wetland services that humans value.
“Enhanced Geospatial Validity for Meta-analysis and Environmental Benefit Transfer: An Application to Water Quality Improvements” (Johnston et al., 2016)
This meta-analysis investigates changes in surface water quality in the United States. The data consists of:
1. Many stated-preference studies identified but not reported
2. 51 studies conducted in the United States
3. 140 observations
The ecosystem service variables include:
1. If survey respondents from the USDA Northeast
2. If survey respondents from the USDA Midwest or Mountain Plains
3. If survey respondents from the USDA Southeast or Southwest
4. If value for multiple water body types (e.g., lakes and rivers)
5. If value for rivers
6. If changes in swimming uses valued
7. If changes in fishing valued
8. If changes in boating valued
9. Natural log of the proportion of the improved resource area that is agricultural
10. Natural log of sampled area divided by area of counties that intersect improved water resource(s)
11. Natural log of sampled area divided by area of all watersheds that intersect improved water resource(s)
12. Index of the size of the improved water body (defined by shoreline length) relative to the size of the sampled area
13. Proportion of water bodies improved by the water quality change, within affected state(s)
14. Natural log of the change in mean water quality (using a 100-point water quality index)
15. Natural log of the baseline water quality (using a 100-point water quality index)
For this study, it is not possible to address the study and observation shrinkage, but the reader can see that there is a limited number of observations just as for the other two meta-analyses discussed here. This study allows for predicted values based on the size of water bodies, change in water quality, and the types of recreational use. Water quality measured on a 100-point index requires mapping the change in policy site water quality to this index.
“The Implicit Value of Tree Cover in the U.S.: A Meta-Analysis of Hedonic Property Value Studies” (Siriwardena et al., 2016)
This meta-analysis investigated hedonic property value studies that valued forest proximate to single-family residences in the United States. The data consist of:
1. 56 hedonic property value studies
2. 44 studies conducted in the United States
3. 15 studies measuring the forest (ecological) variable in a consistent manner (tree canopy cover)
4. 106 observations with 13 studies providing more than one observation
5. 68 observations usable for estimation
The ecosystem service variables include:
1. Percentage tree cover on or near a property (linear and square terms)
2. Percentage county tree cover (linear and square terms)
3. Percentage of county tree cover aged 40–119 years
4. Percentage of county tree cover aged 120 years or older
5. Presence of invasive forest pathogens
6. Annual days with temperature above 90°F
7. If observation in eastern United States
8. If observation in Pacific Northwest
These summary data demonstrate the limited number of observations available to support transfers, and not all observations are applicable—a 73% reduction in the number of studies and a 36% reduction in observations among the usable studies. The ecological service valued is the aesthetic contribution of tree cover to property value (both near the home and in the county where the home is located), which likely does not include the value of all forest ecosystem services if the tree cover were increased or decreased (e.g., perhaps watershed and carbon storage services). Of note, auxiliary data were used to include county tree cover and the presence of forest pathogens to augment the metadata. However, there is no way to ensure that the full value of all forest ecosystem services is captured in the implicit value of an adjacent forest in the purchase prices of homes.
The current discussion is not intended to be exhaustive of the meta-analyses of ecosystem service valuation studies or the extensive reviews of these three studies. Rather, these three studies were selected to demonstrate the types of data available to support benefit transfer predictions of value changes in ecosystem services and the hurdles one might face in data collection for an alternative amenity. Note that these are applications where there are relatively more data observations than might be observed for many other ecosystem services.
In conducting a benefit transfer prediction for ecosystem services, we make the following recommendations:
1. Search the literature on meta-analyses that address the ecosystem services to be valued. This serves at least two important purposes: it gives the analyst a jump-start on reviewing the literature on empirical value estimates and, if appropriate, may provide the basis for a function transfer for at least some of the services to be valued.
2. Keep in mind that values excluded from a meta-analysis may be the best fit for the transfer prediction. Just because they did not meet the conditions for inclusion in the meta-analysis does not imply that they are not appropriate for the specific transfer prediction. See recommendation (4) below.
3. It is good to review all original studies and not just rely on summary articles. This, again, is useful for two reasons. It helps the analyst ensure that value estimates are appropriate for the policy and can avoid, or at least minimize, double counting in computing aggregate benefit estimates if some value estimates include estimated values for two or more of the services to be valued.
4. Even if a meta-analysis exists, it may be useful to go back to the original data, decide whether some studies/values should be deleted or added to the metadata, and re-estimate the meta-function.
5. Finally, while we have heavily focused on function transfers here, and specifically addressed meta-functions, the analyst valuing ecosystem services may find it useful to consider all four types of transfer approaches discussed. This is because different approaches may be best for valuing different ecosystem services given the available data.
Value transfers may be useful where there are a limited number of value estimates, but study value(s) must closely match or allow simple adjustments to match policy site conditions. The same is true of the transfer of functions from original studies—there may be adjustments to coefficient estimates to enhance the match with policy site conditions. For meta- and preference functions, calibration comes through the assignment of policy site data for the regressors when making the transfer predictions. For more on the nuts and bolts of constructing benefit transfer predictions, the reader is referred to Johnston et al. (2015a) and Rosenberger and Loomis (2017).
Using Benefit Transfers to Support Decision-Making
Once the analyst has calculated the value(s) for the change(s) in ecosystem services at the policy site, the important task of deciding if one should go forward with the policy begins. While one number from any economic analysis is unlikely to be the sole deciding factor in many decision-making processes, economic values can be important inputs; this is true for decisions to protect or enhance ecosystem services and when benefit transfers are used to provide empirical value evidence to support such decision-making.
Given that benefit transfers contain errors associated with the predictions, analysts may want to consider a range of value predictions, culminating with upper and lower bounds on the transfer values, when advising decision-makers. As noted in the introduction to this article, Presidential Executive Order 12866 (1993) requires federal agencies to evaluate changes in public policies based “… on the best reasonably obtainable scientific, technical, economic, and other information” (emphasis added). This closing section addresses how the best reasonable evidence might be presented to decision-makers.
Let us assume, based on Kaul et al. (2013), that benefit transfers have a median error of 35%. This is a large error and must be considered carefully, but there is an approach to explicitly include the potential error in reporting to decision-makers. If benefits exceed costs by more than 35%, then there is a reasonable likelihood that net benefits are positive, and moving forward with the decision is supported by the economic evidence. If, on the other hand, costs exceed benefits by more than 35%, this should be strong evidence that a decision to move forward with an action must be based on considerations other than net benefits being positive, because that is likely not the case. When the difference between benefits and costs falls within the 35% range, the situation becomes tricky because the analyst cannot say with any confidence that benefits do or do not exceed costs—more information is required. If the consequences of the action are substantial, this uncertainty indicates that an original study should be conducted. If the consequences are not large—say the cost of an original study exceeds net benefits—then work to refine the benefit transfer prediction is warranted. In addition, if robustness analyses suggest that the error rate for any specific benefit transfer prediction may exceed 35%, or is much less than 35% for that matter, such evidence should be taken into account when applying these bounds. Further, what constitutes an acceptable transfer error is application specific, and guidance must be developed on a case-by-case basis—the bounding discussion above provides a constructive structure for such a conversation that avoids ad hoc distinctions by the analyst, such as the 20% recommendation discussed earlier.
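This bounding logic can be sketched as a simple decision rule; the 35% band, the function name, and the returned labels are illustrative assumptions:

```python
def decision_guidance(benefits, costs, error_band=0.35):
    """Compare transferred benefits to costs, allowing for an assumed
    35% transfer-error band on either side of the comparison."""
    if benefits > (1.0 + error_band) * costs:
        return "net benefits likely positive"
    if costs > (1.0 + error_band) * benefits:
        return "net benefits likely negative"
    return "inconclusive: refine the transfer or conduct an original study"

# e.g., decision_guidance(150, 100) -> "net benefits likely positive"
```

The `error_band` parameter makes explicit that the band should be widened or narrowed when robustness analyses suggest the error for a specific transfer differs from the assumed median.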
Another approach to addressing this uncertainty may be to consider using a decision framework, such as a “safe minimum standard” (Bishop, 1978), with adjustments on whether the policy goal is to remove a “bad” (e.g., contamination at a Superfund site) or to attain a “good” (e.g., prevention of future global warming effects) and the reversibility of the decision.6
The Future of Benefit Transfer
Over the last three decades benefit transfer has morphed from ad hoc reuse of simple value estimates to an accepted area of scientific inquiry to support practical decision-making. While the basic valuation questions parallel an original valuation study, the application of benefit transfer predictions involves the use of methods unique to this area of valuation. This maturation of benefit transfer has seen the development of a rigorous theoretical construct, data assimilation, and estimation advances to reduce biases and thereby enhance the credibility of transfer predictions. However, as should be clear from our discussion, more work remains.
One key outstanding issue is that benefit transfer predictions cannot be a panacea for a lack of primary studies valuing ecosystem services. Seppelt, Dormann, Eppink, Lautenbach, and Schmidt (2011) identified just over 460 studies conducted between 1990 and 2010. However, of these studies, only 153 were regional case studies, and nearly half of these were conducted in just six countries. The same studies cannot continue to be used over and over because preferences may change, which will influence value estimates, and original valuation methods continue to improve. Thus, original research to estimate ecosystem values is also critically important.
References
Bateman, I. J., & Jones, A. P. (2003). Contrasting conventional with multi-level modeling approaches to meta-analysis: Expectation consistency in UK woodland recreation values. Land Economics, 79(2), 235–258.
Bennett, J. (2006). Introduction. In J. Rolfe & J. Bennett (Eds.), Choice modelling and the transfer of environmental values (pp. 1–9). Cheltenham, U.K.: Edward Elgar.
Bingham, G., Bishop, R., Brody, M., Bromley, D., Clark, E. T., Cooper, W., et al. (1995). Issues in ecosystem valuation: Improving information for decision making. Ecological Economics, 14(2), 73–90.
Bishop, R. C. (1978). Endangered species and uncertainty: The economics of a safe minimum standard. American Journal of Agricultural Economics, 60(1), 10–18.
Bishop, R. C., & Boyle, K. J. (2017). Reliability and validity in nonmarket valuation. In P. A. Champ, K. J. Boyle, & T. C. Brown (Eds.), A primer on nonmarket valuation (pp. 463–497). Dordrecht, NL: Springer.
Boyd, J., & Banzhaf, S. (2007). What are ecosystem services? The need for standardized environmental accounting units. Ecological Economics, 63(2), 616–626.
Boyle, K. J., & Bergstrom, J. C. (1992). Benefit transfer studies: Myths, pragmatism, and idealism. Water Resources Research, 28(3), 657–663.
Boyle, K. J., & Wooldridge, J. M. (2017). Understanding error structures and exploiting panel data in meta-analytic benefit transfers. Environmental and Resource Economics. Forthcoming.
Boyle, K. J., Kaul, S., & Parmeter, C. F. (2014). Meta-analysis: Advances and new perspectives. In R. Johnston, R. Rolfe, R. Rosenberger, & R. Brouwer (Eds.), Benefit transfer of environmental and resource values: A handbook for researchers and practitioners (pp. 383–418). Dordrecht, NL: Springer.
Boyle, K. J., Kuminoff, N. V., Parmeter, C. F., & Pope, J. C. (2009). Necessary conditions for valid benefit transfer. American Journal of Agricultural Economics, 91(5), 1328–1334.
Boyle, K. J., Kuminoff, N. V., Parmeter, C. F., & Pope, J. C. (2010). The benefit-transfer challenges. In G. C. Rausser, V. K. Smith, & D. Zilberman (Eds.), Annual review of resource economics (Vol. 2, pp. 161–182). Malloy.
Boyle, K. J., Parmeter, C. F., Boehlert, B., & Paterson, R. (2013). Due diligence in meta-analyses to support benefit transfer. Environmental and Resource Economics, 55(3), 357–386.
Brander, L., Brouwer, R., & Wagtendonk, A. (2013). Economic valuation of regulating services provided by wetlands in agricultural landscapes: A meta-analysis. Ecological Engineering, 56(1), 89–96.
Brander, L. M., Eppink, F. V., Schagner, P., van Beukering, P. J. H., & Wagtendonk, A. (2015). GIS-based mapping of ecosystem services: The case of coral reefs. In R. Johnston, R. Rolfe, R. Rosenberger, & R. Brouwer (Eds.), Benefit transfer of environmental and resource values: A handbook for researchers and practitioners (pp. 465–485). Dordrecht, NL: Springer.
Brander, L. M., Wagtendonk, A. J., Hussain, S. S., McVittie, A., Verburg, P. H., de Groot, R. S., et al. (2012). Ecosystem service values for mangroves in Southeast Asia: A meta-analysis and value transfer application. Ecosystem Services, 11(1), 62–69.
Burkhard, B., Kroll, F., Nedkov, S., & Müller, F. (2012). Mapping ecosystem service supply, demand and budgets. Ecological Indicators, 21(1), 17–29.
Carmines, E. G., & Zeller, R. A. (1979). Reliability and validity assessment. Newbury Park, CA: SAGE.
Carson, R. T., & Groves, T. (2007). Incentive and informational properties of preference questions. Environmental and Resource Economics, 37(1), 181–210.
Champ, P. A., Boyle, K. J., & Brown, T. C. (2017). A primer on nonmarket valuation. Dordrecht, The Netherlands: Kluwer.
Desvousges, W. H., Naughton, M. C., & Parsons, G. R. (1992). Benefits transfer: Conceptual problems in estimating water quality benefits using existing studies. Water Resources Research, 28(3), 675–683.
Diamond, P. A., & Hausman, J. A. (1994). Contingent valuation: Is some number better than no number? Journal of Economic Perspectives, 8(4), 45–64.
Fitzpatrick, L., Parmeter, C. F., & Agar, J. (2017). Threshold effects in meta-analyses with application to benefit transfer for coral reef valuation. Ecological Economics, 133(1), 74–85.
Ghermandi, A., & Nunes, P. A. (2013). A global map of coastal recreation values: Results from a spatially explicit meta-analysis. Ecological Economics, 86(1), 1–15.
Haab, T. C., Interis, M. G., Petrolia, D. R., & Whitehead, J. C. (2013). From hopeless to curious? Thoughts on Hausman’s “dubious to hopeless” critique of contingent valuation. Applied Economic Perspectives and Policy, 35(4), 593–612.
Heal, G. M., Barbier, E. B., Boyle, K. J., Covich, A. P., Gloss, S. P., Hershner, C. H., et al. (2005). Valuing ecosystem services. Washington, DC: The National Academies Press.
Holmes, T., Adamowicz, W., & Carlsson, F. (2017). Choice experiments. In P. A. Champ, K. J. Boyle, & T. C. Brown (Eds.), A primer on nonmarket valuation (pp. 133–186). Dordrecht, NL: Springer.
Howarth, R. B., & Farber, S. (2002). Accounting for the value of ecosystem services. Ecological Economics, 41(3), 421–429.
Johnston, R. J., Besedin, E. Y., Iovanna, R., Miller, C. J., Wardwell, R. F., & Ranson, M. R. (2005). Systematic variation in willingness to pay for aquatic resource improvements and implications for benefit transfer: A meta-analysis. Canadian Journal of Agricultural Economics, 53(3), 221–248.
Johnston, R. J., Besedin, E. Y., & Ranson, M. H. (2006). Characterizing the effects of valuation methodology in function-based benefits transfer. Ecological Economics, 60(1), 407–419.
Johnston, R. J., Besedin, E. Y., & Stapler, R. (2016). Enhanced geospatial validity for meta-analysis and environmental benefit transfer: An application to water quality improvements. Environmental and Resource Economics. Available at https://link.springer.com/article/10.1007/s10640-016-0021-7.
Johnston, R. J., Boyle, K. J., Adamowicz, W., Bennett, J., Brouwer, R., Cameron, T. A., et al. (2017). Contemporary guidance for stated preference studies. Journal of the Association of Environmental and Resource Economists, 4(2), 319–405.
Johnston, R. J., Rolfe, J., Rosenberger, R. S., & Brouwer, R. (2015a). Benefit transfer of environmental and resource values: A guide for researchers and practitioners. New York: Springer.
Johnston, R. J., Rolfe, J., Rosenberger, R. S., & Brouwer, R. (2015b). Introduction to benefit transfer of environmental and resource values. In R. J. Johnston, J. Rolfe, R. S. Rosenberger, & R. Brouwer (Eds.), Benefit transfer of environmental and resource values: A guide for researchers and practitioners (pp. 3–18). Dordrecht, NL: Springer.
Kaul, S., Boyle, K. J., Kuminoff, N. V., Parmeter, C. F., & Pope, J. C. (2013). What can we learn from benefit transfer errors? Evidence from 20 years of research on convergent validity. Journal of Environmental Economics and Management, 66(1), 90–104.
Londoño, L. M., & Johnston, R. J. (2012). Enhancing the reliability of benefit transfer over heterogeneous sites: A meta-analysis of international coral reef values. Ecological Economics, 78(1), 80–89.
Loomis, J., Kent, P., Strange, L., Fausch, K., & Covich, A. (2000). Measuring the total economic value of restoring ecosystem services in an impaired river basin: Results from a contingent valuation survey. Ecological Economics, 33(1), 103–117.
Morse-Jones, S., Bateman, I. J., Kontoleon, A., Ferrini, S., Burgess, N. D., & Turner, R. K. (2012). Stated preferences for tropical wildlife conservation amongst distant beneficiaries: Charisma, endemism, scope and substitution effects. Ecological Economics, 78(1), 9–18.
Murphy, J. J., Allen, P. G., Stevens, T. H., & Weatherhead, D. (2005). A meta-analysis of hypothetical bias in stated preference valuation. Environmental and Resource Economics, 30(3), 313–325.
Navrud, S., & Brouwer, R. (2007). Good practice guidelines in benefit transfer of forest externalities. COST Action E45, European Forest Externalities (EUROFOREX).
Nelson, J. P., & Kennedy, P. E. (2009). The use (and abuse) of meta-analysis in environmental and resource economics: An assessment. Environmental and Resource Economics, 42(3), 345–377.
Parsons, G. R. (2017). Travel cost. In P. A. Champ, K. J. Boyle, & T. C. Brown (Eds.), A primer on nonmarket valuation (pp. 187–234). Dordrecht, NL: Springer.
Richardson, L., & Loomis, J. (2009). The total economic value of threatened, endangered and rare species: An updated meta-analysis. Ecological Economics, 68(5), 1535–1548.
Rolfe, J., Brouwer, R., & Johnston, R. J. (2015). Meta-analysis: Rationale, issues, and applications. In R. J. Johnston, J. Rolfe, R. S. Rosenberger, & R. Brouwer (Eds.), Benefit transfer of environmental and resource values: A guide for researchers and practitioners (pp. 357–382). Dordrecht, NL: Springer.
Rosenberger, R. S., & Loomis, J. B. (2017). Benefit transfer. In P. A. Champ, K. J. Boyle, & T. C. Brown (Eds.), A primer on nonmarket valuation (pp. 431–462). Dordrecht, NL: Springer.
Rosenberger, R. S., & Stanley, T. D. (2006). Measurement, generalization, and publication: Sources of error in benefits transfers and their management. Ecological Economics, 60(2), 372–378.
Seppelt, R., Dormann, C. F., Eppink, F. V., Lautenbach, S., & Schmidt, S. (2011). A quantitative review of ecosystem service studies: Approaches, shortcomings and the road ahead. Journal of Applied Ecology, 48, 630–636.
Silver, N. (2012). The signal and the noise: Why so many predictions fail--but some don’t. New York: Penguin.
Siriwardena, S. D., Boyle, K. J., Holmes, T. P., & Wiseman, P. E. (2016). The implicit value of tree cover in the US: A meta-analysis of hedonic property value studies. Ecological Economics, 128(1), 68–76.
Smith, V. K., & Kaoru, Y. (1990). Signal or noise? Explaining the variation in recreation benefit estimates. American Journal of Agricultural Economics, 72(2), 419–433.
Smith, V. K., van Houtven, G., & Pattanayak, S. K. (2000). Benefit transfer via preference calibration: “Prudential algebra” for policy. Land Economics, 78(1), 132–152.
Stapler, R. W., & Johnston, R. J. (2009). Meta-analysis, benefit transfer, and methodological covariates: Implications for transfer error. Environmental and Resource Economics, 42(2), 227–246.
Troy, A., & Wilson, M. A. (2006). Mapping ecosystem services: Practical challenges and opportunities in linking GIS and value transfer. Ecological Economics, 60(2), 435–449.
Turner, R. K., Morse-Jones, S., & Fisher, B. (2010). Ecosystem valuation. Annals of the New York Academy of Sciences, 1185(1), 79–101.
U.S. Department of Agriculture. (2005). Conservation security program: Amendment to the interim final rule benefit cost assessment. Federal Register, 69 FR 34501, 34501–34532.
U.S. Environmental Protection Agency. (1999). The benefits and costs of the Clean Air Act 1990 to 2010. EPA-410-R-99-001.
U.S. Environmental Protection Agency. (2000). Guidelines for preparing economic analyses. EPA 240-R-00-003.
U.S. Environmental Protection Agency. (2014). Guidelines for preparing economic analyses.
U.S. Office of Management and Budget. (2003). Circular A-4: Regulatory analysis.
U.S. Office of the President. (1993). Executive Order 12866: Regulatory planning and review. Federal Register, 58(190).
Wheeler, W. J. (2015). Benefit transfer for water quality regulatory rulemaking in the United States. In R. J. Johnston, J. Rolfe, R. S. Rosenberger, & R. Brouwer (Eds.), Benefit transfer of environmental and resource values: A guide for researchers and practitioners (pp. 101–115). Dordrecht, NL: Springer.
Woodward, R. T., & Wui, Y. (2001). The economic value of wetland services: A meta-analysis. Ecological Economics, 37(2), 257–270.
Zandersen, M., Termansen, M., & Jensen, F. S. (2007). Testing benefits transfer of forest recreation values over a twenty-year time horizon. Land Economics, 83(3), 412–440.
(1.) Values refer to a gain (benefit) or loss (cost) experienced by people. A benefit transfer is the transfer of existing values to estimate either benefits or costs depending on the policy action being evaluated.
(2.) Policy and study sites can be the same if value estimates are used to support decision-making at the same site but the policy question differs from the original valuation context. However, in most cases, study sites differ from the policy site.
(5.) Their focus is on benefit transfer based on meta-analysis, but their main intuition holds across all forms of benefit transfer.