Article

Dimitris Korobilis and Davide Pettenuzzo

Bayesian inference in economics is primarily perceived as a methodology for cases where the data are short, that is, not informative enough to obtain reliable econometric estimates of the quantities of interest. In these cases, prior beliefs, such as the experience of the decision-maker or results from economic theory, can be explicitly incorporated into the econometric estimation problem and enhance the desired solution. In contrast, in fields such as computer science and signal processing, Bayesian inference and computation have long been used for tackling challenges associated with ultra-high-dimensional data. Such fields have developed several novel Bayesian algorithms that have gradually been established in mainstream statistics and now hold a prominent position in machine learning applications across numerous disciplines. While traditional Bayesian algorithms are powerful enough to allow for the estimation of very complex problems (for instance, nonlinear dynamic stochastic general equilibrium models), they cannot cope computationally with the demands of rapidly growing economic data sets. Bayesian machine learning algorithms can provide rigorous and computationally feasible solutions to various high-dimensional econometric problems, thus supporting modern decision-making in a timely manner.
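
As a minimal illustration of the mechanism described above (not drawn from the article itself), the Python sketch below shows how a Gaussian shrinkage prior makes an otherwise ill-posed high-dimensional regression estimable: with more predictors than observations, least squares is undefined, but the conjugate posterior remains well defined. All names and parameter values are hypothetical.

```python
import numpy as np

def bayesian_ridge_posterior(X, y, noise_var=1.0, prior_var=1.0):
    """Posterior mean/covariance for y = X@beta + e, e ~ N(0, noise_var*I),
    under the shrinkage prior beta ~ N(0, prior_var*I)."""
    p = X.shape[1]
    prec = X.T @ X / noise_var + np.eye(p) / prior_var  # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y) / noise_var
    return mean, cov

# Short, wide data: 40 observations, 100 predictors (OLS is infeasible here).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 100))
beta_true = np.zeros(100)
beta_true[:5] = 2.0                       # sparse signal
y = X @ beta_true + rng.standard_normal(40)

mean, _ = bayesian_ridge_posterior(X, y, prior_var=0.5)
print(mean[:5].round(2))                  # nonzero coefficients recovered,
                                          # noise coefficients shrunk toward 0
```

The prior precision term plays exactly the regularizing role that the abstract attributes to prior beliefs when data are short relative to the number of parameters.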

Article

Syed Abdul Hamid

Health microinsurance (HMI) has been used around the globe since the early 1990s for financial risk protection against health shocks in poverty-stricken rural populations in low-income countries. However, there is much debate in the literature on its impact on financial risk protection. There is also no clear answer to the critical policy question of whether HMI is a viable route to provide healthcare to people in the informal economy, especially in rural areas. Findings show that HMI schemes are concentrated in low-income countries, especially in South Asia (about 43%) and East Africa (about 25.4%). India accounts for 30% of HMI schemes, and Bangladesh and Kenya also host a substantial number. There is some evidence that HMI increases access to, or utilization of, healthcare. One strand of the literature shows that HMI provides financial protection against the costs of illness to its enrollees by reducing out-of-pocket payments and/or catastrophic spending. In contrast, a large body of methodologically rigorous literature shows that HMI fails to provide financial protection against health shocks to its clients; some studies in this group even find that HMI contributes to a decline in financial risk protection. These findings are plausible, as most schemes involve high copayments and lack a continuum of care. The findings also show that scale and dependence on subsidy are the major concerns. Low enrollment and low renewal are common problems for voluntary HMI schemes in South Asian countries. In addition, the declining trend in donor subsidies makes HMI schemes supported by external donors more vulnerable. These challenges and constraints restrict the scale and profitability of HMI initiatives, especially voluntary ones, and existing organizations may consequently cease HMI activities. Overall, although HMI can increase access to healthcare, it fails to provide financial risk protection against health shocks. The existing HMI practices in South Asia, especially schemes owned by nongovernmental organizations and microfinance institutions, are not a viable route to provide healthcare to the rural population of the informal economy. However, HMI schemes may play a supportive role in the implementation of a nationalized scheme, if one exists. There is also concern about the institutional viability of HMI organizations (e.g., ownership and management efficiency); future research may address this issue.

Article

José Luis Pinto-Prades, Arthur Attema, and Fernando Ignacio Sánchez-Martínez

Quality-adjusted life years (QALYs) are one of the main health outcome measures used to make health policy decisions, and it is assumed that the objective of policymakers is to maximize QALYs. Since the QALY weights life years according to their health-related quality of life, those weights (also called utilities) must be estimated in order to calculate the number of QALYs produced by a medical treatment. The methodology most commonly used to estimate utilities is to present standard gamble (SG) or time trade-off (TTO) questions to a representative sample of the general population; it is assumed that, in this way, utilities reflect public preferences. Two assumptions must hold for utilities to be a valid representation of public preferences. The first is that the standard (linear) QALY model is a good model of how subjects value health. The second is that subjects have consistent preferences over health states. The evidence suggests that most of the main assumptions of the popular linear QALY model do not hold, although a modification of the linear model can be a tractable improvement. This implies that utilities elicited under the assumption that the linear QALY model holds may be biased. The second assumption, namely that subjects have consistent preferences that can be estimated by asking SG or TTO questions, does not seem to hold either. Subjects are sensitive to features of the elicitation process (such as the order of questions or the type of task) that should not matter for estimating utilities. The response patterns produced by TTO and SG questions posed to members of the general population do not agree with the assumption that subjects hold well-defined preferences over health states. Two approaches can deal with this problem. One is based on the assumption that subjects have true but biased preferences, and that true preferences can be recovered from biased ones; this approach is valid as long as the theory used to debias is correct. The second approach is based on the idea that preferences are imprecise. In practice, national bodies use utilities elicited with TTO or SG under the assumptions that the linear QALY model is a good enough representation of public preferences and that subjects' responses to preference elicitation methods are coherent.
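
For concreteness (notation ours, not the authors'), the linear QALY model and the standard TTO and SG elicitation logic referred to above can be written as:

```latex
% Linear QALY model: T years in health state q are valued as
V(q,T) = u(q)\,T, \qquad u(\mathrm{full\ health}) = 1, \quad u(\mathrm{dead}) = 0

% TTO: indifference between T years in state q and x (< T) years in full health
u(q)\,T = x \cdot 1 \;\Rightarrow\; u(q) = x/T

% SG: indifference between state q for certain and a gamble giving full health
% with probability p and immediate death with probability 1-p (expected utility)
u(q) = p \cdot 1 + (1-p)\cdot 0 = p
```

The biases discussed in the abstract arise precisely when responses violate the linearity in T assumed in the first line or the expected utility logic assumed in the last.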

Article

Denzil G. Fiebig and Hong Il Yoo

Stated preference methods are used to collect individual-level data on what respondents say they would do when faced with a hypothetical but realistic situation. The hypothetical nature of the data has long been a source of concern among researchers as such data stand in contrast to revealed preference data, which record the choices made by individuals in actual market situations. But there is considerable support for stated preference methods as they are a cost-effective means of generating data that can be specifically tailored to a research question and, in some cases, such as gauging preferences for a new product or non-market good, there may be no practical alternative source of data. While stated preference data come in many forms, the primary focus in this article is data generated by discrete choice experiments, and thus the econometric methods will be those associated with modeling binary and multinomial choices with panel data.
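
As a hedged illustration (the article surveys many estimators; this is only the canonical starting point), McFadden's conditional logit is the workhorse model for such multinomial choice data. The Python sketch below simulates a small discrete choice experiment and recovers the taste parameters by maximum likelihood; all names and values are ours, not the authors'.

```python
import numpy as np
from scipy.optimize import minimize

def clogit_loglik(beta, X, y):
    """Log-likelihood of the conditional logit.
    X: (n, J, K) alternative attributes; y: (n,) index of chosen alternative."""
    v = X @ beta                          # (n, J) systematic utilities
    v -= v.max(axis=1, keepdims=True)     # guard against overflow
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return logp[np.arange(len(y)), y].sum()

# Simulated stated-choice data: 500 choice sets, 3 alternatives, 2 attributes.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3, 2))
beta_true = np.array([1.0, -0.5])
u = X @ beta_true + rng.gumbel(size=(500, 3))  # Gumbel errors => logit choices
y = u.argmax(axis=1)

res = minimize(lambda b: -clogit_loglik(b, X, y), x0=np.zeros(2))
print(res.x.round(2))                     # estimates close to beta_true
```

Panel extensions of this model (mixed logit, latent class) relax the assumption of a single taste vector across respondents, which is the direction the abstract's reference to panel data points toward.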

Article

Ching-mu Chen and Shin-Kun Peng

For research investigating why economic activities are distributed unevenly across geographic space, new economic geography (NEG) provides a general-equilibrium-based and microfounded approach to modeling a spatial economy characterized by a large variety of economic agglomerations. NEG emphasizes how agglomeration (centripetal) and dispersion (centrifugal) forces interact to generate observed spatial configurations and uneven distributions of economic activity. However, many economic geographers prefer the term new economic geographies for the vigorous and diversified body of work inspired by the institutional-cultural turn of economic geography; accordingly, the term geographical economics has been suggested as an alternative to NEG. Modeling a spatial economy within a general equilibrium framework has not only rendered existing concepts amenable to empirical scrutiny and policy analysis but also drawn economic geography and location theories from the periphery to the center of mainstream economic theory. Reduced-form empirical studies have attempted to test certain implications of NEG, but because of NEG's simplified geographic settings, the models cannot easily be applied to observed data. The recent development of quantitative spatial models based on the mechanisms formalized by earlier NEG theories has been a breakthrough in building an empirically relevant framework for counterfactual policy exercises. If quantitative spatial models can connect with observed data in an empirically meaningful manner, they can enable the decomposition of key theoretical mechanisms and afford specificity in evaluating the general equilibrium effects of policy interventions in particular settings. In the decades since its proposal, NEG has been criticized for its parsimonious assumptions about the economy across space and time. The remaining challenges therefore call for theoretical and quantitative models built on new microfoundations for the interactions between economic agents across geographical space and the relationship between geography and economic development.
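
As one concrete instance of the agglomeration and dispersion mechanisms described above (a standard Krugman-type system with constants suppressed, not the authors' own model), quantitative spatial models typically solve a gravity-type equilibrium in wages and populations:

```latex
% Market clearing in a one-sector Krugman-type spatial economy:
% income of location i equals its sales to all markets j
w_i L_i \;=\; \sum_j
  \frac{L_i\,(w_i \tau_{ij})^{1-\sigma}}
       {\sum_k L_k\,(w_k \tau_{kj})^{1-\sigma}}\; w_j L_j,
\qquad \sigma > 1, \quad \tau_{ij} \ge 1
```

Here the tau terms are iceberg trade costs and sigma is the elasticity of substitution; proximity to large markets raises the numerator (the home-market, centripetal force), while the denominator is the competition or price-index term through which the dispersion force operates. Connecting such a system to observed trade and population data is what enables the counterfactual policy exercises mentioned above.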