Widely used modified least squares estimators for estimation and inference in cointegrating regressions are discussed. The standard case with cointegration in the I(1) setting is examined and some relevant extensions are sketched. These include cointegration analysis with panel data as well as nonlinear cointegrating relationships. Extensions to higher order (co)integration, seasonal (co)integration and fractional (co)integration are very briefly mentioned. Recent developments and some avenues for future research are discussed.
Mental illnesses are highly prevalent and can have considerable, enduring consequences for individuals, families, communities, and economies. Despite these high prevalence rates, mental illnesses have not received as much public policy commitment or funding as might be expected. One result is that mental illness often goes unrecognized and untreated. The resultant costs are felt not only in healthcare systems, but across many other sectors, including housing, social care, criminal justice, welfare benefits, and employment.
This article sets out the basic principles of economic evaluation, with illustrations in this mental health context. It also discusses the main practical challenges when conducting and interpreting evidence from such evaluations.
Decisions about whether to spend resources on a treatment or prevention strategy are based on whether it is likely to be effective in avoiding, reducing, or curing symptoms, improving quality of life, or achieving other individual-level outcomes. The economic evaluation question is whether the outcomes achieved are sufficient to justify the cost that is incurred in delivering the intervention.
An economic evaluation has five elements: clarification of the question to be addressed; specification of the intervention to be evaluated and of the alternative with which it is being compared; the outcomes to be measured; the costs to be measured (including the cost of implementing the intervention and any savings that might accrue); and, finally, how outcome and cost findings are to be combined to make a recommendation to the decision-maker. Sometimes, if an evaluation finds that one intervention has better outcomes but higher costs, then the evaluation should also show how one (the outcomes) might be traded off against the other (the costs).
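A standard way to express that trade-off is the incremental cost-effectiveness ratio (ICER), the extra cost incurred per extra unit of outcome gained. The sketch below shows the calculation with purely illustrative numbers, not figures from any study discussed here.

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra
# unit of outcome (e.g., per QALY gained) of an intervention relative
# to its comparator. All numbers are illustrative.

cost_new, cost_old = 12_000.0, 9_000.0   # cost per patient
qaly_new, qaly_old = 6.2, 5.9            # outcome (QALYs) per patient

icer = (cost_new - cost_old) / (qaly_new - qaly_old)
print(f"ICER: {icer:,.0f} per QALY gained")  # 10,000 per QALY gained

# A decision-maker would typically fund the intervention if the ICER
# falls below its willingness-to-pay threshold for one unit of outcome.
threshold = 20_000.0
print("Fund intervention:", icer < threshold)  # True
```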
The article illustrates how economic evaluations have been undertaken and employed to address a range of questions, from broad strategic issues to more specific clinical questions. The purpose of the study can, to some extent, determine the type of evaluation that is needed.
Examples of evaluations are given in a number of areas: perinatal maternal mental illness; parenting programs for conduct disorder; anti-bullying programs in schools; early intervention services for psychosis; individual placement and support; collaborative care for physical health problems; and suicide prevention. The challenges of economic evaluation are discussed, specifically in the mental health field.
Knut Are Aastveit, James Mitchell, Francesco Ravazzolo, and Herman K. van Dijk
Increasingly, professional forecasters and academic researchers in economics present model-based and subjective or judgment-based forecasts that are accompanied by some measure of uncertainty. In its most complete form this measure is a probability density function for future values of the variable or variables of interest. At the same time, combinations of forecast densities are being used in order to integrate information coming from multiple sources such as experts, models, and large micro-data sets. Given the increased relevance of forecast density combinations, this article explores their genesis and evolution both inside and outside economics. A fundamental density combination equation is specified, which shows that various frequentist as well as Bayesian approaches give different specific contents to this density. In its simplest case, it is a restricted finite mixture, giving fixed equal weights to the various individual densities. The specification of the fundamental density combination equation has been made more flexible in recent literature. It has evolved from using simple average weights to optimized weights to “richer” procedures that allow for time variation, learning features, and model incompleteness. The recent history and evolution of forecast density combination methods, together with their potential and benefits, are illustrated in the policymaking environment of central banks.
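In generic notation (assumed here for illustration, not necessarily the article's), the fundamental density combination equation described above can be written as a weighted linear pool:

```latex
% Combination of N individual forecast densities p_i(y_{t+1} | I_t),
% given information set I_t, with weights summing to one:
p(y_{t+1} \mid I_t) \;=\; \sum_{i=1}^{N} w_{i,t}\, p_i(y_{t+1} \mid I_t),
\qquad \sum_{i=1}^{N} w_{i,t} = 1 .
% The simplest restricted finite mixture fixes w_{i,t} = 1/N; richer
% specifications let the weights vary over time, be learned from past
% predictive performance, and allow for model incompleteness.
```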
Ingela Alger and Donald Cox
Which parent can be expected to be more altruistic toward their child, the mother or father? All else equal, can we expect older generation members to be more solicitous of younger family members or vice versa? Policy interventions often target recipients by demographic status: more money being put in the hands of mothers, say, or transfers of income from young to old via public pensions. Economics makes predictions about pecuniary incentives and behavior but tends to be agnostic about how, say, a post-menopausal grandmother might behave, just because she is a post-menopausal grandmother. Evolutionary theory fills this gap by analyzing how preferences of family members emerge from the Darwinian exigencies of “survive and reproduce.” The coin of the realm is so-called “inclusive fitness”: the reproductive success of oneself plus that of relatives, weighted by closeness of the relationship. Appending basic biological traits onto considerations of inclusive fitness generates predictions about preferences of family members. A post-menopausal grandmother with a daughter just starting a family is predicted to care more about her daughter than the daughter cares about her, for example. Evolutionary theory predicts that mothers tend to be more altruistic toward children than fathers, and that close relatives would be inclined to provide more support to one another than distant relatives. An original case study is provided, which explains the puzzle of diverging marriage rates by education in terms of heterogeneity in preferences for commitment. Economists are justifiably loath to invoke preferences to explain trends, since preference-based explanations can be concocted to explain just about anything. But the evolutionary approach does not permit just any invocation of preferences. The dictates of “survive and reproduce” sharply circumscribe the kinds of preference-related arguments that are admissible.
Eduardo Levy Yeyati
While traditional economic literature often treats nominal variables as irrelevant for the real economy, a vast body of analytical and empirical work recognizes that exchange rate policies (ERP), because they exert a critical influence on the macroeconomic environment through a multiplicity of channels, have important consequences for development.
ERP influences economic development in various ways: through its incidence on real variables such as investment and growth (and growth volatility) and on nominal aspects such as relative prices or financial depth that, in turn, affect output growth or income distribution, among other development goals. Additionally, ERP, through the expected distribution of the real exchange rate, indirectly influences dimensions such as trade or financial fragility and explains, at least partially, the adoption of the euro—an extreme case of a fixed exchange rate arrangement—or the preference for floating exchange rates in the absence of financial dollarization. Importantly, exchange rate pegs have been (and, in many countries, still are) widely used as a nominal anchor to contain inflation in economies where nominal volatility induces agents to use the exchange rate as an implicit unit of account. All of these channels have been reflected to varying degrees in the choice of exchange rate regimes in recent history.
The empirical literature on the consequences of ERP has been plagued by definitional and measurement problems. Whereas few economists would contest the textbook definition of canonical exchange rate regimes (fixed regimes involve a commitment to keep the nominal exchange rate at a given level; floating regimes imply no market intervention by the monetary authorities), reality is more nuanced: Pure floats are hard to find, and the empirical distinction between alternative flexible regimes is not always clear. Moreover, there are many different degrees of exchange rate commitments as well as many alternative anchors, sometimes undisclosed. Finally, it is not unusual for a country that officially declares a peg to realign its parity if it finds the constraints on monetary policy or economic activity too taxing. By the same token, a country that commits to a float may choose to intervene in the foreign exchange market to dampen exchange rate fluctuations.
The regime of choice depends critically on the situation of each country at a given point in time as much as on the evolution of the global environment. Because both the ERP debate and real-life choices incorporate national and time-specific aspects that tend to evolve over time, so does the changing focus of the debate. In the post-World War II years, under the Bretton Woods agreement, most countries pegged their currencies to the U.S. dollar, which in turn was kept convertible to gold. In the post-Bretton Woods years, after August 1971 when the United States abandoned unilaterally the convertibility of the dollar, thus bringing the Bretton Woods system to an end, the individual choices of ERP were intimately related to the global and local historical contexts, according to whether policy prioritized the use of the exchange rate as a nominal anchor (in favor of pegged or superfixed exchange rates, with dollarization or the launch of the euro as two extreme examples), as a tool to enhance price competitiveness (as in export-oriented developing countries like China in the 2000s), or as a countercyclical buffer (in favor of floating regimes with limited intervention, the prevalent view in the developed world). Similarly, the declining degree of financial dollarization, combined with the improved quality of monetary institutions, explains the growing popularity of inflation targeting with floating exchange rates in emerging economies. Finally, a prudential leaning-against-the-wind intervention to counter mean-reverting global financial cycles and exchange rate swings motivates a more active—and increasingly mainstream—ERP in the late 2000s.
The fact that most medium and large developing economies (and virtually all industrial ones) revealed in the 2000s a preference for exchange rate flexibility simply reflects this evolution. Is the combination of inflation targeting (IT) and countercyclical exchange rate intervention a new paradigm? It is still too early to judge. On the one hand, pegs are still in place in more than half of the IMF-reporting countries—particularly small ones—indicating that exchange rate anchors are still favored by small open economies that give priority to the trade dividend of stable exchange rates and find the conduct of an autonomous monetary policy too costly, due to lack of human capital, scale, or an important non-tradable sector. On the other hand, the work and the empirical evidence on the subject, particularly after the recession of 2008–2009, highlight a number of developments in the way advanced and emerging economies think of the impossible trinity that, in a context of deepening financial integration, cast doubt on the IT paradigm, place the dilemma between nominal and real stability back at the forefront, and postulate an IT 2.0, which includes selective exchange rate interventions as a workable compromise. At any rate, the exchange rate debate is still alive and open.
The uncovered interest parity (UIP) condition states that the interest rate differential between two currencies equals the expected rate of change of their exchange rate. Empirically, however, in the 1976–2018 period, exchange rate changes were approximately unpredictable over short horizons, with a slight tendency for currencies with higher interest rates to appreciate against currencies with lower interest rates. If the UIP condition held exactly, carry trades, in which investors borrow low interest rate currencies and lend high interest rate currencies, would earn zero average profits. The fact that UIP is violated, therefore, is a necessary condition to explain the fact that carry trades earned significantly positive profits in the 1976–2018 period. A large literature has documented the failure of UIP, as well as the profitability of carry trades, and is surveyed here. Additionally, summary evidence is provided here for the G10 currencies. This evidence shows that carry trades have been significantly less profitable since 2007–2008, and that there was an apparent structural break in exchange rate predictability around the same time.
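To make the mechanics concrete, here is a minimal sketch of the UIP prediction and of a one-period carry trade return; the interest rates and exchange rates are illustrative numbers, not the article's data.

```python
# Uncovered interest parity (UIP): the interest differential equals the
# expected change in the exchange rate (here s = domestic price of the
# high-rate currency). A carry trade borrows the low-rate currency and
# lends the high-rate one. All numbers are illustrative.

i_low, i_high = 0.01, 0.05            # one-period interest rates
s0 = 1.00                             # spot price of high-rate currency

# Under (exact) UIP the high-rate currency is expected to depreciate
# just enough to wipe out the interest differential:
s1_uip = s0 * (1 + i_low) / (1 + i_high)
carry_uip = (1 + i_high) * s1_uip / s0 - (1 + i_low)
print(f"carry profit if UIP holds: {carry_uip:+.4f}")   # exactly 0

# Empirically, exchange rates were roughly unpredictable at short
# horizons; suppose the rate simply stays put (s1 = s0):
s1_flat = s0
carry_flat = (1 + i_high) * s1_flat / s0 - (1 + i_low)
print(f"carry profit if rate is flat: {carry_flat:+.4f}")  # +0.0400
```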
A large theoretical literature explores economic explanations of this phenomenon and is briefly surveyed here. Prominent among the theoretical models are ones based on risk aversion, peso problems, rare disasters, biases in investor expectations, information frictions, incomplete financial markets, and financial market segmentation.
Johanna Gereke and Klarita Gërxhani
Experimental economics has moved beyond the traditional focus on market mechanisms and the “invisible hand” by applying sociological and socio-psychological knowledge in the study of rationality, markets, and efficiency. This knowledge includes social preferences, social norms, and cross-cultural variation in motivations. In turn, the renewed interest in causation, social mechanisms, and middle-range theories in sociology has led to a renaissance of research employing experimental methods. This includes laboratory experiments but also a wide range of field experiments with diverse samples and settings. By focusing on a set of research topics that have proven to be of substantive interest to both disciplines—cooperation in social dilemmas, trust and trustworthiness, and social norms—this article highlights innovative interdisciplinary research that connects experimental economics with experimental sociology. Experimental economics and experimental sociology can still learn much from each other, providing economists and sociologists with an opportunity to collaborate and advance knowledge on a range of underexplored topics of interest to both disciplines.
Gerard J. van den Berg and Maarten Lindeboom
Modern-day famines are caused by unusual impediments or interventions in society, effectively imposing severe market restrictions and preventing the free movement of people and goods. Long-run health effects of exposure to famine are commonly studied to obtain insights into the long-run effects of malnutrition at early ages. This line of research has faced major methodological and data challenges. Recent research in various disciplines, such as economics, epidemiology, and demography, has made great progress in dealing with these issues. Malnutrition around birth affects a range of later-life individual outcomes, including health, educational, and economic outcomes.
Alfred Duncan and Charles Nolan
In recent decades, macroeconomic researchers have looked to incorporate financial intermediaries explicitly into business-cycle models. These modeling developments have helped us to understand the role of the financial sector in the transmission of policy and external shocks into macroeconomic dynamics. They also have helped us to understand better the consequences of financial instability for the macroeconomy. Large gaps remain in our knowledge of the interactions between the financial sector and macroeconomic outcomes. Specifically, the effects of financial stability and macroprudential policies are not well understood.
Financial protection is claimed to be an important objective of health policy. Yet there is a lack of clarity about what it is and no consensus on how to measure it. This impedes the design of efficient and equitable health financing. Arguably, the objective of financial protection is to shield nonmedical consumption from the cost of healthcare. The instruments are formal health insurance and public finances, as well as informal and self-insurance mechanisms that do not impair earnings potential. There are four main approaches to the measurement of financial protection: the extent of consumption smoothing over health shocks, the risk premium (willingness to pay in excess of a fair premium) to cover uninsured medical expenses, catastrophic healthcare payments, and impoverishing healthcare payments. The first of these does not restrict attention to medical expenses, which limits its relevance to health financing policy. The second rests on assumptions about risk preferences. No measure treats medical expenses that are financed through informal insurance and self-insurance instruments in an entirely satisfactory way. By ignoring these sources of imperfect insurance, the catastrophic payments measure overstates the impact of out-of-pocket medical expenses on living standards, while the impoverishment measure does not credibly identify poverty caused by them. It is better thought of as a correction to the measurement of poverty.
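As a concrete illustration of one of these measurement approaches, the sketch below computes a catastrophic-payment headcount; the 10% threshold and the household data are illustrative assumptions, not figures from the article.

```python
# Catastrophic health payments: share of households whose out-of-pocket
# (OOP) medical spending exceeds a threshold fraction of total
# consumption. Threshold and data are illustrative.

threshold = 0.10  # OOP share of consumption deemed "catastrophic"

households = [
    # (total consumption, out-of-pocket medical spending)
    (20_000, 500),
    (15_000, 2_400),
    (30_000, 1_000),
    (12_000, 3_000),
]

catastrophic = [oop / cons > threshold for cons, oop in households]
headcount = sum(catastrophic) / len(households)
print(f"catastrophic-payment headcount: {headcount:.0%}")  # 50%
```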
One of the most fundamental results in health economics is that higher socio-economic status is associated with better health outcomes. However, the experience of financial pressure and lack of resources transcends the notion of low income and poverty. Families in all income categories can experience financial pressure and lack of resources. This article reviews the literature examining the relationship between financial strain and various health outcomes. There are three main approaches to the measurement of financial strain in the research literature, each capturing a slightly different aspect: the family’s debt position, the availability of emergency funds, and the inability to meet current financial obligations.
There are two main hypotheses explaining how financial strain may affect health. First, financial strain indicates a lower amount of financial resources available to individuals and families. This may have a dual impact on health. On the one hand, lower financial resources may lead to a decrease in consumption of substances such as tobacco that are harmful to health. On the other hand, lower financial resources may also negatively affect healthcare access, healthcare utilization, and adherence to treatment, with each contributing to a decline in health. Second, financial strain may produce greater uncertainty with regard to the availability of financial resources at present as well as in the future, thereby resulting in elevated stress, which may, in turn, result in poorer health outcomes. Examining the relationship between financial strain and health is complicated because it appears to be bidirectional. It is not only the case that financial strain may impact health but that health may impact financial strain.
The research literature consistently finds that financial strain has a detrimental impact on a variety of mental health outcomes. This relationship has been documented for a variety of financial strain indicators, including non-collateralized (unsecured) debt, mortgage debt, and the inability to meet current financial obligations. The research on the association between financial strain and health behavior outcomes is more ambiguous. As one example, there are mixed results concerning whether financial strain results in a higher likelihood of obesity. This research has considered various indicators of financial strain, including credit card debt and the inability to meet current financial obligations. It appears that both among adults and children there is no consistent evidence on the impact of financial strain on body weight. Similarly, the results on the impact of financial strain on alcohol use and substance abuse are mixed.
A number of significant questions regarding the relationship between financial strain and health remain unresolved. The majority of the existing studies focus on health outcomes among adults. There is a lack of understanding regarding how family exposure to financial strain can affect children. Additionally, very little is known about the implications of long-term exposure to financial strain. There are also some very important methodological challenges in this area of research related to establishing causality. Establishing causality and learning more about the implications of the exposure to financial strain could have important policy implications for a variety of safety net programs.
Alexandrina Stoyanova and David Cantarero-Prieto
Long-term care (LTC) systems entitle frail and disabled people, who experience declines in physical and mental capacities, to quality care and support from an appropriately trained workforce and aim to preserve individual health and promote personal well-being for people of all ages. Myriad social factors pose significant challenges to LTC services and systems worldwide. Leading among these factors is the aging population—that is, the growing proportion of older people, the main recipients of LTC, in the population—and the implications not only for the health and social protection sectors, but almost all other segments of society. The number of elderly citizens has increased significantly in recent years in most countries and regions, and the pace of that growth is expected to accelerate in the forthcoming decades. The rapid demographic evolution has been accompanied by substantial social changes that have modified the traditional pattern of delivering LTC. Although families (and friends) still provide most of the help and care to relatives with functional limitations, changes in the population structure, such as weakened family ties, increased participation of women in the labor market, and withdrawal of early retirement policies, have resulted in a decrease in the provision of informal care. Thus, the growing demand for care, together with a lower potential supply of informal care, is likely to put pressure on the provision of formal care services in terms of both quantity and quality. Other related concerns include the sustainable financing of LTC services, which has declined significantly in recent years, and the pursuit of equity.
The current institutional background regarding LTC differs substantially across countries, but they all face similar challenges. Addressing these challenges requires a comprehensive approach that allows for the adoption of the “right” mix of policies between those aiming at informal care and those focusing on the provision and financing of formal LTC services.
Difei Geng and Kamal Saggi
Foreign direct investment (FDI) plays an important role in facilitating the process of international technology diffusion. While FDI among industrialized countries primarily occurs via international mergers and acquisitions (M&As), investment headed to developing countries is more likely to be greenfield in nature; that is, it involves the establishment or expansion of new foreign affiliates by multinational firms. M&As have the potential to yield productivity improvements via changes in management and organization structure of target firms, whereas greenfield FDI leads to transfer of novel technical know-how by initiating the production of new products in host countries as well as by introducing improvements in existing production processes.
Given the prominent role that multinational firms play in global research and development (R&D), there is much interest in whether and how technologies transferred by them to their foreign subsidiaries later diffuse more broadly in host economies, thereby potentially generating broad-based productivity gains. Empirical evidence shows that whereas spillovers from FDI to competing local firms are elusive, such is not the case for spillovers to local suppliers and other agents involved in vertical relationships with multinationals. Multinationals have substantially increased their investments in research facilities in various parts of the world and in R&D collaboration with local firms in developing countries, most notably China and India. Such international collaboration in R&D spearheaded by multinational firms has the potential to accelerate global productivity growth.
Marissa Collins, Neil McHugh, Rachel Baker, Alec Morton, Lucy Frith, Keith Syrett, and Cam Donaldson
Health and social care organizations work within the context of limited resources. Different techniques to aid resource allocation and decision-making exist and are important as scarcity of resources in health and social care is inescapable. Healthcare systems, regardless of how they are organized, must decide what services to provide given the resources available. This is particularly clear in systems funded by taxation, which have limited budgets and other limited resources (staff, skills, facilities, etc.) and in which the claims on these resources outstrip supply.
Healthcare spending in many countries is not expected to increase over the short or medium term. Therefore, frameworks to set priorities are increasingly required. Four disciplines provide perspectives on priority setting: economics, decision analysis, ethics, and law. Although there is overlap amongst these perspectives, they are underpinned by different principles and processes for priority setting. As the values and viewpoints of those involved in priority setting in health and social care will differ, it is important to consider how these could be included to inform a priority setting process. It is proposed that these perspectives and the consideration of values and viewpoints could be brought together in a combined priority setting framework for use within local health and social care organizations.
High-Dimensional Dynamic Factor Models have their origin in macroeconomics, precisely in empirical research on Business Cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (1) both the number n of variables in the dataset and the number T of observations for each variable may be large, and (2) all the variables in the dataset depend dynamically on a fixed number q of “common factors,” independent of n, plus variable-specific, usually called “idiosyncratic,” components. The structure of the model can be exemplified as follows:

x_{it} = b_i(L) u_t + ξ_{it},   i = 1, 2, …, n,   (1)

where the observable variables x_{it} are driven by the white noise u_t, which is common to all the variables (the common factor), and by the idiosyncratic component ξ_{it}. The common factor u_t is orthogonal to the idiosyncratic components ξ_{it}, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Lastly, the variations of the common factor affect the variable x_{it} dynamically, that is, through the lag polynomial b_i(L). Asymptotic results for High-Dimensional Factor Models, particularly consistency of estimators of the common factors, are obtained for both n and T tending to infinity.
Model (1), generalized to allow for more than one common factor and a rich dynamic loading of the factors, has been studied in a fairly vast literature, with many applications based on macroeconomic datasets: (a) forecasting of inflation, industrial production, and unemployment; (b) structural macroeconomic analysis; and (c) construction of indicators of the Business Cycle. This literature can be broadly classified as belonging to the time- or the frequency-domain approach. The works based on the second are the subject of the present chapter.
We start with a brief description of early work on Dynamic Factor Models. Formal definitions and the main Representation Theorem follow. The latter determines the number of common factors in the model by means of the spectral density matrix of the vector x_t = (x_{1t}, x_{2t}, …, x_{nt})′. Dynamic principal components, based on the spectral density of the x_{it}’s, are then used to construct estimators of the common factors.
These results, obtained in the early 2000s, are compared to the literature based on the time-domain approach, in which the covariance matrix of the x_{it}’s and its (static) principal components are used instead of the spectral density and dynamic principal components. Dynamic principal components produce two-sided estimators, which are good within the sample but unfit for forecasting. The estimators based on the time-domain approach are simple and one-sided. However, they require the restriction of finite dimension for the space spanned by the factors.
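To give a flavor of the time-domain (static principal components) estimator just described, here is a minimal sketch on simulated data; the one-factor setup, the static loadings, and all parameters are illustrative assumptions rather than the procedures of the papers surveyed.

```python
# Minimal sketch: estimate a common factor by the first (static)
# principal component of the covariance matrix of the x's, using
# simulated one-factor data. Setup and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 200                         # variables, observations

u = rng.standard_normal(T)              # common white-noise factor u_t
b = rng.uniform(0.5, 1.5, size=n)       # loadings b_i (static here)
xi = rng.standard_normal((T, n))        # idiosyncratic components
X = u[:, None] * b[None, :] + xi        # x_it = b_i * u_t + xi_it

# First principal component of the (sample) covariance matrix:
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc / T)
v1 = eigvecs[:, -1]                     # eigenvector of largest eigenvalue
f_hat = Xc @ v1                         # estimated factor (up to sign/scale)

corr = np.corrcoef(f_hat, u)[0, 1]
print(f"|corr(estimated factor, true factor)| = {abs(corr):.2f}")
# approaches 1 as n and T tend to infinity (consistency)
```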
Recent papers have constructed one-sided estimators based on the frequency-domain method for the unrestricted model. These results exploit results on stochastic processes of dimension n that are driven by a q-dimensional white noise, with q &lt; n, that is, singular vector stochastic processes. The main features of this literature are described in some detail.
Lastly, we report and comment on the results of an empirical paper, the last in a long list, comparing predictions obtained with time- and frequency-domain methods. The paper uses a large monthly U.S. dataset covering the Great Moderation and the Great Recession.
Mónica Hernández Alava
The assessment of health-related quality of life is crucially important in the evaluation of healthcare technologies and services. In many countries, economic evaluation plays a prominent role in informing decision making often requiring preference-based measures (PBMs) to assess quality of life. These measures comprise two aspects: a descriptive system where patients can indicate the impact of ill health, and a value set based on the preferences of individuals for each of the health states that can be described. These values are required for the calculation of quality adjusted life years (QALYs), the measure for health benefit used in the vast majority of economic evaluations. The National Institute for Health and Care Excellence (NICE) has used cost per QALY as its preferred framework for economic evaluation of healthcare technologies since its inception in 1999.
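In its simplest discrete form, the QALY calculation weights the time spent in each health state by that state's utility value; the generic formula below is the standard textbook form, not specific to any one instrument.

```latex
% QALYs: sum over health states (or time periods) s of the utility
% value u_s of the state times the time t_s, in years, spent in it.
\text{QALYs} \;=\; \sum_{s} u_s \, t_s ,
\qquad u_s \le 1, \quad u = 1 \text{ denotes full health}, \quad u = 0 \text{ denotes dead}.
```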
However, there is often an evidence gap between the clinical measures that are available from clinical studies on the effect of a specific health technology and the PBMs needed to construct QALY measures. Instruments such as the EQ-5D have preference-based scoring systems and are favored by organizations such as NICE but are frequently absent from clinical studies of treatment effect. Even where a PBM is included, this may still be insufficient for the needs of the economic evaluation. Trials may have insufficient follow-up, be underpowered to detect relevant events, or include the wrong PBM for the decision-making body.
Often this gap is bridged by “mapping”—estimating a relationship between observed clinical outcomes and PBMs, using data from a reference dataset containing both types of information. The estimated statistical model can then be used to predict what the PBM would have been in the clinical study given the available information.
There are two approaches to mapping linked to the structure of a PBM. The indirect approach (or response mapping) models the responses to the descriptive system using discrete data models. The expected health utility is calculated as a subsequent step using the estimated probability distribution of health states. The second approach (the direct approach) models the health state utility values directly.
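As a stylized illustration of the direct approach, one might regress observed utility values on a clinical score and use the fitted model for prediction; the sketch below uses simulated data and ordinary least squares purely as an assumed example (real mapping studies use models tailored to the idiosyncrasies of utility data, as discussed next).

```python
# Stylized "direct" mapping: fit utility ~ clinical score on a reference
# dataset containing both measures, then predict utilities in a study
# that collected only the clinical measure. Data and model form are
# illustrative; utility data are bounded and skewed, so OLS is only a
# simplistic stand-in here.
import numpy as np

rng = np.random.default_rng(1)

# Reference dataset with both measures:
clinical = rng.uniform(0, 50, size=500)          # e.g., a symptom score
utility = np.clip(0.95 - 0.012 * clinical + rng.normal(0, 0.08, 500),
                  -0.2, 1.0)

# Fit utility = a + b * clinical by least squares:
A = np.column_stack([np.ones_like(clinical), clinical])
coef, *_ = np.linalg.lstsq(A, utility, rcond=None)

# Predict utilities for a trial that measured only the clinical score:
trial_scores = np.array([5.0, 20.0, 40.0])
predicted = coef[0] + coef[1] * trial_scores
print(predicted)
```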
Statistical models routinely used in the past for mapping are unable to accommodate the idiosyncrasies of health utility data. Often they do not work well in practice and can give seriously biased estimates of the value of treatments. Although the bias could, in principle, go in any direction, in practice it tends to result in underestimation of cost effectiveness and consequently distorted funding decisions. This has real effects on patients, clinicians, industry, and the general public.
These problems have led some analysts to mistakenly conclude that mapping always induces biases and should be avoided. However, the development and use of more appropriate models has refuted this claim. The need to improve the quality of mapping studies led to the formation of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Mapping to Estimate Health State Utility values from Non-Preference-Based Outcome Measures Task Force to develop good practice guidance in mapping.
Marcus Berliant and Ping Wang
General equilibrium theories of spatial agglomeration are closed models of agent location that explain the formation and growth of cities. There are several types of such theories: conventional Arrow-Debreu competitive equilibrium models and monopolistic competition models, as well as game theoretic models including search and matching setups. Three types of spatial agglomeration forces often come into play: trade, production, and knowledge transmission, under which cities are formed in equilibrium as marketplaces, factory towns, and idea laboratories, respectively. Agglomeration dynamics are linked to urban growth in the long run.
Albert N. Link and John T. Scott
Science parks, also called research parks, technology parks, or technopolis infrastructures, have increased rapidly in number as many countries have adopted the approach of bringing research-based organizations together in a park. A science park’s cluster of research and technology-based organizations is often located on or near a university campus. The juxtaposition of the ongoing research of the university and of the park tenants creates a two-way flow of knowledge; knowledge is transferred between the university and firms, and all parties develop knowledge more effectively because of their symbiotic relationship.
Theory and evidence support the belief that the geographic proximity provided to the participating organizations by a science park creates a dynamic cluster that accelerates economic growth and international competitiveness through the innovation-enabling exchanges of knowledge and the transfer of technologies. The process of creating innovations is more efficient because of the agglomeration of research and technology-based firms on or near a university campus. The proximity of a park to multiple sources of knowledge provides greater opportunities for the creation and acquisition of knowledge, especially tacit knowledge, and the geographic proximity therefore reduces the search and acquisition costs for that knowledge.
The clustering of multiple research and technology-based organizations within a park enables knowledge spillovers, and with greater productivity from research resources and lower costs, prices for new technologies can be lower, stimulating their use and regional development and growth. In addition to the clustering of the organizations within a park, the geographic proximity of universities affiliated with a park matters too. Evidence shows that a park’s employment growth is greater, other things being the same, when its affiliated university is geographically closer, although evidence suggests that effect has lessened in the 21st century because of the information and communications technology revolution. Further stimulating regional growth, university spin-off companies are more prevalent in a park when it is geographically closer to the affiliated university. The two-way flow of knowledge enabled by clusters of research and technology-based firms in science parks benefits firms located on the park and the affiliated universities.
Understanding the mechanisms by which geographic proximity in a science park increases the innovative performance of research and technology-based organizations is important for formulating public and private sector policies toward park formation, because successful national innovation systems require the two-way knowledge flow, among firms in a park and between firms and universities, that is fostered by the science park infrastructure.
The geography of economic activity refers to the distribution of population, production, and consumption of goods and services in geographic space. The geography of growth and development refers to the local growth and decline of economic activity and the overall distribution of these local changes within and across countries. The pattern of growth in space can vary substantially across regions, countries, and industries. Ultimately, these patterns can help explain the role that spatial frictions (like transport and migration costs) can play in the overall development of the world economy.
The interaction of agglomeration and congestion forces determines the density of economic activity in particular locations. Agglomeration forces refer to forces that bring together agents and firms by conveying benefits from locating close to each other, or from locating in a particular area. Examples include local technology and institutions, natural resources and local amenities, infrastructure, as well as knowledge spillovers. Congestion forces refer to the disadvantages of locating close to each other. They include traffic, high land prices, as well as crime and other urban dis-amenities. The balance of these forces is mediated by the ability of individuals, firms, goods and services, as well as ideas and technology, to move across space: namely, migration, relocation, transport, commuting, and communication costs. These spatial frictions, together with the varying strength of congestion and agglomeration forces, determine the distribution of economic activity. Changes in these forces and frictions—some purposefully made by agents given the economic environment they face and some exogenous—determine the geography of growth and development.
The main developments in the forces that influence the geography of growth and development have been changes in transport technology, the diffusion of general-purpose technologies, and the structural transformation of economies from agriculture, to manufacturing, to service-oriented economies. There are many challenges in modeling and quantifying these forces and their effects. Nevertheless, doing so is essential to evaluate the impact of a variety of phenomena, from climate change to the effects of globalization and advances in information technology.
Pao-Li Chang and Wen-Tai Hsu
This article reviews interrelated power-law phenomena in geography and trade. Given the empirical evidence on the gravity equation in trade flows across countries and regions, its theoretical underpinnings are reviewed. The gravity equation amounts to saying that trade flows follow a power law in distance (or geographic barriers). It is concluded that in an environment with firm heterogeneity, the power law in firm size is the key condition for the gravity equation to arise. A distribution is said to follow a power law if its tail probability follows a power function in the distribution’s right tail. The second part of this article reviews the literature that provides the microfoundation for the power law in firm size and reviews how this power law (in firm size) may be related to the power laws in other distributions (in incomes, firm productivity, and city size).
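In generic notation (assumed here for illustration, not taken from the article), the two objects discussed can be written as follows.

```latex
% Gravity equation: the trade flow X_{ij} from origin i to destination j
% increases with the economic sizes Y_i and Y_j and follows a power law
% in the distance (or geographic barrier) D_{ij}:
X_{ij} \;\propto\; \frac{Y_i \, Y_j}{D_{ij}^{\theta}}, \qquad \theta > 0 .

% Power law in firm size: the tail probability of the size distribution
% decays as a power function in the right tail (empirically, the
% firm-size exponent is close to 1, i.e., Zipf's law):
\Pr(S > s) \;\propto\; s^{-\alpha} \quad \text{for large } s .
```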