
Article

Financial Bubbles in History  

William Quinn and John Turner

Financial bubbles constitute some of history’s most significant economic events, but academic research into the phenomenon has often been narrow, with an excessive focus on whether bubble episodes invalidate or confirm the efficient markets hypothesis. The literature on the topic has also been somewhat siloed, with theoretical, experimental, qualitative, and quantitative methods used to develop relatively discrete bodies of research. In order to overcome these deficiencies, future research needs to move beyond the rational/irrational dichotomy and holistically examine the causes and consequences of bubbles. Future research in financial bubbles should thus use a wider range of investigative tools to answer key questions or attempt to synthesize the findings of multiple research programs. There are three areas in particular that future research should focus on: the role of information in a bubble, the aftermath of bubbles, and possible regulatory responses. While bubbles are sometimes seen as an inevitable part of capitalism, there have been long historical eras in which they were extremely rare, and these eras are likely to contain lessons for alleviating the negative effects of bubbles in the 21st century. Finally, the literature on bubbles has tended to neglect certain regions, and future research should hunt for undiscovered episodes outside of Europe and North America.

Article

Financial Economics of United States Slavery  

Rajesh P. Narayanan and Jonathan Pritchett

Financial economics reveals that slaves were profitable investments and that the rate of return from owning slaves was at least as high as the return on comparable investments. The profitability of slavery depended on both the productivity and the market valuation of slaves. Owners increased the productivity of slaves by developing better strains of cotton, employing more efficient systems of production (gang labor), and using force and coercion (whippings). Efficient markets facilitated the interregional transfer of labor, and selective sales devastated slave families. Market studies show that slave prices reflected the capitalized value of labor and that they varied based on labor productivity. The profitability of slaves and the availability of efficient markets made slaves attractive investment vehicles for storing wealth. Their attractiveness as investments, however, may have had some other costs. Several studies argue and provide evidence that investment in slaves supplanted investment in other forms of physical and human capital, much to the detriment of southern industrialization and development. Besides serving as investment vehicles, slaves also facilitated financing. A growing body of work provides evidence that slaves were pledged as collateral to obtain credit.

Article

Financial Frictions and International Trade: A Review  

David Kohn, Fernando Leibovici, and Michal Szkup

This article reviews recent studies on the impact of financial frictions on international trade. We first present evidence on the relation between measures of access to external finance and export decisions. We then present an analytical framework to analyze the impact of financial frictions on firms’ export decisions. Finally, we review recent applications of this framework to investigate the impact of financial frictions on international trade dynamics across firms, across industries, and in the aggregate. We discuss related empirical, theoretical, and quantitative studies throughout.

Article

Financial Frictions in Macroeconomic Models  

Alfred Duncan and Charles Nolan

In recent decades, macroeconomic researchers have looked to incorporate financial intermediaries explicitly into business-cycle models. These modeling developments have helped us to understand the role of the financial sector in the transmission of policy and external shocks into macroeconomic dynamics. They also have helped us to understand better the consequences of financial instability for the macroeconomy. Large gaps remain in our knowledge of the interactions between the financial sector and macroeconomic outcomes. Specifically, the effects of financial stability and macroprudential policies are not well understood.

Article

Financial History of Sub-Saharan Africa  

Leigh Gardner

African financial history is often neglected in research on the history of global financial systems, and in its turn research on African financial systems in the past often fails to explore links with the rest of the world. However, African economies and financial systems have been linked to the rest of the world since ancient times. Sub-Saharan Africa was a key supplier of gold used to underpin the monetary systems of Europe and the North from the medieval period through the 19th century. It was West African gold rather than slaves that first brought Europeans to the Atlantic coast of Africa during the early modern period. Within sub-Saharan Africa, currency and credit systems reflected both internal economic and political structures as well as international links. Before the colonial period, indigenous currencies were often tied to particular trades or trade routes. These systems did not immediately cease to exist with the introduction of territorial currencies by colonial governments. Rather, both systems coexisted, often leading to shocks and localized crises during periods of global financial uncertainty. At independence, African governments had to contend with a legacy of financial underdevelopment left from the colonial period. Their efforts to address this have, however, been shaped by global economic trends. Despite recent expansion and innovation, limited financial development remains a hindrance to economic growth.

Article

Financial Inclusion and Human Development  

Maria Soledad Martinez Peria and Mu Yang Shin

The link between financial inclusion and human development is examined here. Using cross-country data, the behavior of variables that try to capture these concepts is examined and preliminary evidence of a positive association is offered. However, because establishing a causal relationship with macro-data is difficult, a thorough review of the literature on the impact of financial inclusion, focusing on micro-studies that can better address identification, is conducted. The literature generally distinguishes between different dimensions of financial inclusion: access to credit, access to bank branches, and access to saving instruments (i.e., accounts). Despite promising results from a first wave of studies, the impact of expanding access to credit seems limited at best, with little evidence of transformative effects on human development outcomes. While there is more promising evidence on the impact of expanding access to bank branches and formal saving instruments, studies show that some interventions, such as one-time account opening subsidies, are unlikely to have a sizable impact on social and economic outcomes. Instead, well-designed interventions catering to individuals’ specific needs in different contexts seem to be required to realize the full potential of formal financial services to enrich human lives.

Article

Financial Protection Against Medical Expense  

Owen O'Donnell

Financial protection is claimed to be an important objective of health policy. Yet there is a lack of clarity about what it is and no consensus on how to measure it. This impedes the design of efficient and equitable health financing. Arguably, the objective of financial protection is to shield nonmedical consumption from the cost of healthcare. The instruments are formal health insurance and public finances, as well as informal and self-insurance mechanisms that do not impair earnings potential. There are four main approaches to the measurement of financial protection: the extent of consumption smoothing over health shocks, the risk premium (willingness to pay in excess of a fair premium) to cover uninsured medical expenses, catastrophic healthcare payments, and impoverishing healthcare payments. The first of these does not restrict attention to medical expenses, which limits its relevance to health financing policy. The second rests on assumptions about risk preferences. No measure treats medical expenses that are financed through informal insurance and self-insurance instruments in an entirely satisfactory way. By ignoring these sources of imperfect insurance, the catastrophic payments measure overstates the impact of out-of-pocket medical expenses on living standards, while the impoverishment measure does not credibly identify poverty caused by them. It is better thought of as a correction to the measurement of poverty.
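As a minimal numerical sketch of how the last two of these measures are typically computed from household survey data, the following Python snippet uses made-up consumption and out-of-pocket spending figures; the 10% threshold and the poverty line are illustrative assumptions, not values endorsed by the article.

```python
import numpy as np

# Hypothetical per-capita household consumption and out-of-pocket (OOP) medical spending,
# in the same currency units.
consumption = np.array([1200.0, 800.0, 450.0, 300.0, 2500.0])
oop = np.array([60.0, 200.0, 90.0, 30.0, 100.0])

# Catastrophic payments: OOP exceeding a threshold share of total consumption.
threshold = 0.10  # illustrative 10% threshold; 25% and capacity-to-pay variants are also used
catastrophic_rate = np.mean(oop / consumption > threshold)

# Impoverishing payments: households pushed below a poverty line once OOP is netted out.
poverty_line = 400.0  # illustrative poverty line
poor_before = consumption < poverty_line
poor_after = (consumption - oop) < poverty_line
impoverishment_rate = np.mean(poor_after & ~poor_before)

print(f"Catastrophic payment headcount: {catastrophic_rate:.0%}")
print(f"Impoverishment headcount:       {impoverishment_rate:.0%}")
```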

Article

Financial Strain and Health  

Irina Grafova

One of the most fundamental results in health economics is that higher socioeconomic status is associated with better health outcomes. However, the experience of financial pressure and lack of resources transcends the notion of low income and poverty. Families of all income categories can experience financial pressure and lack of resources. This article reviews the literature examining the relationship between financial strain and various health outcomes. There are three main approaches to the measurement of financial strain found in the research literature, each one capturing a slightly different aspect: the family’s debt position, the availability of emergency funds, and inability to meet current financial obligations. There are two main hypotheses explaining how financial strain may affect health. First, financial strain indicates a lower amount of financial resources available to individuals and families. This may have a dual impact on health. On the one hand, lower financial resources may lead to a decrease in consumption of substances such as tobacco that are harmful to health. On the other hand, lower financial resources may also negatively affect healthcare access, healthcare utilization, and adherence to treatment, with each contributing to a decline in health. Second, financial strain may produce greater uncertainty with regard to the availability of financial resources at present as well as in the future, thereby resulting in elevated stress, which may, in turn, result in poorer health outcomes. Examining the relationship between financial strain and health is complicated because it appears to be bidirectional. It is not only the case that financial strain may impact health but also that health may impact financial strain. The research literature consistently finds that financial strain has a detrimental impact on a variety of mental health outcomes. This relationship has been documented for a variety of financial strain indicators, including non-collateralized (unsecured) debt, mortgage debt, and the inability to meet current financial obligations. The research on the association between financial strain and health behavior outcomes is more ambiguous. As one example, there are mixed results concerning whether financial strain results in a higher likelihood of obesity. This research has considered various indicators of financial strain, including credit card debt and the inability to meet current financial obligations. Among both adults and children, there appears to be no consistent evidence on the impact of financial strain on body weight. Similarly, the results on the impact of financial strain on alcohol use and substance abuse are mixed. A number of significant questions regarding the relationship between financial strain and health remain unresolved. The majority of the existing studies focus on health outcomes among adults. There is a lack of understanding regarding how family exposure to financial strain can affect children. Additionally, very little is known about the implications of long-term exposure to financial strain. There are also some very important methodological challenges in this area of research related to establishing causality. Establishing causality and learning more about the implications of the exposure to financial strain could have important policy implications for a variety of safety net programs.

Article

Financing and Policy for Long-Term Care  

Alexandrina Stoyanova and David Cantarero-Prieto

Long-term care (LTC) systems entitle frail and disabled people, who experience declines in physical and mental capacities, to quality care and support from an appropriately trained workforce and aim to preserve individual health and promote personal well-being for people of all ages. Myriad social factors pose significant challenges to LTC services and systems worldwide. Leading among these factors is the aging population—that is, the growing proportion of older people, the main recipients of LTC, in the population—and the implications not only for the health and social protection sectors, but almost all other segments of society. The number of elderly citizens has increased significantly in recent years in most countries and regions, and the pace of that growth is expected to accelerate in the forthcoming decades. The rapid demographic evolution has been accompanied by substantial social changes that have modified the traditional pattern of delivering LTC. Although families (and friends) still provide most of the help and care to relatives with functional limitations, changes in the population structure, such as weakened family ties, increased participation of women in the labor market, and withdrawal of early retirement policies, have resulted in a decrease in the provision of informal care. Thus, the growing demands for care, together with a lower potential supply of informal care, are likely to put pressure on the provision of formal care services in terms of both quantity and quality. Other related concerns include the sustainable financing of LTC services, which has declined significantly in recent years, and the pursuit of equity. The current institutional background regarding LTC differs substantially across countries, but they all face similar challenges. Addressing these challenges requires a comprehensive approach that allows for the adoption of the “right” mix of policies between those aiming at informal care and those focusing on the provision and financing of formal LTC services.

Article

Financing Higher Education  

Bruce Chapman and Lorraine Dearden

The rapid worldwide growth in higher education undergraduate enrollments since around 1990 has meant that governments have had to rethink provision and funding arrangements to help ensure both cost-effective and equitable outcomes. It is important to understand in detail the fundamental financial conceptual building blocks that are necessary for an efficacious and socially just higher education financing system. In response to the critical question of who should pay for higher education and student income support, the case for the sharing of the costs between students, graduates, and taxpayers is overwhelming from the perspectives of both efficiency and equity. Further, there is a consensus that governments should intervene with respect to the underwriting of student loans, but there are very important and quite different implications for borrowers with respect to loan collection arrangements. The most equitable and effective higher education financing instrument involves loans that are repaid only when and if debtors can afford to do so, known as income-contingent loans. The less desirable form of student loans, defined by time-based collection, is internationally still the most common approach, but recent advances in economic theory and econometric methodology provide both conceptual bases and exciting and innovative ways for governments to understand why traditional student loan approaches are inferior to income-contingent collection. When the effects of student loans on access and welfare become more properly understood, the case for targeted assistance for all disadvantaged prospective students for reasons of social justice remains compelling. The importance of the attainment of the right financing system was highlighted by the economic trauma associated with the COVID-19 pandemic, an ordeal that caused many universities to experience an entirely unexpected financial crisis and led millions of students to struggle with unanticipated loan repayment difficulties.

Article

Fiscal and Monetary Policy in Open Economy  

Andrea Ferrero

The development of a simple framework with optimizing agents and nominal rigidities is the point of departure for the analysis of three questions about fiscal and monetary policies in an open economy. The first question concerns the optimal monetary policy targets in a world with trade and financial links. In the baseline model, the optimal cooperative monetary policy is fully inward-looking and seeks to stabilize a combination of domestic inflation and output gap. The equivalence with the closed economy case, however, ends if countries do not cooperate, if firms price goods in the currency of the market of destination, and if international financial markets are incomplete. In these cases, external variables that capture international misalignments relative to the first best become relevant policy targets. The second question is about the empirical evidence on the international transmission of government spending shocks. In response to a positive innovation, the real exchange rate depreciates and the trade balance deteriorates. Standard open economy models struggle to match this evidence. Non-standard consumption preferences and a detailed fiscal adjustment process constitute two ways to address the puzzle. The third question deals with the trade-offs associated with an active use of fiscal policy for stabilization purposes in a currency union. The optimal policy assignment mandates the monetary authority to stabilize union-wide aggregates and the national fiscal authorities to respond to country-specific shocks. Permanent changes in government debt make it possible to smooth the distortionary effects of volatile taxes. Clear and credible fiscal rules may be able to strike the appropriate balance between stabilization objectives and moral hazard issues.

Article

Forecasting Electricity Prices  

Katarzyna Maciejowska, Bartosz Uniejewski, and Rafal Weron

Forecasting electricity prices is a challenging task and an active area of research since the 1990s and the deregulation of the traditionally monopolistic and government-controlled power sectors. It is interdisciplinary by nature and requires expertise in econometrics, statistics or machine learning for developing well-performing predictive models, finance for understanding market mechanics, and electrical engineering for comprehension of the fundamentals driving electricity prices. Although electricity price forecasting aims at predicting both spot and forward prices, the vast majority of research is focused on short-term horizons, which exhibit dynamics unlike those in any other market. The reason is that power system stability calls for a constant balance between production and consumption, while being dependent on weather (in terms of demand and supply) and business activity (in terms of demand only). The recent market innovations do not help in this respect. The rapid expansion of intermittent renewable energy sources is not offset by the costly increase of electricity storage capacities and modernization of the grid infrastructure. On the methodological side, this leads to three visible trends in electricity price forecasting research. First, there is a slow but increasingly noticeable tendency to consider not only point but also probabilistic (interval, density) or even path (also called ensemble) forecasts. Second, there is a clear shift from the relatively parsimonious econometric (or statistical) models toward more complex and harder to comprehend but more versatile and eventually more accurate statistical and machine learning approaches. Third, statistical error measures are regarded as only the first evaluation step. Since they may not necessarily reflect the economic value of reducing prediction errors, in recent publications they tend to be complemented by case studies comparing profits from scheduling or trading strategies based on price forecasts obtained from different models.
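As a purely illustrative sketch on synthetic data (not any model from the forecasting literature), the snippet below fits a deliberately minimal autoregressive day-ahead model for a single delivery hour and turns past one-step errors into a crude empirical interval, showing the distinction between point and probabilistic forecasts drawn above.

```python
import numpy as np

# Synthetic daily price for one delivery hour: weekly pattern plus noise.
rng = np.random.default_rng(0)
days = 400
price = 50 + 10 * np.sin(2 * np.pi * np.arange(days) / 7) + rng.normal(0, 5, days)

# Regressors: yesterday's price and the price one week ago (a deliberately minimal specification).
y = price[7:]
X = np.column_stack([np.ones(days - 7), price[6:-1], price[:-7]])

# Rolling one-step-ahead evaluation over the last 100 days to collect forecast errors.
errors, train_window = [], 250
for t in range(days - 7 - 100, days - 7):
    beta, *_ = np.linalg.lstsq(X[t - train_window:t], y[t - train_window:t], rcond=None)
    errors.append(y[t] - X[t] @ beta)

# Point forecast for the next day plus an empirical 90% interval (a crude probabilistic forecast).
beta, *_ = np.linalg.lstsq(X[-train_window:], y[-train_window:], rcond=None)
x_next = np.array([1.0, price[-1], price[-7]])
point = x_next @ beta
lo, hi = point + np.quantile(errors, [0.05, 0.95])
print(f"Point forecast: {point:.2f}, 90% interval: [{lo:.2f}, {hi:.2f}]")
```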

Article

Foreign Direct Investment and International Technology Diffusion  

Difei Geng and Kamal Saggi

Foreign direct investment (FDI) plays an important role in facilitating the process of international technology diffusion. While FDI among industrialized countries primarily occurs via international mergers and acquisitions (M&As), investment headed to developing countries is more likely to be greenfield in nature; that is, it involves the establishment or expansion of new foreign affiliates by multinational firms. M&As have the potential to yield productivity improvements via changes in management and organization structure of target firms, whereas greenfield FDI leads to transfer of novel technical know-how by initiating the production of new products in host countries as well as by introducing improvements in existing production processes. Given the prominent role that multinational firms play in global research and development (R&D), there is much interest in whether and how technologies transferred by them to their foreign subsidiaries later diffuse more broadly in host economies, thereby potentially generating broad-based productivity gains. Empirical evidence shows that whereas spillovers from FDI to competing local firms are elusive, such is not the case for spillovers to local suppliers and other agents involved in vertical relationships with multinationals. Multinationals have substantially increased their investments in research facilities in various parts of the world and in R&D collaboration with local firms in developing countries, most notably China and India. Such international collaboration in R&D spearheaded by multinational firms has the potential to accelerate global productivity growth.

Article

Foreign Exchange Intervention  

Helen Popper

The practice of central bank foreign exchange intervention for a time ran ahead of either compelling theoretical explanations of its use or persuasive empirical evidence of its effectiveness. Research accelerated when the emerging economy crises of the 1990s and the early 2000s brought fresh data in the form of urgent experimentation with foreign exchange intervention and related policies, and the financial crisis of 2008 propelled serious treatment of financial frictions into models of intervention. Current foreign exchange intervention models combine financial frictions with relevant externalities: with the aggregate demand and pecuniary externalities that inform macroeconomic models more broadly, and with the trade-related learning externalities that are particularly relevant for developing and emerging economies. These models characteristically allow for normative evaluation of the use of foreign exchange intervention, although most (but not all) do so from a single economy perspective. Empirical advances reflect the advantages of more variation in the use of foreign exchange intervention, better data, and novel econometric approaches to addressing endogeneity. Foreign exchange intervention is now widely viewed as influencing exchange rates at least to some extent, and sustained one-sided intervention and its corresponding reserve accumulation appear to play a role in moderating exchange rate fluctuations and in reducing the likelihood of damaging consequences of financial crises. Key avenues for future research include sorting out which frictions and externalities matter most, and where foreign exchange intervention—and perhaps international cooperation—properly fits (if at all) into the blend of policies that might appropriately address the externalities.

Article

Fractional Integration and Cointegration  

Javier Hualde and Morten Ørregaard Nielsen

Fractionally integrated and fractionally cointegrated time series are classes of models that generalize standard notions of integrated and cointegrated time series. The fractional models are characterized by a small number of memory parameters that control the degree of fractional integration and/or cointegration. In classical work, the memory parameters are assumed known and equal to 0, 1, or 2. In the fractional integration and fractional cointegration context, however, these parameters are real-valued and are typically assumed unknown and estimated. Thus, fractionally integrated and fractionally cointegrated time series can display very general types of stationary and nonstationary behavior, including long memory, and this more general framework entails important additional challenges compared to the traditional setting. Modeling, estimation, and testing in the context of fractional integration and fractional cointegration have been developed in time and frequency domains. Under both approaches, theory has been derived under parametric or semiparametric assumptions, and as expected, the obtained results illustrate the well-known trade-off between efficiency and robustness against misspecification. These different developments form a large and mature literature with applications in a wide variety of disciplines.
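A small sketch of the mechanics of fractional integration follows: the fractional difference operator (1 - L)^d is applied through its binomial expansion. The code is a generic illustration using a truncated filter on simulated data, not an estimator from this literature.

```python
import numpy as np

# Simulate a fractionally integrated (long-memory) series and recover the short-memory
# innovations by fractional differencing, via the binomial expansion of (1 - L)^d.
def frac_diff_weights(d, n):
    """Coefficients of (1 - L)^d truncated at lag n - 1: pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply the truncated fractional difference filter (1 - L)^d to a series x."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[: t + 1] @ x[t::-1] for t in range(len(x))])

rng = np.random.default_rng(1)
d = 0.4                                   # stationary long memory for 0 < d < 0.5
eps = rng.normal(size=2000)               # short-memory (white noise) innovations
x = frac_diff(eps, -d)                    # integrate: apply (1 - L)^(-d) to white noise
resid = frac_diff(x, d)                   # fractional differencing recovers the innovations

print(np.allclose(resid, eps))            # True up to floating-point error
```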

Article

Frameworks for Priority Setting in Health and Social Care  

Marissa Collins, Neil McHugh, Rachel Baker, Alec Morton, Lucy Frith, Keith Syrett, and Cam Donaldson

Health and social care organizations work within the context of limited resources. Different techniques to aid resource allocation and decision-making exist and are important as scarcity of resources in health and social care is inescapable. Healthcare systems, regardless of how they are organized, must decide what services to provide given the resources available. This is particularly clear in systems funded by taxation, which have limited budgets and other limited resources (staff, skills, facilities, etc.) and in which the claims on these resources outstrip supply. Healthcare spending in many countries is not expected to increase over the short or medium term. Therefore, frameworks to set priorities are increasingly required. Four disciplines provide perspectives on priority setting: economics, decision analysis, ethics, and law. Although there is overlap amongst these perspectives, they are underpinned by different principles and processes for priority setting. As the values and viewpoints of those involved in priority setting in health and social care will differ, it is important to consider how these could be included to inform a priority setting process. It is proposed that these perspectives and the consideration of values and viewpoints could be brought together in a combined priority setting framework for use within local health and social care organizations.

Article

Frequency-Domain Approach in High-Dimensional Dynamic Factor Models  

Marco Lippi

High-Dimensional Dynamic Factor Models have their origin in macroeconomics, specifically in empirical research on Business Cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (1) both $n$, the number of variables in the dataset, and $T$, the number of observations for each variable, may be large, and (2) all the variables in the dataset depend dynamically on a fixed number, independent of $n$, of “common factors,” plus variable-specific, usually called “idiosyncratic,” components. The structure of the model can be exemplified as follows:

$$x_{it} = \alpha_i u_t + \beta_i u_{t-1} + \xi_{it}, \qquad i = 1, \ldots, n, \quad t = 1, \ldots, T, \qquad (*)$$

where the observable variables $x_{it}$ are driven by the white noise $u_t$, which is common to all the variables (the common factor), and by the idiosyncratic component $\xi_{it}$. The common factor $u_t$ is orthogonal to the idiosyncratic components $\xi_{it}$, and the idiosyncratic components are mutually orthogonal (or weakly correlated). Lastly, the variations of the common factor $u_t$ affect the variable $x_{it}$ dynamically, that is, through the lag polynomial $\alpha_i + \beta_i L$. Asymptotic results for High-Dimensional Factor Models, particularly consistency of estimators of the common factors, are obtained as both $n$ and $T$ tend to infinity. Model $(*)$, generalized to allow for more than one common factor and a rich dynamic loading of the factors, has been studied in a fairly vast literature, with many applications based on macroeconomic datasets: (a) forecasting of inflation, industrial production, and unemployment; (b) structural macroeconomic analysis; and (c) construction of indicators of the Business Cycle. This literature can be broadly classified as belonging to the time-domain or the frequency-domain approach. The works based on the frequency-domain approach are the subject of the present chapter. We start with a brief description of early work on Dynamic Factor Models. Formal definitions and the main Representation Theorem follow. The latter determines the number of common factors in the model by means of the spectral density matrix of the vector $(x_{1t}\; x_{2t}\; \cdots\; x_{nt})$. Dynamic principal components, based on the spectral density of the $x$’s, are then used to construct estimators of the common factors. These results, obtained in the early 2000s, are compared to the literature based on the time-domain approach, in which the covariance matrix of the $x$’s and its (static) principal components are used instead of the spectral density and dynamic principal components. Dynamic principal components produce two-sided estimators, which are good within the sample but unfit for forecasting. The estimators based on the time-domain approach are simple and one-sided. However, they require the restriction of finite dimension for the space spanned by the factors. Recent papers have constructed one-sided estimators based on the frequency-domain method for the unrestricted model. These results exploit results on stochastic processes of dimension $n$ that are driven by a $q$-dimensional white noise, with $q < n$, that is, singular vector stochastic processes. The main features of this literature are described in some detail.
Lastly, we report and comment on the results of an empirical paper, the last in a long list, comparing predictions obtained with time-domain and frequency-domain methods. The paper uses a large monthly U.S. dataset including the Great Moderation and the Great Recession.
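To make the structure of model (*) concrete, the sketch below simulates it with synthetic data and recovers the common components using static principal components, the time-domain benchmark mentioned above; the frequency-domain approach would instead work with dynamic principal components of the estimated spectral density. This is an illustrative toy, not one of the estimators studied in the chapter.

```python
import numpy as np

# Simulate the one-factor model (*): x_it = alpha_i * u_t + beta_i * u_{t-1} + xi_it.
# With loadings at lags 0 and 1, the single dynamic factor u_t spans r = 2 static factors,
# so two static principal components are extracted below.
rng = np.random.default_rng(2)
n, T = 100, 500
u = rng.normal(size=T + 1)                        # common white-noise factor
alpha, beta = rng.normal(size=(2, n))             # loadings on u_t and u_{t-1}
chi = alpha * u[1:, None] + beta * u[:-1, None]   # common components, shape (T, n)
xi = rng.normal(size=(T, n))                      # idiosyncratic components
x = chi + xi

# Static principal components: eigenvectors of the sample covariance matrix of x.
xc = x - x.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(xc, rowvar=False))
V = eigvec[:, -2:]                                # top r = 2 eigenvectors
factors = xc @ V                                  # estimated static factors, shape (T, 2)
chi_hat = factors @ V.T                           # fitted common components

# Average correlation between true and fitted common components across the n series.
corrs = [np.corrcoef(chi[:, i], chi_hat[:, i])[0, 1] for i in range(n)]
print(f"Mean correlation, true vs. fitted common component: {np.mean(corrs):.2f}")
```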

Article

From Clinical Outcomes to Health Utilities: The Role of Mapping to Bridge the Evidence Gap  

Mónica Hernández Alava

The assessment of health-related quality of life is crucially important in the evaluation of healthcare technologies and services. In many countries, economic evaluation plays a prominent role in informing decision making, often requiring preference-based measures (PBMs) to assess quality of life. These measures comprise two aspects: a descriptive system where patients can indicate the impact of ill health, and a value set based on the preferences of individuals for each of the health states that can be described. These values are required for the calculation of quality adjusted life years (QALYs), the measure for health benefit used in the vast majority of economic evaluations. The National Institute for Health and Care Excellence (NICE) has used cost per QALY as its preferred framework for economic evaluation of healthcare technologies since its inception in 1999. However, there is often an evidence gap between the clinical measures that are available from clinical studies on the effect of a specific health technology and the PBMs needed to construct QALY measures. Instruments such as the EQ-5D have preference-based scoring systems and are favored by organizations such as NICE but are frequently absent from clinical studies of treatment effect. Even where a PBM is included, this may still be insufficient for the needs of the economic evaluation. Trials may have insufficient follow-up, be underpowered to detect relevant events, or include the wrong PBM for the decision-making body. Often this gap is bridged by “mapping”—estimating a relationship between observed clinical outcomes and PBMs, using data from a reference dataset containing both types of information. The estimated statistical model can then be used to predict what the PBM would have been in the clinical study given the available information. There are two approaches to mapping linked to the structure of a PBM. The indirect approach (or response mapping) models the responses to the descriptive system using discrete data models. The expected health utility is calculated as a subsequent step using the estimated probability distribution of health states. The second approach (the direct approach) models the health state utility values directly. Statistical models routinely used in the past for mapping are unable to consider the idiosyncrasies of health utility data. Often they do not work well in practice and can give seriously biased estimates of the value of treatments. Although the bias could, in principle, go in any direction, in practice it tends to result in underestimation of cost effectiveness and consequently distorted funding decisions. This has real effects on patients, clinicians, industry, and the general public. These problems have led some analysts to mistakenly conclude that mapping always induces biases and should be avoided. However, the development and use of more appropriate models has refuted this claim. The need to improve the quality of mapping studies led to the formation of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Mapping to Estimate Health State Utility values from Non-Preference-Based Outcome Measures Task Force to develop good practice guidance in mapping.
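The following is a deliberately naive sketch of the "direct" mapping approach on simulated data: a linear least-squares mapping from a hypothetical clinical score to utility values, of the kind the article notes can perform poorly because it ignores the bounded, skewed nature of utility data. All variable names and numbers are invented for illustration.

```python
import numpy as np

# Simulated reference dataset containing both a clinical score and observed utilities.
rng = np.random.default_rng(3)
n = 500
clinical_score = rng.uniform(0, 10, n)                      # e.g., a hypothetical disease activity index
utility = np.clip(0.95 - 0.06 * clinical_score + rng.normal(0, 0.08, n), -0.2, 1.0)

# Fit the direct mapping by ordinary least squares on the reference dataset.
# In practice this naive model struggles with the ceiling at full health (utility = 1),
# which motivates the more flexible models discussed above.
X = np.column_stack([np.ones(n), clinical_score])
coef, *_ = np.linalg.lstsq(X, utility, rcond=None)

# Predict utilities for a trial that only collected the clinical outcome.
trial_scores = np.array([1.5, 4.0, 8.5])
predicted_utility = np.column_stack([np.ones(len(trial_scores)), trial_scores]) @ coef
print(predicted_utility)
```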

Article

Gene–Environment Interplay in the Social Sciences  

Rita Dias Pereira, Pietro Biroli, Titus Galama, Stephanie von Hinke, Hans van Kippersluis, Cornelius A. Rietveld, and Kevin Thom

Nature (one’s genes) and nurture (one’s environment) jointly contribute to the formation and evolution of health and human capital over the life cycle. This complex interplay between genes and environment can be estimated and quantified using genetic information readily available in a growing number of social science data sets. Using genetic data to improve our understanding of individual decision making and inequality, and to guide public policy, is possible and promising, but requires a grounding in essential genetic terminology, knowledge of the literature in economics and social-science genetics, and a careful discussion of the policy implications and prospects of the use of genetic data in the social sciences and economics.
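As a stylized illustration of how such interplay is quantified in practice, the sketch below builds a toy polygenic score and runs a gene-by-environment interaction regression on simulated data; every weight, variable, and effect size is hypothetical rather than drawn from the literature.

```python
import numpy as np

# Simulated genotypes (allele counts) and an invented set of GWAS weights; in real work the
# weights would come from an independent discovery sample.
rng = np.random.default_rng(4)
n, m = 2000, 50
genotypes = rng.binomial(2, 0.3, size=(n, m)).astype(float)
gwas_weights = rng.normal(0, 0.1, m)

# Polygenic score: standardized weighted sum of variants.
raw_score = genotypes @ gwas_weights
pgs = (raw_score - raw_score.mean()) / raw_score.std()

# Hypothetical binary environment (e.g., exposure to a policy reform) and outcome.
environment = rng.binomial(1, 0.5, n).astype(float)
outcome = 0.3 * pgs + 0.5 * environment + 0.2 * pgs * environment + rng.normal(0, 1, n)

# OLS with an interaction term: the coefficient on pgs * environment captures the G x E interplay.
X = np.column_stack([np.ones(n), pgs, environment, pgs * environment])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(dict(zip(["const", "pgs", "env", "pgs_x_env"], np.round(beta, 2))))
```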

Article

General Equilibrium Theories of Spatial Agglomeration  

Marcus Berliant and Ping Wang

General equilibrium theories of spatial agglomeration are closed models of agent location that explain the formation and growth of cities. There are several types of such theories: conventional Arrow-Debreu competitive equilibrium models and monopolistic competition models, as well as game theoretic models including search and matching setups. Three types of spatial agglomeration forces often come into play: trade, production, and knowledge transmission, under which cities are formed in equilibrium as marketplaces, factory towns, and idea laboratories, respectively. Agglomeration dynamics are linked to urban growth in the long run.