
Article

Geoffrey L. Taylor and Katherine L. Chiou

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. The Andean highland region of South America was a center for the domestication of crops and the development of novel agricultural intensification strategies. These advances provided the social and economic foundations for one of the largest pre-Hispanic states in the Americas—the Inca—as well as numerous preceding and contemporaneous cultures. The legacy created by Andean agriculturalists includes terraced and raised fields that remain in use today as well as globally consumed foods including chili pepper (Capsicum spp.), potato (Solanum tuberosum), and quinoa (Chenopodium quinoa). Research on modern forms of traditional agriculture in South America by ethnographers, geographers, and agronomists can be grouped into three general themes: (1) the physical, social, and ritual practices of farming; (2) the environmental impacts of farming; and (3) agrobiodiversity and genetic conservation of crop varieties. Due to conquest by European invaders in the 16th century and the resulting demographic collapse, aspects of native knowledge and traditions were lost. Consequently, much of what is known about pre-Hispanic traditional agricultural practices is derived from archaeological research. To farm the steep mountainous slopes in the quechua and suni zones, native Andean peoples developed a suite of field types ranging from rainfed sloping fields to irrigated bench terracing that flattened the ground to increase surface area, raised soil temperatures, and reduced soil erosion. In the high plains or puna zone, flat wetlands were transformed into a patchwork of alternating raised fields and irrigation canals. By employing this strategy, Andean peoples created microclimates that resisted frost, managed moisture availability, and improved soil nutrient quality. These agricultural approaches cannot be divorced from enduring Andean cosmological and social concepts such as the ayni and minka exchange-labor systems based on reciprocity and the ayllu, a lineage and community group that also integrates the land itself and the wakas (nonhuman agentive beings) that reside there with the people. To understand traditional agriculture in the highland Andes and how it supported large populations in antiquity, facilitated the rapid expansion of the Inca Empire, and created field systems that are still farmed sustainably by populations today, it is essential to examine not only the physical practices themselves, but also the social context surrounding their development and use in ancient and modern times.

Article

Glenn H. Shepard Jr., Charles R. Clement, Helena Pinto Lima, Gilton Mendes dos Santos, Claide de Paula Moraes, and Eduardo Góes Neves

The tropical lowlands of South America were long thought of as a “counterfeit paradise,” a vast expanse of mostly pristine rainforests with poor soils for farming, limited protein resources, and environmental conditions inimical to the endogenous development of hierarchical human societies. These misconceptions derived largely from a fundamental misunderstanding of the unique characteristics of ancient and indigenous farming and environmental management in lowland South America, which are in turn closely related to the cultural baggage surrounding the term “agriculture.” Archaeological and archaeobotanical discoveries made in the early 21st century have overturned these misconceptions and revealed the true nature of the ancient and traditional food production systems of lowland South America, which involve a complex combination of horticulture, agroforestry, and the management of non-domesticated or incipiently domesticated species in cultural forest landscapes. In this sense, lowland South America breaks the mould of the Old World “farming hypothesis” by revealing cultivation without domestication and domestication without agriculture, a syndrome that has been referred to as “anti-domestication”. These discoveries have contributed to a better understanding of the cultural history of South America, while also suggesting new paradigms of environmental management and food production for the future of this critical and threatened biome.

Article

Marcello Hernández-Blanco and Robert Costanza

“The Anthropocene” has been proposed as the new geological epoch in which we now live. We have left behind the Holocene, an epoch of stable climate conditions that permitted the development of human civilization. To address the challenges of this new epoch, humanity needs to take an active role as stewards of the integrated Earth System, collaborating across scales and levels with a shared vision and values toward maintaining the planet within a safe and just operating space. In September 2015, the United Nations adopted the 2030 Agenda for Sustainable Development, which has at its core 17 Sustainable Development Goals (SDGs). These goals built on and superseded the Millennium Development Goals (MDGs). Unlike the MDGs, they apply to all countries and represent universal goals and targets that articulate the need and opportunity for the global community to build a sustainable and desirable future in an increasingly interconnected world. The global health crisis caused by COVID-19 dealt a heavy blow to an already vulnerable development system, exacerbating many of the challenges that humanity faces in the Anthropocene. The pandemic has touched all segments of the global population and all sectors of the economy, with the world’s poorest and most vulnerable people the most affected. Understanding the interdependence between SDGs is a key area of research and policy, which will require novel approaches to assess and implement systemic global strategies to achieve the 2030 agenda. Global society requires a new vision of the economy, one in which the economy is recognized to be a subsystem of the broader Earth System (a single complex system with reasonably well-defined states and transitions between them), instead of one that views nature as just another source of resources and sink for wastes. This approach will require acknowledging the value of nature, which, although widely recognized in the scientific literature, has often been ignored by decision-makers. Therefore, there is a need to replace the static, linear model of gross domestic product (GDP) with more dynamic, integrated, natural, and human system models that incorporate the dynamics of stocks, flows, trade-offs, and synergies among the full range of variables that affect the SDGs and human and ecosystem well-being. The SDGs will only be achieved if humanity chooses a development path focused on thriving in a broad and integrated way, rather than growing material consumption at all costs. Achieving the SDGs implies a future in which society reconnects with the rest of nature and develops within its planetary boundaries. The new economics, and the visions and strategies built on it, are aimed at achieving these shared global goals.
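The call to replace static GDP accounting with dynamic stock-and-flow models can be made concrete with a minimal sketch. The following toy simulation (all variable names and parameter values are hypothetical, not from the article) treats production as a flow drawn from a regenerating natural-capital stock:

```python
# Minimal stock-flow sketch (illustrative only): an economy drawing on a
# renewable natural-capital stock, in the spirit of the dynamic, integrated
# models the article contrasts with static GDP accounting.
# All parameter values are hypothetical.

REGEN_RATE = 0.05      # natural capital regeneration rate per year
CARRYING_CAP = 100.0   # maximum stock the Earth System can sustain

def simulate(years=200, stock=80.0, extraction=0.03):
    """Euler-step simulation: production is a flow drawn from a stock."""
    history = []
    for _ in range(years):
        regrowth = REGEN_RATE * stock * (1 - stock / CARRYING_CAP)  # logistic regrowth
        production = extraction * stock                             # economic flow
        stock = max(stock + regrowth - production, 0.0)
        history.append((stock, production))
    return history

# A higher extraction rate raises the flow (GDP-like output) in early years
# while eroding the stock that future flows depend on.
sustainable = simulate(extraction=0.03)
overshoot = simulate(extraction=0.09)
print(f"year 200 stock: sustainable={sustainable[-1][0]:.1f}, overshoot={overshoot[-1][0]:.1f}")
```

Raising the extraction rate boosts the GDP-like flow in early years while eroding the stock on which future flows depend, which is the trade-off the article's systems view is meant to capture.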

Article

Jan Zalasiewicz and Colin Waters

The Anthropocene hypothesis—that humans have not only impacted “the environment” but also changed the Earth’s geology—has spread widely through the sciences and humanities. This hypothesis is currently being tested to see whether the Anthropocene may become part of the Geological Time Scale. An Anthropocene Working Group has been established to assemble the evidence. The decision regarding formalization is likely to be taken in the next few years by the International Commission on Stratigraphy, the body that oversees the Geological Time Scale. Whichever way the decision goes, there will remain the reality of the phenomenon and the utility of the concept. The evidence, as outlined here, rests upon a broad range of signatures reflecting humanity’s significant and increasing modification of Earth systems. These may be visible as markers in physical deposits in the form of the greatest expansion of novel minerals in the last 2.4 billion years of Earth history and the development of ubiquitous materials, such as plastics, unique to the Anthropocene. The artefacts we produce to live as modern humans will form the technofossils of the future. Human-generated deposits now extend from our natural habitat on land into our oceans, transported at rates exceeding the sediment carried by rivers by an order of magnitude. That influence extends increasingly underground in our quest for minerals, fuel, and living space, and to develop transport and communication networks. These human trace fossils may be preserved over geological durations, and the evolution of technology has created a new technosphere, yet to evolve into balance with other Earth systems. The expression of the Anthropocene can be seen in chemical markers in sediments and glaciers. Carbon dioxide in the atmosphere has risen by ~45 percent above pre–Industrial Revolution levels, mainly through combustion, over a few decades, of a geological carbon store that took many millions of years to accumulate. Although this may ultimately drive climate change, average global temperature increases and resultant sea-level rises remain comparatively small, as yet. But the shift to isotopically lighter carbon locked into limestones and calcareous fossils will form a permanent record. Nitrogen and phosphorus contents in surface soils have approximately doubled through the increased use of fertilizers to raise agricultural yields, as the human population has also doubled in the last 50 years. Industrial metals, radioactive fallout from atomic weapons testing, and complex organic compounds have been widely dispersed through the environment and become preserved in sediment and ice layers. Despite radical changes to flora and fauna across the planet, the Earth still has most of its complement of biological species. However, current trends of habitat loss and predation may push the Earth into the sixth mass extinction event in the next few centuries. At present the dramatic changes relate to trans-global species invasions and population modification through agricultural development on land and contamination of coastal zones. Considering the entire range of environmental signatures, it is clear that the global, large, and rapid scale of change associated with the mid-20th century makes it the most obvious level to consider as the start of the Anthropocene Epoch.
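For concreteness, the ~45 percent rise can be checked against the commonly cited pre-industrial baseline of about 280 ppm (the baseline figure is standard in the literature, not stated in this summary):

```latex
% Pre-industrial baseline of ~280 ppm is a commonly cited value, not from the article.
280\ \mathrm{ppm} \times (1 + 0.45) \approx 406\ \mathrm{ppm}
```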

Article

Julie Laity

Arid environments cover about one third of the Earth’s surface, comprising the most extensive of the terrestrial biomes. Deserts show considerable individual variation in climate, geomorphic surface expression, and biogeography. Climatically, deserts range from dry interior environments, with large temperature ranges, to humid and relatively cool coastal environments, with small temperature ranges. What all deserts share in common is a consistent deficit of precipitation relative to water loss by evaporation, implying that the biological availability of water is very low. Deserts develop because of climatic (persistent high-pressure cells), topographic (mountain ranges that cause rain shadow effects), and oceanographic (cold currents) factors that limit the amount of rain or snowfall that a region receives. Most global deserts are subtropical in distribution. There is a large range of geomorphic surfaces, including sand sheets and sand seas (ergs), stone pavements, bedrock outcrops, dry lakebeds, and alluvial fans. Vegetation cover is generally sparse, but may be enhanced in areas of groundwater seepage or along river courses. The limited vegetation cover affects fluvial and slope processes and results in an enhanced role for the wind. While the majority of streams in deserts are ephemeral features, both intermittent and perennial rivers develop in response to snowmelt in nearby mountains or runoff from distant, better-watered regions. Most drainage is endoreic, meaning that it flows internally into closed basins and does not reach the sea, being disposed of by seepage and evaporation. The early study of deserts was largely descriptive. More process-based studies commenced with the study of North American deserts in the mid- to late 1800s. Since the late 20th century, research has expanded into many areas of the world, with notable contributions coming from China, but our knowledge of deserts is still more complete in regions such as North America, Australia, Israel, and southern Africa, where access and funding have been more consistently secure. The widespread availability of high-quality remotely sensed images has contributed to the spread of study into new global field areas. The temporal framework for research has also improved, benefiting from improvements in geochronological techniques. Geochronological controls are vital to desert research because most arid regions have experienced significant climatic changes. Deserts have not only expanded or contracted in size, but have experienced changes in the dominant geomorphic processes and biogeographic environment. Contemporary scientific work has also benefited from improvements in technology, notably in surveying techniques, and from the use of quantitative modeling.
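The defining precipitation deficit is often formalized as an aridity index, the ratio of precipitation to potential evapotranspiration. A minimal sketch, using the widely cited UNEP class boundaries (the specific thresholds follow that convention, not this article):

```python
# Sketch of the aridity index AI = P / PET, one common way to formalize
# the "consistent deficit of precipitation relative to water loss by evaporation."
# Class boundaries follow the widely used UNEP (1992) convention.

def aridity_class(precip_mm: float, pet_mm: float) -> str:
    """Classify a climate from annual precipitation and potential evapotranspiration."""
    ai = precip_mm / pet_mm
    if ai < 0.05:
        return "hyper-arid"
    if ai < 0.20:
        return "arid"
    if ai < 0.50:
        return "semi-arid"
    if ai < 0.65:
        return "dry subhumid"
    return "humid"

# Example: ~60 mm of rain against ~2000 mm potential evaporation -> hyper-arid
print(aridity_class(60, 2000))   # hyper-arid
print(aridity_class(350, 1400))  # semi-arid
```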

Article

Saket Pande, Mahendran Roobavannan, Jaya Kandasamy, Murugesu Sivapalan, Daniel Hombing, Haoyang Lyu, and Luuk Rietveld

Water quantity and quality crises are emerging everywhere, and crises of a similar nature continue to appear in new locations. In spite of a long history of investing in sustainable solutions for environmental preservation and improved water supply, these phenomena continue to emerge, with serious economic consequences. Water footprint studies have found it hard to change the culture, that is, the values, beliefs, and norms, surrounding water use in economic production. Consumption of water-intensive products such as livestock is seen as one main reason behind our degrading environment. The culture of water use is indeed a key challenge for water resource economics and development. Based on a review of socio-hydrology and of societies going all the way back to ancient civilizations, a narrative is developed to argue that population growth, migration, technology, and institutions characterize co-evolution in any water-dependent society (i.e., a society in a water-stressed environment). Culture is proposed as an emergent property of such dynamics, with institutions being the substance of culture. Inclusive institutions, strong diversified economies, and resilient societies go hand in hand and emerge alongside the culture of water use. Inclusive institutions, in contrast to extractive institutions, are those in which no small group of agents is able to extract all the surplus from available resources at the cost of the many. Just as values and norms are informed by changing conditions resulting from population and economic growth and climate, so too are economic, technological, and institutional changes shaped by prevailing culture. However, these feedbacks occur at different scales—cultural change being slower than economic development, often leading to “lock-ins” of decisions that are conditioned by prevailing culture. Evidence-based arguments are presented which suggest that any attempt at water policy that ignores the key role culture plays will struggle to be effective. In other words, interventions that are sustainable endogenize culture. For example, changing water policy by taking water away from agriculture and transferring it to the environment, at a time when an economy is not diversified enough to facilitate the needed change in culture, will backfire. Although economic models (and policy based on them) are powerful in predicting actions, that is, how people make choices based on how they value one good versus another, they offer little on how preferences may change over time; the conceptualization of the dynamic role of values and norms remains weak. The socio-hydrological perspective emphasizes the need to acknowledge the often-ignored, central role of endogenous culture in water resource economics and development.
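The time-scale mismatch between fast economic adjustment and slow cultural change can be illustrated with a toy coupled model; this is a schematic sketch with hypothetical variables and rates, not a published socio-hydrological model:

```python
# Toy illustration (not a published socio-hydrology model) of the time-scale
# mismatch the article describes: an economy that adjusts quickly to water
# stress, coupled to a cultural norm that adjusts slowly toward economic
# conditions. All variables and rates are hypothetical.

def coevolve(years=100, economy=1.0, culture=1.0,
             fast=0.5, slow=0.02, water=0.6):
    """Euler steps: culture lags the economy because slow << fast."""
    path = []
    for _ in range(years):
        target = water * culture               # activity permitted by water and norms
        economy += fast * (target - economy)   # fast economic adjustment
        culture += slow * (economy - culture)  # slow cultural adjustment ("lock-in")
        path.append((economy, culture))
    return path

path = coevolve()
print(f"after 5 yr:   economy={path[4][0]:.2f}, culture={path[4][1]:.2f}")
print(f"after 100 yr: economy={path[-1][0]:.2f}, culture={path[-1][1]:.2f}")
```

The economy settles within a few years, while the cultural variable is still drifting decades later, which is the lag behind the "lock-ins" the article describes.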

Article

Katrina Wyatt, Robin Durie, and Felicity Thomas

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. The burden of ill health has shifted, globally, from communicable to non-communicable disease, with poor health clustering in areas of economic deprivation. However, for the most part, public health programs remain focused on changing behaviors associated with poor health (such as smoking or physical inactivity) rather than the contexts that give rise to, and influence, the wide range of behaviors associated with poor health. This way of understanding and responding to population ill health views poor health behavior as a defining “problem” exhibited by a particular group of individuals or a community, which needs to be solved by the intervention of expert practitioners. This sort of approach defines individuals and their communities in terms of deficits, and works on the basis of perceived needs within such communities when seeking to address public health issues. Growing recognition that many of the fundamental determinants of health cannot be attributed solely to individuals, but result instead from the complex interplay between individuals and their social, economic, and cultural environments, has led to calls for new ways of delivering policies and programs aimed at improving health and reducing health inequalities. Such approaches include the incorporation of subjective perspectives and priorities to inform the creation of “health promoting societal contexts.” Alongside this, asset-based approaches to health creation place great emphasis on valuing the skills, knowledge, connections, and potential within a community and seek to identify the protective factors within a neighborhood or organization that support health and wellbeing. Connecting Communities (C2) is a unique asset-based program aimed at creating the conditions for health and wellness within very low-income communities. At the heart of the program is the belief that health emerges from the patterns of relations within neighborhoods, rather than being a static attribute of individuals. C2 seeks to change the nature of the relations both within communities and with service providers (such as the police, housing, education, and health professionals) to co-create responses to issues that are identified by community members themselves. While many of the issues identified concern local environmental conditions, such as vandalism or safe outdoor spaces, many are also contributory determinants of ill health. Listening to people, understanding the social, cultural, and environmental context within which they are located, and supporting new partnerships based on reciprocity and mutual benefit ensures that solutions are grounded in the local context and not externally determined, in turn resulting in sustainable health-creating communities.

Article

Sumit Sharma, Liliana Nunez, and Veerabhadran Ramanathan

Atmospheric brown clouds (ABCs) are widespread pollution clouds that can at times span an entire continent or an ocean basin. ABCs extend vertically from the ground upward to as high as 3 km, and they consist of both aerosols and gases: anthropogenic aerosols such as sulfates, nitrates, organics, and black carbon, as well as natural dust aerosols. Gaseous pollutants that contribute to the formation of ABCs are NOx (nitrogen oxides), SOx (sulfur oxides), VOCs (volatile organic compounds), CO (carbon monoxide), CH4 (methane), and O3 (ozone). The brownish color of the cloud (which is visible when looking at the horizon) is due to absorption of solar radiation at short wavelengths (green, blue, and UV) by organic and black carbon aerosols as well as by NOx. While the local nature of ABCs around polluted cities has been known since the early 1900s, the widespread transoceanic and transcontinental nature of ABCs, as well as their large-scale effects on climate, the hydrological cycle, and agriculture, was discovered inadvertently by the Indian Ocean Experiment (INDOEX), an international field experiment conducted in the 1990s over the Indian Ocean. A major discovery of INDOEX was that ABCs caused drastic dimming at the surface. The magnitude of the dimming was as large as 10–20% (based on a monthly average) over vast areas of land and ocean regions. The dimming was shown to be accompanied by significant atmospheric absorption of solar radiation by black and brown carbon (a form of organic carbon). Black and brown carbon, ozone, and methane contribute as much as 40% to anthropogenic radiative forcing. The dimming by sulfates, nitrates, and carbonaceous (black and organic carbon) species has been shown to disrupt and weaken the monsoon circulation over southern Asia. In addition, the ozone in ABCs leads to a significant decrease in agricultural yields (by as much as 20–40%) in the polluted regions. Most significantly, the aerosols (in ABCs) near the ground lead to about 4 million premature deaths every year. Technological and regulatory measures are available to mitigate most of the pollution resulting from ABCs. The importance of ABCs to global environmental problems led the United Nations Environment Programme (UNEP) to form the international ABC program. This ABC program subsequently led to the identification of short-lived climate pollutants as potent mitigation agents of climate change, and in recognition, UNEP formed the Climate and Clean Air Coalition to deal with these pollutants.
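As a rough, hedged illustration of surface dimming, the Beer–Lambert law relates aerosol optical depth (AOD) to direct-beam attenuation; actual ABC dimming involves the partitioning of scattering and absorption, so the AOD values below are purely indicative:

```python
# Hedged illustration: the direct-beam attenuation implied by an aerosol layer,
# via the Beer-Lambert law I = I0 * exp(-tau). Real ABC dimming also involves
# scattering and diffuse radiation, so this is a simplification.
import math

def direct_beam_fraction(aod: float) -> float:
    """Fraction of the direct solar beam transmitted through aerosol optical depth aod."""
    return math.exp(-aod)

for aod in (0.1, 0.16, 0.22):
    dimming = 1.0 - direct_beam_fraction(aod)
    print(f"AOD {aod:.2f} -> ~{dimming:.0%} direct-beam dimming")
```

Optical depths of roughly 0.1 to 0.22 in this idealized sense would correspond to the 10–20% dimming range reported by INDOEX.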

Article

In 2018 barley accounts for only 5% of the cereal production worldwide, and regionally for up to 40% of cereal production. The cereal represents the oldest crop species and is one of the best adapted crop plants to a broad diversity of climates and environments. Originating from the wild progenitor species Hordeum vulgare ssp. spontaneum, biogeographically located in the Fertile Crescent of the Near East, the domesticated form developed as a founder crop in aceramic Neolithic societies 11,000 years ago, was cultivated in monocultures in Bronze Age Mesopotamia, entered the New World after 1492 ce, reached a state of global distribution in the 1950s and had reached approximately 200 accepted botanical varieties by the year 2000. Its stress tolerance in response to increased aridity and salinity on one hand and adaptability to cool climates on the other, partially explains its broad range of applications for subsistence and economy across different cultures, such as for baking, cooking, beer brewing and as an animal feed. Although the use of fermented starch for producing alcoholic beverages and foods is globally documented in archaeological contexts dating from at least the beginning of the Holocene era, it becomes concrete only in societies with a written culture, such as Bronze Age Mesopotamia and Egypt, where beer played a considerable role in everyday diet and its production represented an important sector of productivity. In 2004 approximately 85% of barley production was destined for feeding animals. However, as a component of the human diet, studies on the health benefits of the micronutrients in barley have found that it has a positive effect on blood cholesterol and glucose levels, and in turn impacts cardiovascular health and diabetes control. The increasing number of barley-breeding programs worldwide focus on improving the processing characteristics, nutritional value, and stress tolerance of barley within the context of global climate change.

Article

Kevin J. Boyle and Christopher F. Parmeter

Benefit transfer is the projection of benefits from one place and time to another time at the same place or to a new place. Thus, benefit transfer includes the adaptation of an original study to a new policy application at the same location or the adaptation to a different location. The appeal of a benefit transfer is that it can be cost effective, both monetarily and in time. Using previous studies, analysts can select existing results to construct a transferred value for the desired amenity influenced by the policy change. Benefit transfer practices are not unique to valuing ecosystem services and are generally applicable to a variety of changes in ecosystem services. An ideal benefit transfer will scale value estimates to both the ecosystem services and the preferences of those who hold values. The article outlines the steps in a benefit transfer, types of transfers, the accuracy of transferred values, and challenges when conducting ecosystem transfers, and ends with recommendations for the implementation of benefit transfers to support decision-making.
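A common concrete form of such scaling is a unit value transfer with an income adjustment; the formula below is standard in the benefit transfer literature, though the article does not prescribe it specifically, and all numbers are hypothetical:

```python
# Sketch of a unit value transfer with an income adjustment, a standard form
# in the benefit transfer literature (the article does not prescribe this
# exact formula): V_policy = V_study * (Y_policy / Y_study) ** elasticity.

def transfer_value(v_study: float, y_study: float, y_policy: float,
                   income_elasticity: float = 1.0) -> float:
    """Adjust a study-site value estimate to a policy site with different income."""
    return v_study * (y_policy / y_study) ** income_elasticity

# Hypothetical: $40/household at a study site with $50,000 mean income,
# transferred to a policy site with $35,000 mean income and elasticity 0.7.
print(f"${transfer_value(40.0, 50_000, 35_000, 0.7):.2f} per household")
```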

Article

Lora Fleming, Niccolò Tempini, Harriet Gordon-Brown, Gordon L. Nichols, Christophe Sarran, Paolo Vineis, Giovanni Leonardi, Brian Golding, Andy Haines, Anthony Kessel, Virginia Murray, Michael Depledge, and Sabina Leonelli

Big data refers to large, complex, potentially linkable data from diverse sources, ranging from the genome and social media, to individual health information and the contributions of citizen science monitoring, to large-scale long-term oceanographic and climate modeling and its processing in innovative and integrated “data mashups.” Over the past few decades, thanks to the rapid expansion of computer technology, there has been a growing appreciation for the potential of big data in environment and human health research. The promise of big data mashups in environment and human health includes the ability to truly explore and understand the “wicked environment and health problems” of the 21st century, from tracking the global spread of the Zika and Ebola virus epidemics to modeling future climate change impacts and adaptation at the city or national level. Other opportunities include the possibility of identifying environment and health hot spots (i.e., locations where people and/or places are at particular risk), where innovative interventions can be designed and evaluated to prevent or adapt to climate and other environmental change over the long term with potential (co-) benefits for health; and of locating and filling gaps in existing knowledge of relevant linkages between environmental change and human health. There is the potential for the increasing control of personal data (both access to and generation of these data), benefits to health and the environment (e.g., from smart homes and cities), and opportunities to contribute via citizen science research and share information locally and globally. At the same time, there are challenges inherent with big data and data mashups, particularly in the environment and human health arena. Environment and health represent very diverse scientific areas with different research cultures, ethos, languages, and expertise. Equally diverse are the types of data involved (including time and spatial scales, and different types of modeled data), often with no standardization of the data to allow easy linkage beyond time and space variables, as data types are mostly shaped by the needs of the communities where they originated and have been used. Furthermore, these “secondary data” (i.e., data re-used in research) are often not even originated for this purpose, a particularly relevant distinction in the context of routine health data re-use. And the ways in which the research communities in health and environmental sciences approach data analysis and synthesis, as well as statistical and mathematical modeling, are widely different. There is a lack of trained personnel who can span these interdisciplinary divides or who have the necessary expertise in the techniques that make adequate bridging possible, such as software development, big data management and storage, and data analyses. Moreover, health data have unique challenges due to the need to maintain confidentiality and data privacy for the individuals or groups being studied, to evaluate the implications of shared information for the communities affected by research and big data, and to resolve the long-standing issues of intellectual property and data ownership occurring throughout the environment and health fields. 
As with other areas of big data, the new “digital data divide” is growing, where some researchers and research groups, or corporations and governments, have the access to data and computing resources while others do not, even as citizen participation in research initiatives is increasing. Finally, with the exception of some business-related activities, funding, especially with the aim of encouraging the sustainability and accessibility of big data resources (from personnel to hardware), is currently inadequate; there is widespread disagreement over what business models can support long-term maintenance of data infrastructures, and those that exist now are often unable to deal with the complexity and resource-intensive nature of maintaining and updating these tools. Nevertheless, researchers, policy makers, funders, governments, the media, and members of the general public are increasingly recognizing the innovation and creativity potential of big data in environment and health and many other areas. This can be seen in how the relatively new and powerful movement of Open Data is being crystallized into science policy and funding guidelines. Some of the challenges and opportunities, as well as some salient examples, of the potential of big data and big data mashup applications to environment and human health research are discussed.
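A minimal sketch of the kind of linkage described, joining health and environmental records on the shared time and space variables (the data, file layout, and column names are hypothetical):

```python
# Illustrative "data mashup": linking health and environmental records on the
# shared time and space variables the article identifies as the common keys.
# All data and column names are hypothetical.
import pandas as pd

health = pd.DataFrame({
    "date": pd.to_datetime(["2020-07-01", "2020-07-01", "2020-07-02"]),
    "region": ["Devon", "Cornwall", "Devon"],
    "admissions": [12, 7, 15],
})
environment = pd.DataFrame({
    "date": pd.to_datetime(["2020-07-01", "2020-07-01", "2020-07-02"]),
    "region": ["Devon", "Cornwall", "Devon"],
    "max_temp_c": [29.5, 27.1, 31.2],
})

# Exact join on the only standardized variables: time and place.
mashup = health.merge(environment, on=["date", "region"], how="inner")
print(mashup)
```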

Article

Holly Morgan, Saran Sohi, and Simon Shackley

Biochar is a charcoal that is used to improve land rather than as a fuel. Biochar is produced from biomass, usually through the process of pyrolysis. Due to the molecular structure and strength of the chemical bonds, the carbon in biochar is in a stable form and not readily mineralized to CO2 (as is the fate of most of the carbon in biomass). Because the carbon in biochar derives (via photosynthesis) from atmospheric CO2, biochar has the potential to be a net negative carbon technology/carbon dioxide removal option. Biochar is not a single homogeneous material. Its composition and properties (including longevity) differ according to feedstock (source biomass), pyrolysis (production) conditions, and its intended application. Because of this variety and heterogeneity, an agreed methodology for calculating biochar’s carbon abatement has so far proved elusive. Meta-analyses increasingly summarize the effects of biochar in pot and field trials. These results indicate that biochar may have important agronomic benefits in poorer acidic tropical and subtropical soils, with one study indicating an average 25% yield increase across all trials. In temperate soils the impact is modest to trivial, and the same study found no significant impact on crop yield arising from biochar amendment. There is much complexity in matching biochar to suitable soil-crop applications, and this challenge has defied the development of simple heuristics to enable implementation. Biochar has great potential as a carbon management technology and as a soil amendment. The lack of technically rigorous methodologies for measuring recalcitrant carbon limits development of the technology for this specific purpose.
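A back-of-envelope calculation shows the shape such an abatement methodology would standardize; the carbon fraction and stable fraction below are illustrative placeholders, not agreed values:

```python
# Back-of-envelope carbon abatement sketch of the kind a formal methodology
# would standardize; the carbon fraction and 100-year stable fraction below
# are hypothetical placeholders, not values from the article.

C_TO_CO2 = 44.0 / 12.0   # molecular weight ratio of CO2 to C

def co2e_abated(biochar_t: float, carbon_fraction: float = 0.75,
                stable_fraction: float = 0.80) -> float:
    """Tonnes CO2e retained: biochar mass x C content x fraction stable x 44/12."""
    return biochar_t * carbon_fraction * stable_fraction * C_TO_CO2

# Hypothetical example: 10 t of biochar applied to soil.
print(f"{co2e_abated(10.0):.1f} t CO2e")  # ~22.0 t CO2e
```

The difficulty the article points to is precisely that the carbon fraction and, especially, the stable fraction vary with feedstock and pyrolysis conditions, so no single pair of defaults is agreed.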

Article

Human activities in the Anthropocene are influencing the twin processes of biodiversity generation and loss in complex ways that threaten the maintenance of biodiversity levels that underpin human well-being. Yet many scientists and practitioners still present a simplistic view of biodiversity as a static stock rather than one determined by a dynamic interplay of feedback processes that are affected by anthropogenic drivers. Biodiversity describes the variety of life on Earth, from the genes within an organism to the ecosystem level. However, this article focuses on variation among living organisms, both within and between species. Within species, biodiversity is reflected in genetic, and consequent phenotypic, variations among individuals. Genetic diversity is generated by germ line mutations, genetic recombination during sexual reproduction, and immigration of new genotypes into populations. Across species, biodiversity is reflected in the number of different species present and also, by some metrics, in the evenness of their relative abundance. At this level, biodiversity is generated by processes of speciation and immigration of new species into an area. Anthropogenic drivers affect all these biodiversity generation processes, while the levels of genetic diversity can feed back and affect the level of species diversity, and vice versa. Therefore, biodiversity maintenance is a complex balance of processes and the biodiversity levels at any point in time may not be at equilibrium. A major concern for humans is that our activities are driving rapid losses of biodiversity, which outweigh by orders of magnitude the processes of biodiversity generation. A wide range of species and genetic diversity could be necessary for the provision of ecosystem functions and services (e.g., in maintaining the nutrient cycling, plant productivity, pollination, and pest control that underpin crop production). The importance of biodiversity becomes particularly marked over longer time periods, and especially under varying environmental conditions. In terms of biodiversity losses, there are natural processes that cause roughly continuous, low-level losses, but there is also strong evidence from fossil records for transient events in which exceptionally large loss of biodiversity has occurred. These major extinction episodes are thought to have been caused by various large-scale environmental perturbations, such as volcanic eruptions, sea-level falls, climatic changes, and asteroid impacts. From all these events, biodiversity has shown recovery over subsequent calmer periods, although the composition of higher-level evolutionary taxa can be significantly altered. In the modern era, biodiversity appears to be undergoing another mass extinction event, driven by large-scale human impacts. The primary mechanisms of biodiversity loss caused by humans vary over time and by geographic region, but they include overexploitation, habitat loss, climate change, pollution (e.g., nitrogen deposition), and the introduction of non-native species. It is worth noting that human activities may also lead to increases in biodiversity in some areas through species introductions and climatic changes, although these overall increases in species richness may come at the cost of loss of native species, and with uncertain effects on ecosystem service delivery. 
Genetic diversity is also affected by human activities, with many examples of erosion of diversity through crop and livestock breeding or through the decline in abundance of wild species populations. Significant future challenges are to develop better ways to monitor the drivers of biodiversity loss and biodiversity levels themselves, making use of new technologies, and improving coverage across geographic regions and taxonomic scope. Rather than treating biodiversity as a simple stock at equilibrium, developing a deeper understanding of the complex interactions—both between environmental drivers and between genetic and species diversity—is essential to manage and maintain the benefits that biodiversity delivers to humans, as well as to safeguard the intrinsic value of the Earth’s biodiversity for future generations.
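Species number and the evenness of relative abundance are commonly combined in metrics such as the Shannon index and Pielou’s evenness; the sketch below uses these standard formulas as an example (the article itself does not name a specific metric):

```python
# The "number of different species" and the "evenness of their relative
# abundance" are commonly combined in the Shannon index H and Pielou's
# evenness J = H / ln S. These are standard metrics, offered here as examples.
import math

def shannon_and_evenness(abundances: list[float]) -> tuple[float, float]:
    """Return (H, J) for a list of species abundances."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    h = -sum(p * math.log(p) for p in props)
    j = h / math.log(len(props)) if len(props) > 1 else 0.0
    return h, j

even = shannon_and_evenness([25, 25, 25, 25])  # 4 species, perfectly even
skewed = shannon_and_evenness([97, 1, 1, 1])   # same richness, low evenness
print(f"even:   H={even[0]:.2f}, J={even[1]:.2f}")    # H=1.39, J=1.00
print(f"skewed: H={skewed[0]:.2f}, J={skewed[1]:.2f}")  # H=0.17, J=0.12
```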

Article

Peter Kareiva and Isaac Kareiva

The concept of biodiversity hotspots arose as a science-based framework with which to identify high-priority areas for habitat protection and conservation—often in the form of nature reserves. The basic idea is that with limited funds and competition from humans for land, we should use range maps and distributional data to protect areas that harbor the greatest biodiversity and that have experienced the greatest habitat loss. In its early application, much analysis and scientific debate went into asking the following questions: Should all species be treated equally? Do endemic species matter more? Should the magnitude of threat matter? Does evolutionary uniqueness matter? And if one has good data on one broad group of organisms (e.g., plants or birds), does it suffice to focus on hotspots for a few taxonomic groups and then expect to capture all biodiversity broadly? Early applications also recognized that hotspots could be identified at a variety of spatial scales—from global to continental, to national to regional, to even local. Hence, within each scale, it is possible to identify biodiversity hotspots as targets for conservation. In the last 10 years, the concept of hotspots has been enriched to address some key critiques, including the problem of ignoring important areas that might have low biodiversity but that are highly valued because of charismatic wild species or critical ecosystem services. Analyses revealed that although the spatial correlation between high-diversity areas and high-ecosystem-service areas is low, it is possible to use quantitative algorithms that achieve both high protection for biodiversity and high protection for ecosystem services without increasing the required area as much as might be expected (a simple example of such an algorithm is sketched below). Currently, a great deal of research asks what the impact of climate change on biodiversity hotspots will be, and to what extent conservation can maintain high biodiversity in the face of climate change. Two important approaches to this are detailed models and statistical assessments that relate species distribution to climate, or alternatively “conserving the stage” for high biodiversity, where the “stage” refers to regions with topographies or habitat heterogeneity of the sort expected to generate high species richness. Finally, conservation planning has most recently embraced what is in some sense the inverse of biodiversity hotspots—what we might call conservation wastelands. This approach recognizes that in the Anthropocene epoch, human development and infrastructure are so vast that in addition to using data to identify biodiversity hotspots, we should use data to identify highly degraded habitats and ecosystems. These degraded lands can then become priority development areas—for wind farms, solar energy facilities, oil palm plantations, and so forth. Conservation plans commonly pair maps of biodiversity hotspots with maps of degraded lands that highlight areas for development. By putting the two maps together, it should be possible to achieve much more effective conservation, because there will be provision of habitat for species as well as economic development—something that can obtain broader political support than simply highlighting biodiversity hotspots.
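One simple member of this family of site-selection algorithms is greedy complementarity selection, sketched below with hypothetical data; real planning tools (e.g., Marxan) are considerably more sophisticated:

```python
# Sketch of a greedy complementarity algorithm, one simple member of the
# family of quantitative site-selection algorithms the article alludes to.
# The species lists are hypothetical and purely illustrative.

def greedy_reserve(sites: dict[str, set[str]], budget: int) -> list[str]:
    """Pick up to `budget` sites, each time adding the site that covers the
    most species not yet represented in the reserve network."""
    covered: set[str] = set()
    chosen: list[str] = []
    for _ in range(budget):
        if not sites:
            break
        best = max(sites, key=lambda s: len(sites[s] - covered))
        if not sites[best] - covered:
            break  # nothing new left to add
        chosen.append(best)
        covered |= sites.pop(best)
    return chosen

# Hypothetical species lists per candidate site.
candidates = {
    "A": {"frog", "orchid", "wren"},
    "B": {"frog", "orchid"},
    "C": {"lynx", "wren"},
    "D": {"lynx", "beetle"},
}
print(greedy_reserve(candidates, budget=2))  # ['A', 'D'] covers all 5 species
```

Note that the second pick is chosen for what it adds to the first (complementarity), not for its own richness, which is why site D beats the richer-looking site B.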

Article

Although the concept of biodiversity emerged 30 years ago, patterns and processes influencing ecological diversity have been studied for more than a century. Historically, ecological processes tended to be considered as occurring in local habitats that were spatially homogeneous and temporally at equilibrium. Initially considered as a constraint to be avoided in ecological studies, spatial heterogeneity was progressively recognized as critical for biodiversity. This resulted, in the 1970s, in the emergence of a new discipline, landscape ecology, whose major goal is to understand how spatial and temporal heterogeneity influence biodiversity. To achieve this goal, researchers came to realize that a fundamental issue revolves around how they choose to conceptualize and measure heterogeneity. Indeed, observed landscape patterns and their apparent relationship with biodiversity often depend on the scale of observation and the model used to describe the landscape. Due to the strong influence of island biogeography, landscape ecology has focused primarily on spatial heterogeneity. Several landscape models were conceptualized, allowing for the prediction and testing of distinct but complementary effects of landscape heterogeneity on species diversity. We now have ample empirical evidence that patch structure, patch context, and mosaic heterogeneity all influence biodiversity. More recently, the increasing recognition of the role of temporal scale has led to the development of new conceptual frameworks acknowledging that landscapes are not only heterogeneous but also dynamic. The current challenge remains to truly integrate both spatial and temporal heterogeneity in studies on biodiversity. This integration is even more challenging when considering that biodiversity often responds to environmental changes with considerable time lags, and multiple drivers of global changes are interacting, resulting in non-additive and sometimes antagonistic effects. Recent technological advances in remote sensing, the availability of massive amounts of data, and long-term studies represent, however, very promising avenues to improve our understanding of how spatial and temporal heterogeneity influence biodiversity.

Article

Ihtiyor Bobojonov

Bioeconomic models are analytical tools that integrate biophysical and economic models. These models allow for analysis of the biological and economic changes caused by human activities. The biophysical and economic components of these models are developed based on historical observations or theoretical relations. Technically, these models may have various levels of complexity in terms of the equation systems considered, the activities modeled, and the programming languages used. Often, the biophysical components of the models include crop or hydrological models. The core economic components of these models are optimization or simulation models established according to neoclassical economic theories. The models are often developed at farm, country, and global scales, and are used in various fields, including the agriculture, fisheries, forestry, and environmental sectors. Bioeconomic models are commonly used in research on environmental externalities associated with policy reforms and technological modernization, including analysis of the impacts and negative consequences of climate change. A large number of studies and reports on bioeconomic models exist, yet there is a lack of studies describing the multiple uses of these models across different disciplines.
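A minimal sketch of the economic core of such a model: a farm-level linear program maximizing gross margin under land and water constraints (the crops, margins, and coefficients are hypothetical):

```python
# Minimal farm-level optimization of the kind embedded in bioeconomic models:
# profit-maximizing crop areas under land and water constraints, solved as a
# linear program. Crop names, margins, and coefficients are hypothetical; the
# biophysical component would normally supply the yield/water coefficients.
from scipy.optimize import linprog

# Decision variables: hectares of wheat and cotton.
gross_margin = [400.0, 900.0]     # $/ha (linprog minimizes, so negate below)
water_use = [3_000.0, 8_000.0]    # m^3/ha
land_ha, water_m3 = 100.0, 450_000.0

result = linprog(
    c=[-m for m in gross_margin],       # maximize total gross margin
    A_ub=[[1.0, 1.0], water_use],       # land and water constraints
    b_ub=[land_ha, water_m3],
    bounds=[(0, None), (0, None)],
    method="highs",
)
wheat, cotton = result.x
print(f"wheat {wheat:.0f} ha, cotton {cotton:.0f} ha, margin ${-result.fun:,.0f}")
```

In a full bioeconomic model, a crop or hydrological model would generate the water-use and yield coefficients, and policy scenarios would enter as changes to the constraints or prices.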

Article

Industrialized livestock production can be characterized by five key attributes: confinement feeding of animals, separation of feed and livestock production, specialization, large size, and close vertical linkages with buyers. Industrialized livestock operations—popularly known as CAFOs, for Concentrated Animal Feeding Operations—have spread rapidly in developed and developing countries; by the early 21st century, they accounted for three quarters of poultry production and over half of global pork production, and held a growing foothold in dairy production. Industrialized systems have created significant improvements in agricultural productivity, leading to greater output of meat and dairy products for given commitments of land, feed, labor, housing, and equipment. They have also been effective at developing, applying, and disseminating research leading to persistent improvements in animal genetics, breeding, feed formulations, and biosecurity. The reduced prices associated with productivity improvements support increased meat and dairy product consumption in low- and middle-income countries, while reducing the resources used for such consumption in higher-income countries. The high stocking densities associated with confined feeding also exacerbate several social costs associated with livestock production. Animals in high-density environments may be exposed to diseases, subject to attacks from other animals, and unable to engage in natural behaviors, raising concerns about higher levels of fear, pain, stress, and boredom. Such animal welfare concerns have gained greater salience in recent years. By consolidating large numbers of animals in a location, industrial systems also concentrate animal wastes, often at levels that exceed the capacity of local cropland to absorb the nutrients in manure. While the productivity improvements associated with industrial systems reduce the resource demands of agriculture, excessive localized concentrations of manure can lead to environmental damage through contamination of ground and surface water and through volatilization of nitrogen nutrients into airborne pollutants. Finally, animals in industrialized systems are often provided with antibiotics in their feed or water, in order to treat and prevent disease, but also to realize improved feed absorption (“a production purpose”). Bacteria are developing resistance to many important antibiotic drugs; the extensive use of such drugs in human and animal medicine has contributed to the spread of antibiotic resistance, with consequent health risks to humans. The social costs associated with industrialized production have led to a range of regulatory interventions, primarily in North America and Europe, as well as private sector attempts to alter the incentives that producers face through the development of labels and through associated adjustments within supply chains.

Article

Jorge H. García and Thomas Sterner

Economists argue that carbon taxation (and more generally carbon pricing) is the single most powerful way to combat climate change. Because this claim is controversial, it needs to be explained carefully; to be precise, the efficiency gains are largest when the costs of abatement are strongly heterogeneous. This is often—but not always—the case. When it is not, standards can fill much the same role. To internalize the climate externality, economic efficiency calls for a global carbon tax (or price) that is equal to the global damage or the so-called social cost of carbon. However, equity considerations, as well as existing geographical and sectoral differences in the effectiveness of carbon taxation at reducing emissions, suggest earlier implementation of relatively high taxation levels in some sectors or countries—for instance, among richer economies, followed by a more gradual phase-in among low-income countries. The number of national and subnational carbon pricing policies that have been implemented around the world during the first years following the Paris Agreement of 2015 is significant. By 2020, these programs covered 22% of global emissions with an average carbon price (weighted by the share of emissions covered) of USD15/tCO2 and a maximum price of USD120/tCO2. The share of emissions covered by carbon pricing, as well as carbon prices themselves, is expected to rise consistently throughout the decade 2021–2030 and beyond. Many experts agree that the social cost of carbon is in the range USD40–100/tCO2. Anti-climate lobbying, public opposition, and lack of understanding of the instrument are among the key challenges faced by carbon taxation. Opportunities for further expansion of carbon taxation lie in increased climate awareness, the communicative resources governments have to help citizens understand the logic behind carbon taxation, and the earmarking of carbon tax revenues to address issues that are important to the public, such as fairness.
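The heterogeneity argument can be made concrete with a two-firm example using quadratic abatement costs (all numbers are hypothetical): a tax equalizes marginal abatement costs across firms, while a uniform standard does not.

```python
# Worked example of the heterogeneity argument: with quadratic abatement costs
# cost_i(a) = 0.5 * c_i * a^2, a tax t induces each firm to abate until its
# marginal cost equals the tax (a_i = t / c_i), whereas a uniform standard
# forces equal abatement on unequal firms. Numbers are hypothetical.

def total_cost(abatements, costs):
    return sum(0.5 * c * a**2 for a, c in zip(abatements, costs))

costs = [10.0, 40.0]   # firm 2's abatement is 4x as expensive per unit
target = 10.0          # total abatement required (units of emissions)

# Uniform standard: each firm abates 5 units regardless of its costs.
standard = total_cost([5.0, 5.0], costs)

# Tax t chosen so total abatement hits the target: t/c1 + t/c2 = target -> t = 80.
t = target / sum(1.0 / c for c in costs)
tax = total_cost([t / c for c in costs], costs)

print(f"standard: {standard:.0f}, tax: {tax:.0f}")  # standard: 625, tax: 400
```

The same total abatement costs 625 under the standard but only 400 under the tax, because the low-cost firm does more of the work; if the two cost coefficients were equal, the two instruments would cost the same, which is the article's point about heterogeneity.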

Article

The effect of climate change on hydrology and water resources is possibly one of the most important current environmental challenges, and it will remain important for the rest of the 21st century. Climate change is anticipated to intensify the hydrological cycle and to change the temporal and spatial distribution patterns of water resources. It is predicted to increase the frequency and intensity of extreme hydrological events, such as heavy rainfall and floods, but in some locations also droughts. Water-related hazards occur due to complex interactions between atmospheric and hydrological systems. These events can then cause economic disasters, societal disturbances, and environmental impacts, which can pose a major threat to lives and livelihoods if they happen in places that are exposed and vulnerable to them. The economic impacts of extreme hydrological events can be separated into direct damage and indirect losses. Direct damage includes the damage to fixed assets and capital; losses of raw materials, crops, and extractable natural resources; and, most importantly, mortality, morbidity, and population displacement. All can be a direct consequence of the extreme hydrological event. Indirect losses are reductions in economic activity, particularly the production of goods and services, which declines after the disaster and because of it. Possibly the most damaging hydro-meteorological hazard, drought, is also the one that is least understood and the most difficult to quantify—even its onset is often difficult to identify. Drought is recognized as being associated with some of the most high-profile humanitarian disasters of past years, threatening the lives and livelihoods of millions of people, particularly those living in semi-arid and arid regions. Drought impacts depend on a set of weather parameters—high temperatures, low humidity, the timing of rain, and the intensity and duration of precipitation, as well as its onset and termination—and they depend on the population and assets and their vulnerabilities. While drought has wide-ranging effects on many economic sectors, the agricultural sector bears much of the impact, as it is very dependent on precipitation and evapotranspiration. Approximately 1.3 billion people rely on agriculture as their main source of income. In developing countries, the agriculture sector absorbs up to 80% of all direct damages from droughts. Droughts may be the biggest threat to food security and rural livelihoods globally, and they can increase local poverty, displace large numbers of people, and hinder the already fragile progress that has been made toward the achievement of the Sustainable Development Goals (SDGs). As such, understanding droughts’ impacts, identifying ways to prevent or ameliorate them, and preventing further deterioration in the climatic conditions and social vulnerabilities that are their root causes are all of utmost importance.
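As a hedged illustration of the onset-detection problem, drought indices standardize recent precipitation against the local climatology; the sketch below is a crude z-score stand-in for indices such as the Standardized Precipitation Index (SPI), with hypothetical data:

```python
# Simplified illustration of quantifying drought onset: a z-score of recent
# precipitation against the local climatology, a crude stand-in for indices
# such as the Standardized Precipitation Index (SPI). Data are hypothetical.
import statistics

def precip_zscore(recent_mm: float, climatology_mm: list[float]) -> float:
    """Standardize recent precipitation against the historical record."""
    mean = statistics.mean(climatology_mm)
    sd = statistics.stdev(climatology_mm)
    return (recent_mm - mean) / sd

history = [510, 470, 530, 495, 560, 480, 525, 505]  # annual totals, mm
z = precip_zscore(320, history)
print(f"z = {z:.1f}" + ("  -> possible drought onset" if z <= -1.0 else ""))
```

Operational indices fit a probability distribution rather than assuming normality and are computed over multiple accumulation windows, which is part of why onset identification remains difficult.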

Article

Driving forces for natural soil salinity and alkalinity are climate, rock weathering, ion exchange, and mineral equilibria reactions that ultimately control the chemical composition of soil and water. The major weathering reactions that produce soluble ions are tabulated in the article. Where evapotranspiration is greater than precipitation, downward water movement is insufficient to leach solutes out of the soil profile, and salts can precipitate. Microbes involved in organic matter mineralization, and thus in the carbon, nitrogen, and sulfur biogeochemical cycles, are also implicated. Seasonal contrast and evaporative concentration during dry periods accelerate short-term oxidation-reduction reactions and local and regional accumulation of carbonate and sulfur minerals. The presence of salts and alkaline conditions, together with the occurrence of drought and seasonal waterlogging, creates some of the most extreme soil environments, where only specially adapted organisms are able to survive. Sodic soils are alkaline, rich in sodium carbonates, with an exchange complex dominated by sodium ions. Such sodic soils, when low in other salts, exhibit dispersive behavior, and they are difficult to manage for cropping. Maintaining the productivity of sodic soils requires control of the flocculation-dispersion behavior of the soil. Poor land management can also lead to anthropogenically induced secondary salinity. New developments in physical chemistry are providing insights into ion exchange and how it controls flocculation-dispersion in soil. New water and solute transport models are enabling better options for remediation of saline and/or sodic soils.
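Sodium dominance of the exchange complex is conventionally quantified by the sodium adsorption ratio (SAR); the formula is standard in soil science, while the sodic threshold cited in the comment follows common usage rather than this article:

```python
# The sodium hazard of a soil solution or irrigation water is conventionally
# summarized by the sodium adsorption ratio, SAR = Na / sqrt((Ca + Mg) / 2),
# with concentrations in mmol of charge per litre (meq/L). The SAR > ~13
# threshold for sodic soils follows common (e.g., USDA) usage, not the article.
import math

def sar(na: float, ca: float, mg: float) -> float:
    """Sodium adsorption ratio; inputs in mmol of charge per litre (meq/L)."""
    return na / math.sqrt((ca + mg) / 2.0)

value = sar(na=40.0, ca=4.0, mg=4.0)  # hypothetical sodium-dominated solution
print(f"SAR = {value:.1f}" + ("  (sodic range)" if value > 13 else ""))
```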