Jan Zalasiewicz and Colin Waters
The Anthropocene hypothesis—that humans have not only impacted “the environment” but also changed the Earth’s geology—has spread widely through the sciences and humanities. The hypothesis is currently being tested to see whether the Anthropocene may become part of the Geological Time Scale. An Anthropocene Working Group has been established to assemble the evidence. The decision regarding formalization is likely to be taken in the next few years by the International Commission on Stratigraphy, the body that oversees the Geological Time Scale. Whichever way the decision goes, the reality of the phenomenon and the utility of the concept will remain.
The evidence, as outlined here, rests upon a broad range of signatures reflecting humanity’s significant and increasing modification of Earth systems. These may be visible as markers in physical deposits: the greatest expansion of novel minerals in the last 2.4 billion years of Earth history, and the development of ubiquitous materials, such as plastics, unique to the Anthropocene. The artefacts we produce to live as modern humans will form the technofossils of the future. Human-generated deposits now extend from our natural habitat on land into our oceans, transported at rates exceeding the sediment carried by rivers by an order of magnitude. That influence extends increasingly underground in our quest for minerals, fuel, and living space, and in the development of transport and communication networks. These human trace fossils may be preserved over geological durations, and the evolution of technology has created a new technosphere, yet to evolve into balance with other Earth systems.
The expression of the Anthropocene can also be seen in chemical markers preserved in sediments and glaciers. Carbon dioxide in the atmosphere has risen by ~45 percent above pre–Industrial Revolution levels, mainly through combustion, over a few decades, of a geological carbon store that took many millions of years to accumulate. Although this may ultimately drive climate change, average global temperature increases and the resultant sea-level rises remain comparatively small, as yet. But the shift to isotopically lighter carbon locked into limestones and calcareous fossils will form a permanent record. Nitrogen and phosphorus contents in surface soils have approximately doubled through increased use of fertilizers to raise agricultural yields as the human population has doubled over the last 50 years. Industrial metals, radioactive fallout from atomic weapons testing, and complex organic compounds have been widely dispersed through the environment and become preserved in sediment and ice layers.
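The ~45 percent figure can be checked with back-of-envelope arithmetic. A minimal sketch in Python, assuming a pre-industrial baseline of roughly 280 ppm and a recent concentration of roughly 405 ppm (both values are approximations introduced here for illustration, not taken from the text):

```python
# Rough check of the ~45 percent rise in atmospheric CO2.
# Assumed values: pre-industrial ~280 ppm, mid-2010s ~405 ppm.
pre_industrial_ppm = 280.0
recent_ppm = 405.0

rise = (recent_ppm - pre_industrial_ppm) / pre_industrial_ppm
print(f"CO2 rise above pre-industrial levels: {rise:.0%}")  # prints 45%
```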
Despite radical changes to flora and fauna across the planet, the Earth still has most of its complement of biological species. However, current trends of habitat loss and predation may push the Earth into the sixth mass extinction event in the next few centuries. At present the dramatic changes relate to trans-global species invasions and population modification through agricultural development on land and contamination of coastal zones.
Considering the entire range of environmental signatures, it is clear that the global extent, magnitude, and rapidity of the changes associated with the mid-20th century make that interval the most obvious level to consider as the start of the Anthropocene Epoch.
Peter Kareiva and Isaac Kareiva
The concept of biodiversity hotspots arose as a science-based framework with which to identify high-priority areas for habitat protection and conservation—often in the form of nature reserves. The basic idea is that with limited funds and competition from humans for land, we should use range maps and distributional data to protect areas that harbor the greatest biodiversity and that have experienced the greatest habitat loss. In its early application, much analysis and scientific debate went into asking the following questions: Should all species be treated equally? Do endemic species matter more? Should the magnitude of threat matter? Does evolutionary uniqueness matter? And if one has good data on one broad group of organisms (e.g., plants or birds), does it suffice to focus on hotspots for a few taxonomic groups and then expect to capture all biodiversity broadly? Early applications also recognized that hotspots could be identified at a variety of spatial scales—from global to continental, to national to regional, to even local. Hence, within each scale, it is possible to identify biodiversity hotspots as targets for conservation.
In the last 10 years, the concept of hotspots has been enriched to address some key critiques, including the problem of ignoring important areas that might have low biodiversity but that are highly valued because of charismatic wild species or critical ecosystem services. Analyses revealed that although the spatial correlation between high-diversity areas and high-ecosystem-service areas is low, it is possible to use quantitative algorithms that achieve both high protection for biodiversity and high protection for ecosystem services without increasing the required area as much as might be expected.
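The quantitative algorithms mentioned above are commonly complementarity-based site-selection heuristics: each step adds the site that contributes the most not-yet-covered features per unit of area. A minimal sketch of that idea follows; the site names, species, services, and areas are invented purely for illustration.

```python
# Greedy complementarity-based site selection: at each step, pick the site
# that adds the most not-yet-covered species and services per unit area,
# until the area budget is exhausted. All data here are hypothetical.
sites = {
    "A": {"species": {"s1", "s2"}, "services": {"water"}, "area": 10},
    "B": {"species": {"s2", "s3"}, "services": {"carbon"}, "area": 8},
    "C": {"species": {"s4"}, "services": {"water", "carbon"}, "area": 5},
    "D": {"species": {"s1"}, "services": set(), "area": 12},
}

def greedy_select(sites, area_budget):
    chosen, covered_species, covered_services = [], set(), set()
    remaining, used_area = dict(sites), 0
    while remaining:
        def gain(name):
            site = remaining[name]
            new = (len(site["species"] - covered_species)
                   + len(site["services"] - covered_services))
            return new / site["area"]  # complementarity per unit area
        best = max(remaining, key=gain)
        if gain(best) == 0 or used_area + remaining[best]["area"] > area_budget:
            break  # nothing new to add, or the best site does not fit
        site = remaining.pop(best)
        chosen.append(best)
        covered_species |= site["species"]
        covered_services |= site["services"]
        used_area += site["area"]
    return chosen, covered_species, covered_services

chosen, species, services = greedy_select(sites, area_budget=25)
print(chosen, species, services)
```

With these toy data the small multi-service site is selected first. Real planning tools such as Marxan or Zonation use far richer objective functions, but the underlying complementarity logic is similar.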
Currently, a great deal of research asks what the impact of climate change on biodiversity hotspots will be, and to what extent conservation can maintain high biodiversity in the face of climate change. Two important approaches are detailed models and statistical assessments that relate species distributions to climate and, alternatively, “conserving the stage” for high biodiversity, whereby the stage comprises regions with topographies or habitat heterogeneity of the sort expected to generate high species richness.
Finally, conservation planning has most recently embraced what is in some sense the inverse of biodiversity hotspots—what we might call conservation wastelands. This approach recognizes that in the Anthropocene epoch, human development and infrastructure are so vast that in addition to using data to identify biodiversity hotspots, we should use data to identify highly degraded habitats and ecosystems. These degraded lands can then become priority development areas—for wind farms, solar energy facilities, oil palm plantations, and so forth. Conservation plans therefore increasingly pair maps of biodiversity hotspots with maps of degraded lands that highlight areas for development. By putting the two maps together, it should be possible to achieve much more effective conservation, because the plan provides both habitat for species and land for economic development—something that can attract broader political support than highlighting biodiversity hotspots alone.
Although the concept of biodiversity emerged 30 years ago, patterns and processes influencing ecological diversity have been studied for more than a century. Historically, ecological processes tended to be considered as occurring in local habitats that were spatially homogeneous and temporally at equilibrium. Initially considered as a constraint to be avoided in ecological studies, spatial heterogeneity was progressively recognized as critical for biodiversity. This resulted, in the 1970s, in the emergence of a new discipline, landscape ecology, whose major goal is to understand how spatial and temporal heterogeneity influence biodiversity. To achieve this goal, researchers came to realize that a fundamental issue revolves around how they choose to conceptualize and measure heterogeneity. Indeed, observed landscape patterns and their apparent relationship with biodiversity often depend on the scale of observation and the model used to describe the landscape. Due to the strong influence of island biogeography, landscape ecology has focused primarily on spatial heterogeneity. Several landscape models were conceptualized, allowing for the prediction and testing of distinct but complementary effects of landscape heterogeneity on species diversity. We now have ample empirical evidence that patch structure, patch context, and mosaic heterogeneity all influence biodiversity. More recently, the increasing recognition of the role of temporal scale has led to the development of new conceptual frameworks acknowledging that landscapes are not only heterogeneous but also dynamic. The current challenge remains to truly integrate both spatial and temporal heterogeneity in studies on biodiversity. This integration is even more challenging when considering that biodiversity often responds to environmental changes with considerable time lags, and multiple drivers of global changes are interacting, resulting in non-additive and sometimes antagonistic effects. 
Recent technological advances in remote sensing, the availability of massive amounts of data, and long-term studies represent, however, very promising avenues to improve our understanding of how spatial and temporal heterogeneity influence biodiversity.
Confidence in the projected impacts of climate change on agricultural systems has increased substantially since the first Intergovernmental Panel on Climate Change (IPCC) reports. In Africa, much work has gone into downscaling global climate models to understand regional impacts, but there remains a dearth of local-level understanding of impacts and of communities’ capacity to adapt. It is well understood that Africa is vulnerable to climate change, not only because of its high exposure, but also because many African communities lack the capacity to respond or adapt to its impacts. Warming trends have already become evident across the continent, and it is likely that mean annual temperatures across the continent will rise more than 2°C above their 2000 levels by 2100. Added to this warming trend, changes in precipitation patterns are also of concern: Even if rainfall remains constant, increasing temperatures will amplify existing water stress, putting even more pressure on agricultural systems, especially in semiarid areas. In general, high temperatures and changes in rainfall patterns are likely to reduce cereal crop productivity, and new evidence is emerging that high-value perennial crops will also be negatively impacted by rising temperatures. Pressures from pests, weeds, and diseases are also expected to increase, with detrimental effects on crops and livestock.
Much of African agriculture’s vulnerability to climate change lies in the fact that its agricultural systems remain largely rain-fed and underdeveloped, as the majority of Africa’s farmers are small-scale farmers with few financial resources, limited access to infrastructure, and disparate access to information. At the same time, as these systems are highly reliant on their environment, and farmers are dependent on farming for their livelihoods, their diversity, context specificity, and the existence of generations of traditional knowledge offer elements of resilience in the face of climate change. Overall, however, the combination of climatic and nonclimatic drivers and stressors will exacerbate the vulnerability of Africa’s agricultural systems to climate change, but the impacts will not be universally felt. Climate change will impact farmers and their agricultural systems in different ways, and adapting to these impacts will need to be context-specific.
Adaptation efforts are increasing across the continent, but it is expected that in the long term these will be insufficient to enable communities to cope with the changes wrought by longer-term climate change. African farmers are increasingly adopting a variety of conservation and agroecological practices such as agroforestry, contouring, terracing, mulching, and no-till. These practices have the twin benefits of lowering carbon emissions while adapting to climate change, as well as broadening the sources of livelihoods for poor farmers, but there are constraints to their widespread adoption. These challenges vary from insecure land tenure to difficulties with knowledge-sharing.
While African agriculture faces exposure to climate change as well as broader socioeconomic and political challenges, many of its diverse agricultural systems remain resilient. As the continent with the highest population growth rate, rapid urbanization trends, and rising GDP in many countries, Africa’s agricultural systems will need to become adaptive to more than just climate change as the uncertainties of the 21st century unfold.
Regimes of environmental stress are exceedingly complex. Particular stressors exist within continua of intensity of environmental factors. Those factors interact with each other, and their detrimental effects on organisms are manifest only at relatively high or low strengths of exposure—in fact, many of them are beneficial at intermediate levels of intensity. Although a diversity of environmental factors is manifest at any time and place, only one or a few of them tend to be dominant as stressors. It is useful to distinguish between stressors that occur as severe events (disturbances) and those that are chronic in their exposure, and to aggregate the kinds of stressors into categories (while noting some degree of overlap among them).
Climatic stressors are associated with extremes of temperature, solar radiation, wind, moisture, and combinations of these factors. They act as stressors if their condition is either insufficient or excessive, in comparison with the needs and comfort zones of organisms or ecosystem processes. Chemical stressors involve environments in which the availability of certain substances is too low to satisfy biological needs, or high enough to cause toxicity or another physiological detriment to organisms or to higher-level attributes of ecosystems. Wildfire is a disturbance that involves the combustion of much of the biomass of an ecosystem, affecting organisms by heat, physical damage, and toxic substances. Physical stress is a disturbance in which an exposure to kinetic energy is intense enough to damage organisms and ecosystems (such as a volcanic blast, seismic sea wave, ice scouring, or anthropogenic explosion or trampling).
Biological stressors are associated with interactions occurring among organisms. They may be directly caused by such trophic interactions as herbivory, predation, and parasitism. They may also indirectly affect the intensity of physical or chemical stressors, as when competition affects the availability of nutrients, moisture, or space.
Extreme environments are characterized by severe regimes of stressors, which result in relatively impoverished ecosystem development. This may be a consequence of either natural or anthropogenic stressors. If a regime of environmental stress intensifies, the resulting responses include a degradation of the structure and function of affected ecosystems and of ecological integrity more generally. In contrast, a relaxation of environmental stress allows some degree of ecosystem recovery.
Dominic Moran and Jorie Knook
Climate change is already having a significant impact on agriculture through greater weather variability and the increasing frequency of extreme events. International policy is rightly focused on adapting and transforming agricultural and food production systems to reduce vulnerability. But agriculture also has a role in terms of climate change mitigation. The agricultural sector accounts for approximately a third of global anthropogenic greenhouse gas emissions, including related emissions from land-use change and deforestation. Farmers and land managers have a significant role to play because emissions reduction measures can be taken to increase soil carbon sequestration, manage fertilizer application, and improve ruminant nutrition and waste. There is also potential to improve overall productivity in some systems, thereby reducing emissions per unit of product. The global significance of such actions should not be underestimated. Existing research shows that some of these measures are low cost relative to the costs of reducing emissions in other sectors such as energy or heavy industry. Some measures are apparently cost-negative, or win–win, in that they have the potential to reduce emissions and save production costs. However, the mitigation potential is also hindered by the biophysical complexity of agricultural systems and by institutional and behavioral barriers limiting the adoption of these measures in developed and developing countries. These barriers include the lack of formal agreement on how agricultural mitigation should be treated in national obligations, commitments, or targets, and uncertainty over the policy incentives that can be deployed in different farming systems and along food chains beyond the farm gate. These challenges also overlap with growing concern about global food security, which highlights additional stressors, including demographic change, natural resource scarcity, and economic convergence in consumption preferences, particularly for livestock products.
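The productivity point above is simple intensity arithmetic: if output rises while absolute emissions stay flat, emissions per unit of product fall. A toy illustration, with all figures hypothetical:

```python
# Emissions intensity = total emissions / total output.
# Hypothetical production system: yield improves, emissions are unchanged.
def emissions_intensity(total_emissions_t_co2e, output_tonnes):
    return total_emissions_t_co2e / output_tonnes

before = emissions_intensity(1000.0, 500.0)  # 2.0 t CO2e per tonne of product
after = emissions_intensity(1000.0, 650.0)   # same emissions, 30% more output
print(f"intensity falls by {1 - after / before:.0%}")  # prints 23%
```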
The focus on reducing emissions through modified food consumption and reduced waste is a recent agenda that is proving more controversial than dealing with emissions related to production.
Leon C. Braat
The concept of ecosystem services considers the usefulness of nature for human society. The economic importance of nature was described and analyzed in the 18th century, but the term ecosystem services was introduced only in 1981. Since then it has spurred an increasing number of academic publications, international research projects, and policy studies. Now a subject of intense debate in the global scientific community, from the natural to social science domains, it is also used, developed, and customized in policy arenas and considered, if in a still somewhat skeptical and apprehensive way, in the “practice” domain—by nature management agencies, farmers, foresters, and corporate business. This process of bridging evident gaps between ecology and economics, and between nature conservation and economic development, has also been felt in the political arena, including in the United Nations and the European Union (which have placed it at the center of their nature conservation and sustainable use strategies).
The concept involves the utilitarian framing of those functions of nature that are used by humans and considered beneficial to society as economic and social services. In this light, for example, the disappearance of biodiversity directly affects ecosystem functions that underpin critical services for human well-being. More generally, the concept can be defined in this manner: Ecosystem services are the direct and indirect contributions of ecosystems, in interaction with contributions from human society, to human well-being.
The concept underpins four major discussions: (1) Academic: the ecological versus the economic dimensions of the goods and services that flow from ecosystems to the human economy; the challenge of integrating concepts and models across this paradigmatic divide; (2) Social: the risks versus benefits of bringing the utilitarian argument into political debates about nature conservation (Are ecosystem services good or bad for biodiversity and vice versa?); (3) Policy and planning: how to value the benefits from natural capital and ecosystem services (Will this improve decision-making on topics ranging from poverty alleviation via subsidies to farmers to planning of grey with green infrastructure to combining economic growth with nature conservation?); and (4) Practice: Can revenue come from smart management and sustainable use of ecosystems? Are there markets to be discovered and can businesses be created? How do taxes figure in an ecosystem-based economy? The outcomes of these discussions will both help to shape policy and planning of economies at global, national, and regional scales and contribute to the long-term survival and well-being of humanity.
Elisabet Lindgren and Thomas Elmqvist
Ecosystem services refer to the benefits for human societies and human well-being obtained from ecosystems. Research on the health effects of ecosystem services has until recently focused mostly on the beneficial effects on physical and mental health of spending time in nature or having access to urban green space. However, nearly all of the different ecosystem services may have impacts on health, either directly or indirectly. Ecosystem services can be divided into provisioning services that provide food and water; regulating services that provide, for example, clean air, moderate extreme events, and regulate the local climate; supporting services that help maintain biodiversity and infectious disease control; and cultural services.
With a rapidly growing global population, the demand for food and water will increase. Knowledge about ecosystems will provide opportunities for sustainable agriculture production in both terrestrial and marine environments. Diarrheal diseases and associated childhood deaths are strongly linked to poor water quality, sanitation, and hygiene. Even though improvements are being made, nearly 750 million people still lack access to reliable water sources. Ecosystems such as forests, wetlands, and lakes capture, filter, and store water used for drinking, irrigation, and other human purposes. Wetlands also store and treat solid waste and wastewater, and such ecosystem services could become of increasing use for sustainable development.
Ecosystems contribute to local climate regulation and are of importance for climate change mitigation and adaptation. Coastal ecosystems, such as mangrove and coral reefs, act as natural barriers against storm surges and flooding. Flooding is associated with increased risk of deaths, epidemic outbreaks, and negative health impacts from destroyed infrastructure. Vegetation reduces the risk of flooding, also in cities, by increasing permeability and reducing surface runoff following precipitation events.
The urban heat island effect raises city-center temperatures during heatwaves. The elderly, people with chronic cardiovascular and respiratory diseases, and outdoor workers in cities where temperatures soar during heatwaves are particularly vulnerable to heat. Vegetation, and especially trees, helps to reduce temperatures in several ways, notably through shading and evapotranspiration. Air pollution increases mortality and morbidity risks during heatwaves. Vegetation has also been shown to contribute to improved air quality by, depending on plant species, filtering out gases and airborne particulates. Greenery also has a noise-reducing effect, thereby decreasing noise-related illnesses and annoyances. Biological control uses knowledge of ecosystems and biodiversity to help control human and animal diseases.
Natural surroundings and urban parks and gardens have direct beneficial effects on people’s physical and mental health and well-being. Increased physical activities have well-known health benefits. Spending time in natural environments has also been linked to aesthetic benefits, life enrichments, social cohesion, and spiritual experience. Even living close to or with a view of nature has been shown to reduce stress and increase a sense of well-being.
Giles Jackson and Megan Epler Wood
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article.
Ecotourism is an evolving field that originated in the 1980s, when leading conservationists explored and wrote seminal papers on how tourism could contribute to the conservation of natural areas. Hector Ceballos Lascurain coined the first definition, and the International Union for Conservation of Nature, the World Wildlife Fund, Conservation International, and The Nature Conservancy all undertook research and documentation of the benefits and potential risks of ecotourism in the 1990s. The International Ecotourism Society, founded in 1990, brought together conservation organizations and businesses to create the first definition that was globally accepted in short form: Responsible travel to natural areas that conserves the environment and sustains the well-being of local people.
Small-group tour operators flourished during the 1990s, bringing travelers to a growing number of natural areas worldwide, together with top guiding, high-caliber interpretation, and strong ethical contributions to local well-being. Many important micro, small, and medium-sized enterprises were founded in high-biodiversity regions of Latin America, Asia, Africa, Antarctica, Australia, and throughout the Pacific Islands and the Caribbean, offering life-changing experiences while helping build conservation economies and inspiring positive action.
In 2015, nature-based tourism in protected areas alone was estimated to have a worldwide economic value of hundreds of billions of dollars annually, driven by the growing need of a rapidly urbanizing world to experience and reconnect with wild nature. However, this growth has not resulted in growing budgets to safeguard and manage natural areas, which are increasingly under threat. Scientific concerns that poor business practices under the guise of ecotourism might irreversibly damage fragile natural areas have led the conservation community to de-emphasize ecotourism as a conservation tool in favor of business certification. But these efforts have reached only a small percentage of the corporate sector of the eight-trillion-dollar global tourism industry.
Although the net economic, social, and environmental contributions of ecotourism have not been fully accounted for, the research to date has confirmed the conservation value of ecotourism—among the first examples of social enterprise. One well-documented case is Wilderness Safaris, an $89 million company operating in 58 destinations in Southern Africa in 2015, which reinvests at least 5% of its gross profit (before taxation and depreciation) to help protect the natural assets and support local communities on which the business depends. This example suggests that ecotourism can yield benefits for the conservation of biodiversity and can benefit local communities on a large scale. To increase ecotourism’s role in sustainable development, more businesses will need to scale up, and government management of tourism will require improved impact measurements, updated regulatory strategies, and effective policy mechanisms to garner a greater portion of tourism revenue.
The emergence of the environment as a security imperative is something that could have been avoided. Early indications showed that if governments did not pay attention to critical environmental issues, these would move up the security agenda. As far back as the Club of Rome’s 1972 report, Limits to Growth, variables highlighted for policy makers included world population, industrialization, pollution, food production, and resource depletion, all of which affect how we live on this planet.
The term environmental security did not come into general use until the 2000s. It had its first substantive framing in 1977, with Lester Brown’s Worldwatch Paper 14, “Redefining Security.” Brown argued that the traditional view of national security was based on the “assumption that the principal threat to security comes from other nations.” He went on to argue that future security “may now arise less from the relationship of nation to nation and more from the relationship of man to nature.”
Of the major documents to come out of the 1992 Earth Summit, the Rio Declaration on Environment and Development probably marks the first time governments tried to frame environmental security. Principle 2 says: “States have, in accordance with the Charter of the United Nations and the principles of international law, the sovereign right to exploit their own resources pursuant to their own environmental and developmental policies, and the responsibility to ensure that activities within their jurisdiction or control do not cause damage to the environment of other States or of areas beyond the limits of national jurisdiction.”
In 1994, the UN Development Programme divided human security into distinct categories, including:
• Economic security (assured and adequate basic incomes).
• Food security (physical and affordable access to food).
• Health security.
• Environmental security (access to safe water, clean air and non-degraded land).
By the time of the World Summit on Sustainable Development, in 2002, water had begun to be identified as a security issue, first at the Rio+5 conference, and as a food security issue at the 1996 FAO Summit. In 2003, UN Secretary General Kofi Annan set up a High-Level Panel on “Threats, Challenges, and Change,” to help the UN prevent and remove threats to peace. It started to lay down new concepts on collective security, identifying six clusters for member states to consider. These included economic and social threats, such as poverty, infectious disease, and environmental degradation.
By 2007, health was being recognized as a part of the environmental security discourse, with World Health Day celebrating “International Health Security (IHS).” In particular, it looked at emerging diseases, economic stability, international crises, humanitarian emergencies, and chemical, radioactive, and biological terror threats. Environmental and climate changes have a growing impact on health. The 2007 Fourth Assessment Report (AR4) of the UN Intergovernmental Panel on Climate Change (IPCC) identified climate security as a key challenge for the 21st century. This was followed up in 2009 by the UCL-Lancet Commission on Managing the Health Effects of Climate Change—linking health and climate change.
In the run-up to Rio+20 and the launch of the Sustainable Development Goals, the issue of the climate-food-water-energy nexus, or rather, inter-linkages, between these issues was highlighted. The dialogue on environmental security has moved from a fringe discussion to being central to our political discourse—this is because of the lack of implementation of previous international agreements.
Jean Louis Weber
Environmental accounting is an attempt to broaden the scope of the accounting frameworks used to assess economic performance, to take stock of elements that are not recorded in public or private accounting books. These gaps occur because the various costs of using nature are not captured, being considered, in many cases, as externalities that can be forwarded to others or postponed. Natural resources—sources of positive externalities—are depleted with no recording in national accounts (while companies do record them as depreciation elements). Depletion of renewable resources results in degradation of the environment, which adds to the negative externalities resulting from pollution and the fragmentation of cyclic and living systems. Degradation, or its financial counterpart, depreciation, is not recorded at all. Therefore, the indicators of production, income, consumption, saving, investment, and debt on which many economic decisions are taken are flawed, or at least incomplete and sometimes misleading, when immediate benefits are in fact losses in the long run, when we consume the reproductive functions of our capital. Although national accounting has been an important driving force in change, environmental accounting encompasses all accounting frameworks, including national accounts, financial accounting standards, and accounts established to assess the costs and benefits of plans and projects.
There are several approaches to economic environmental accounting at the national level. One aims at calculating genuine economic welfare by taking into account losses from environmental damage caused by economic activity and gains from unrecorded services provided by nature. Here, particular attention is given to the calculation of a “Green GDP,” an “Adjusted National Income,” and/or “Genuine Savings,” as well as to natural asset values and depletion. A different view considers the damage caused to renewable natural capital and the resulting maintenance and restoration costs. Besides approaches based on benefits and costs, more descriptive accounts in physical units are produced with the purpose of assessing resource-use efficiency. With regard to natural assets, the focus can be on assets directly used by the economy or, more broadly, on ecosystem capacity to deliver services, ecosystem resilience, and its possible degradation. These different approaches are not necessarily contradictory, although controversies can be noted in the literature.
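The “Genuine Savings” idea can be made concrete with a toy calculation loosely patterned on the World Bank’s adjusted net savings indicator; the figures and the exact set of adjustment terms below are illustrative assumptions, not part of any official standard:

```python
# Toy "genuine savings" adjustment (all figures hypothetical, in billions):
# start from gross national savings, subtract consumption of fixed capital,
# resource depletion, and pollution damage, and add investment in human
# capital (education expenditure), as in adjusted-net-savings-style measures.
def genuine_savings(gross_savings, fixed_capital_consumption,
                    education_expenditure, resource_depletion,
                    pollution_damage):
    return (gross_savings
            - fixed_capital_consumption
            + education_expenditure
            - resource_depletion
            - pollution_damage)

result = genuine_savings(200.0, 60.0, 25.0, 40.0, 15.0)
print(result)  # 110.0: positive, so total wealth grows under this measure
```

A negative result would signal that apparent income growth is being financed by running down natural and produced capital, which is exactly the warning such adjusted indicators are designed to surface.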
The discussion focuses on issues such as the legitimacy of combining values obtained with shadow prices (needed to value elements not priced by the market) with the transaction values recorded in the national accounts, the relative importance of accounts in monetary versus physical units, and, ultimately, the goals of environmental accounting. These goals range from assessing the sustainability of the economy in terms of conservation (or increase) of the net income flow and total economic wealth (the weak sustainability paradigm) to assessing the sustainability of the ecosystem that supports livelihoods and well-being in the broader sense (strong sustainability).
In 2012, the UN Statistical Commission adopted an international statistical standard, the “System of Environmental-Economic Accounting Central Framework” (SEEA CF). The SEEA CF covers only those items for which enough experience exists for implementation by national statistical offices. A second volume, on SEEA Experimental Ecosystem Accounting (SEEA-EEA), was added in 2013 to supplement the SEEA CF with a research agenda and the development of tests. Experimental applications of the SEEA-EEA are under way at the initiative of the World Bank (WAVES), the UN Environment Programme (VANTAGE, ProEcoServ), and the UN Convention on Biological Diversity (CBD) (SEEA-Ecosystem Natural Capital Accounts-Quick Start Package [ENCA-QSP]).
Besides the SEEA and in relation to it, other environmental accounting frameworks have been developed for specific purposes. These include material flow accounting (MFA), now a regular framework at the Organisation for Economic Co-operation and Development (OECD) for reporting on the Green Growth strategy, and the Intergovernmental Panel on Climate Change (IPCC) guidelines for the UN Framework Convention on Climate Change (UNFCCC), used to report greenhouse gas emissions and carbon sequestration. The Ecological Footprint accounts, which aim at raising awareness that our resource use exceeds what the planet can deliver, can be considered as well, as can the Millennium Ecosystem Assessment of 2005, which presents tables and an overall assessment in an accounting style. Environmental accounting is also a subject of interest for business, both as a way to assess impacts—the costs and benefits of projects—and as a basis for new accounting standards to assess long-term performance and risks.
Mark V. Barrow
The prospect of extinction, the complete loss of a species or other group of organisms, has long provoked strong responses. Until the turn of the 18th century, deeply held and widely shared beliefs about the order of nature led to a firm rejection of the possibility that species could entirely vanish. During the 19th century, however, resistance to the idea of extinction gave way to widespread acceptance following the discovery of the fossil remains of numerous previously unknown forms and direct experience with contemporary human-driven decline and the destruction of several species. In an effort to stem continued loss, at the turn of the 19th century, naturalists, conservationists, and sportsmen developed arguments for preventing extinction, created wildlife conservation organizations, lobbied for early protective laws and treaties, pushed for the first government-sponsored parks and refuges, and experimented with captive breeding. In the first half of the 20th century, scientists began systematically gathering more data about the problem through global inventories of endangered species and the first life-history and ecological studies of those species.
The second half of the 20th and the beginning of the 21st centuries have been characterized both by accelerating threats to the world’s biota and greater attention to the problem of extinction. Powerful new laws, like the U.S. Endangered Species Act of 1973, have been enacted and numerous international agreements negotiated in an attempt to address the issue. Despite considerable effort, scientists remain fearful that the current rate of species loss is similar to that experienced during the five great mass extinction events identified in the fossil record, leading to declarations that the world is facing a biodiversity crisis. Responding to this crisis, often referred to as the sixth extinction, scientists have launched a new interdisciplinary, mission-oriented discipline, conservation biology, that seeks not just to understand but also to reverse biota loss. Scientists and conservationists have also developed controversial new approaches to the growing problem of extinction: rewilding, which involves establishing expansive core reserves that are connected with migratory corridors and that include populations of apex predators, and de-extinction, which uses genetic engineering techniques in a bid to resurrect lost species. Even with the development of new knowledge and new tools that seek to reverse large-scale species decline, a new and particularly imposing danger, climate change, looms on the horizon, threatening to undermine those efforts.
David E. Clay, Sharon A. Clay, Thomas DeSutter, and Cheryl Reese
Since the discovery that food security could be improved by pushing seeds into the soil and later harvesting a desirable crop, agriculture and agronomy have gone through cycles of discovery, implementation, and innovation. Discoveries have produced predicted and unpredicted impacts on the production and consumption of locally produced foods. Changes in technology, such as the development of the self-cleaning steel plow in the 19th century, provided a critical tool needed to cultivate and seed annual crops in the Great Plains of North America. However, plowing the Great Plains would not have been possible without the domestication of plants and animals and the invention of the yoke and harness. Plowing the prairies was accompanied by extensive soil nutrient mining, a rapid loss of soil carbon, and increased wind and water erosion. More recently, the development of genetically modified organisms (GMOs) and no-tillage planters has contributed to increased adoption of conservation tillage, which is less damaging to the soil. The ultimate impact of climate change on agronomic practices in the North American Great Plains is unknown. However, projected increasing temperatures and decreased rainfall in the southern Great Plains (SGP) will likely reduce agricultural productivity. Different results are likely in the northern Great Plains (NGP), where higher temperatures can lead to increased agricultural intensification, the conversion of grassland to cropland, increased wildlife habitat fragmentation, and increased soil erosion. Precision farming, conservation, cover crops, and the creation of plants better adapted to their local environments can help mitigate these effects. However, changing practices requires that farmers and their advisers understand the limitations of their soils, plants, environment, and production systems. Failure to implement appropriate management practices can result in a rapid decline in soil productivity, diminished water quality, and reduced wildlife habitat.
Fred Mackenzie and Abraham Lerman
The tendency to represent natural processes as cycles—from the Latin cyclus and Greek κυκλος—is undoubtedly rooted in human observation of repeating or periodic phenomena. The oldest notions of the water cycle—water rising from the earth to the air and returning to the earth—are mentioned in the Old Testament and by Greek philosophers of the 900s to the 300s BCE.
The main “bioessential” chemical elements are carbon (C), nitrogen (N), phosphorus (P), oxygen (O), and hydrogen (H). These are represented in the mean composition of aquatic photosynthesizing organisms as the atomic abundance ratio C:N:P = 106:16:1, or as (CH2O)106(NH3)16(H3PO4). In land plants, estimates of mean composition vary from C:N:P = 510:4:1 to 2057:17:1. On land, the photosynthesizing organisms are much more carbon-efficient than those in water, incorporating more carbon atoms for each atom of phosphorus. The bioessential elements are coupled by living organisms in the exogenic cycle, the processes at and near the Earth’s surface, and in the endogenic cycle of processes that include subduction into the Earth’s interior and return to the surface. The main reservoirs of the bioessential elements are very different: although oxygen is the most abundant element in the Earth’s crust, most of it is locked in silicate minerals such as SiO2, and the forms available to biogeochemical cycling are the oxygen in water and, as a product of photosynthesis, the gas O2 in the atmosphere. Carbon resides in the atmospheric reservoir of CO2 gas and dissolved in ocean and fresh waters. The main nitrogen reservoirs are molecular N2 in the atmosphere and oxidized and reduced nitrogen compounds in waters. Phosphorus occurs in the oxidized form of the phosphate ion in crustal minerals, from which it is leached into water.
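The atomic ratio C:N:P = 106:16:1 given above can be converted into an approximate mass ratio, which is the form often used when comparing nutrient budgets. The short sketch below is illustrative only (the function name and the use of standard atomic weights are the author's assumptions, not part of the source text):

```python
# Convert the C:N:P atomic abundance ratio 106:16:1 of aquatic
# photosynthesizers into an approximate mass ratio, using
# standard atomic weights in g/mol.
ATOMIC_WEIGHT = {"C": 12.011, "N": 14.007, "P": 30.974}
ATOMIC_RATIO = {"C": 106, "N": 16, "P": 1}

def cnp_mass_ratio():
    """Return the C:N:P mass ratio normalized so that P = 1."""
    mass = {el: n * ATOMIC_WEIGHT[el] for el, n in ATOMIC_RATIO.items()}
    return {el: m / mass["P"] for el, m in mass.items()}

ratio = cnp_mass_ratio()
# By mass, this works out to roughly 41 g C and 7 g N per 1 g P.
print({el: round(v, 1) for el, v in ratio.items()})
```

The same conversion applied to the land-plant ratios (510:4:1 to 2057:17:1) makes the greater carbon efficiency of terrestrial photosynthesizers immediately visible in mass terms as well.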
The natural cycles of the bioessential elements have been greatly perturbed since the late 1700s by human industrial and agricultural activities, a period known as the Anthropocene epoch. The increase in CO2, CH4, and NOx emissions to the atmosphere from fossil-fuel burning and land-use changes has rapidly and strongly modified the chemical composition of the atmosphere. This change has affected the balance of solar radiation absorbed by the atmosphere—generally known as “climate change”—and the acidity of surface-ocean waters, referred to as “ocean acidification.” CO2 in water is a weak acid that dissolves carbonate minerals, biogenically and inorganically formed in the ocean, and thus modifies the chemical composition of ocean water. Overall, a major anthropogenic perturbation of the biogeochemical cycles has been the increase in atmospheric CO2 at a rate faster than its removal from the atmosphere by plants, dissolution in the ocean, and uptake in mineral weathering.
Precipitation falling onto the land surface in terrestrial ecosystems is transformed into either “green water” or “blue water.” Green water is the portion stored in soil and potentially available for uptake by plants, whereas blue water either runs off into streams and rivers or percolates below the rooting zone into a groundwater aquifer. The principal flow of green water is by evapotranspiration from soil into the atmosphere, whereas blue water moves through the channel system at the land surface or through the pore space of an aquifer. Globally, the flow of green water accounts for about two-thirds of the global flow of all water, green or blue; thus the global flow of green water, most of which is by transpiration, dominates that of blue water. In fact, the global flow of green water by transpiration equals the flow of all the rivers on Earth into the oceans.
At the global scale, evapotranspiration is measured using a combination of ground-, satellite-, and model-based methods implemented over annual or monthly time-periods. Data are examined for self-consistency and compliance with water- and energy-balance constraints. At the catchment scale, average annual evapotranspiration data also must conform to water and energy balance. Application of these two constraints, plus the assumption that evapotranspiration is a homogeneous function of average annual precipitation and the average annual net radiative heat flux from the atmosphere to the land surface, leads to the Budyko model of catchment evapotranspiration. The functional form of this model strongly influences the interrelationship among climate, soil, and vegetation as represented in parametric catchment modeling, a very active area of current research in ecohydrology.
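The Budyko model mentioned above expresses the evaporative fraction E/P as a function of an aridity index, the ratio of potential evapotranspiration to precipitation. One classical closed form of the curve is sketched below; the function name is illustrative, and this particular formula (Budyko's own interpolation) is one of several in use, not necessarily the form intended in this text:

```python
import math

def budyko_evaporative_fraction(phi):
    """Budyko's classical curve: the fraction of precipitation
    returned to the atmosphere as evapotranspiration, E/P, as a
    function of the aridity index phi = PET / P (potential
    evapotranspiration over precipitation)."""
    return math.sqrt(phi * math.tanh(1.0 / phi) * (1.0 - math.exp(-phi)))

# Energy-limited (humid) catchments, phi << 1: E/P ~ phi, so E ~ PET.
# Water-limited (arid) catchments, phi >> 1: E/P -> 1, so E ~ P.
for phi in (0.1, 1.0, 10.0):
    print(phi, round(budyko_evaporative_fraction(phi), 3))
```

The two limits encode exactly the water- and energy-balance constraints described in the text: evapotranspiration can exceed neither the available water (P) nor the available energy (expressed as PET).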
Green water flow leading to transpiration is a complex process, firstly because of the small spatial scale involved, which requires indirect visualization techniques, and secondly because the near-root soil environment, the rhizosphere, is habitat for the soil microbiome, an extraordinarily diverse collection of microbial organisms that influence water uptake through their symbiotic relationship with plant roots. In particular, microbial polysaccharides endow rhizosphere soil with properties that enhance water uptake by plants under drying stress. These properties differ substantially from those of non-rhizosphere soil and are difficult to quantify in soil water flow models. Nonetheless, current modeling efforts based on the Richards equation for water flow in an unsaturated soil can successfully capture the essential features of green water flow in the rhizosphere, as observed using visualization techniques.
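The Richards equation referred to here, in its standard one-dimensional mixed form for vertical unsaturated flow (a textbook statement supplied for orientation, not taken from this text), is:

```latex
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial z}\!\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right]
```

where \(\theta\) is volumetric water content, \(t\) time, \(z\) the vertical coordinate (positive upward), \(h\) the matric pressure head, and \(K(h)\) the unsaturated hydraulic conductivity; the "+1" term represents the gravitational gradient. The difficulty noted in the text lies in supplying \(K(h)\) and the water-retention relation \(\theta(h)\) for rhizosphere soil, whose properties are altered by microbial polysaccharides.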
There is also the yet-unsolved problem of upscaling rhizosphere properties from the small scale typically observed with visualization techniques to that of the rooting zone, where the Richards equation applies; then upscaling from the rooting zone to the catchment scale, where the Budyko model, based only on water- and energy-balance laws, applies but still lacks a clear connection to current soil evaporation models; and finally, upscaling from the catchment to the global scale. This transitioning across a very broad range of spatial scales, from millimeters to kilometers, remains one of the outstanding grand challenges in green water ecohydrology.
Richard G. Lawford and Sushel Unninayar
The global water cycle concept has its roots in the ancient understanding of nature. Indeed, the Greeks and Hebrews documented some of the most important hydrological processes. Furthermore, Africa, Sri Lanka, and China all have archaeological evidence of the sophisticated water management that took place thousands of years ago. During the 20th century, a broader perspective was taken, and the hydrological cycle was used to describe the terrestrial and freshwater components of the global water cycle. Data analysis systems and modeling protocols were developed to provide the information needed to manage water resources efficiently. These advances were helpful in quantifying soil water and the movement of water between stores over land surfaces. Atmospheric inputs to these balances were also monitored, but the measurements were much more reliable over countries with dense networks of precipitation gauges and radiosonde observations.
By the 1960s, early satellites began to provide images that gave a new perception of Earth processes, including a more complete realization that water cycle components and processes were continuous in space and could not be fully understood through analyses partitioned by geopolitical or topographical boundaries. In the 1970s, satellites delivered quantitative radiometric measurements that allowed for the estimation of a number of variables such as precipitation and soil moisture. In the United States, by the late 1970s, plans were made to launch the Earth System Science program, led by the National Aeronautics and Space Administration (NASA). The water component of this program integrated terrestrial and atmospheric components and provided linkages with the oceanic component so that a truly global perspective of the water cycle could be developed. At the same time, the role of regional and local hydrological processes within the integrated “global water cycle” began to be understood.
Benefits of this approach were immediate. The connections between the water and energy cycles gave rise to the Global Energy and Water Cycle Experiment (GEWEX) as part of the World Climate Research Programme (WCRP). This integrated approach has improved our understanding of the coupled global water/energy system, leading to improved prediction models and more accurate assessments of climate variability and change. The global water cycle has also provided incentives and a framework for further improvements in the measurement of variables such as soil moisture, evapotranspiration, and precipitation. In the past two decades, groundwater has been added to the suite of water cycle variables that can be measured from space. New studies are testing innovative space-based technologies for high-resolution surface water level measurements. While many benefits have followed from the application of the global water cycle concept, its potential is still being developed. Increasingly, the global water cycle is assisting in understanding broad linkages with other global biogeochemical cycles, such as the nitrogen and carbon cycles. Applications of this concept to emerging program priorities, including the Sustainable Development Goals (SDGs) and the Water-Energy-Food (W-E-F) Nexus, are also yielding societal benefits.
Christopher Morgan, Shannon Tushingham, Raven Garvey, Loukas Barton, and Robert Bettinger
At the global scale, conceptions of hunter-gatherer economies have changed considerably over time, and these changes were strongly affected by larger trends in Western history, philosophy, science, and culture. Seen as either “savage” or “noble” at the dawn of the Enlightenment, hunter-gatherers have been regarded as everything from holdovers from a basal level of human development, to affluent, ecologically informed foragers, and ultimately to an extremely diverse economic orientation entailing the fullest scope of human behavioral diversity. Consequently, the only thing linking studies of hunter-gatherers over time is the definition of the term: people whose economic mode of production centers on wild resources. When hunter-gatherers are considered outside the general realm of their shared subsistence economies, it is clear that their behavioral diversity rivals or exceeds that of other economic orientations. Hunter-gatherer behaviors range along a multivariate continuum: from a focus on mainly large fauna to broad, wild plant-based diets similar to those of agriculturalists; from extremely mobile to sedentary; from reliance on simple, generalized technologies to very specialized ones; from egalitarian sharing economies to privatized, competitive ones; and from nuclear-family or band-level to centralized and hierarchical decision-making. It is clear, however, that hunting and gathering modes of production had to have preceded, and thus given rise to, agricultural ones. What research into the development of human economies shows is that transitions from one type of hunting and gathering to another, or alternatively to agricultural modes of production, can take many different evolutionary pathways.
The important thing to recognize is that behaviors that were essential to the development of agriculture—landscape modification, intensive labor practices, the division of labor, and the production, storage, and redistribution of surplus—were present in a range of hunter-gatherer societies beginning at least as early as the Late Pleistocene in Africa, Europe, Asia, and the Americas. Whether these behaviors eventually led to the development of agriculture depended in part on the emergence of a less variable, CO2-rich climatic regime and atmosphere during the Holocene, but also on a change in the social relations of production that allowed the hoarding of privatized resources. Ethnographic and archaeological research from the 20th and 21st centuries shows that modern and ancient peoples adopt, or even revert to, hunting and gathering after having engaged in agricultural or industrial pursuits when conditions allow, and that macroeconomic perspectives often mask considerable intragroup diversity in economic decision making: the pursuits and goals of women versus men, and young versus old, within groups are often quite different or even at odds with one another, yet often articulate to form cohesive and adaptive economic wholes. The future of hunter-gatherer research will be tested by the continued decline of traditional hunting and gathering but will also benefit from observation of people who revert to, or supplement their income with, wild resources. It will also draw heavily from archaeology, which holds considerable potential to document and explain the full range of human behavioral diversity, hunter-gatherer or otherwise, over the longest of timeframes and the broadest geographic scope.
Vincent Moreau and Guillaume Massard
The concept of metabolism takes root in biology and ecology as a systematic way to account for material flows in organisms and ecosystems. Early applications of the concept attempted to quantify the amount of water and food the human body processes to live and sustain itself. Similarly, ecologists have long studied the metabolism of critical substances and nutrients in ecological succession towards climax. With industrialization, the material and energy requirements of modern economic activities have grown exponentially, together with emissions to the air, water and soil. From an analogy with ecosystems, the concept of metabolism grew into an analytical methodology for economic systems.
Research in the field of material flow analysis has developed approaches to modeling economic systems by assessing the stocks and flows of substances and materials for systems defined in space and time. Material flow analysis encompasses several methods: industrial and urban metabolism, input–output analysis, economy-wide material flow accounting, socioeconomic metabolism, and, more recently, material flow cost accounting. Each method has specific scales, reference substances such as metals, and indicators such as concentration. A material flow analysis study usually consists of four consecutive steps: (a) system definition, (b) data acquisition, (c) calculation, and (d) interpretation. The law of conservation of mass underlies every application, implying that all material flows, as well as stocks, must be accounted for.
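The conservation-of-mass constraint that underlies the calculation step can be illustrated with a minimal sketch. Everything below (the function name, the toy copper figures, the region) is a hypothetical example constructed for illustration, not data from the source:

```python
def mass_balance_residual(inflows, outflows, stock_change):
    """Conservation of mass for one process in a material flow
    analysis: inputs = outputs + net addition to stock.
    Returns the residual, which should be ~0 for a consistent
    account; a nonzero residual flags missing or mismeasured flows."""
    return sum(inflows.values()) - sum(outflows.values()) - stock_change

# Toy copper balance for a hypothetical region, in kilotonnes per year.
inflows = {"ore_concentrate": 120.0, "imports": 30.0, "scrap_recovered": 25.0}
outflows = {"products_exported": 60.0, "landfill_losses": 15.0}
net_stock_addition = 100.0  # copper accumulating in buildings, grids, goods

residual = mass_balance_residual(inflows, outflows, net_stock_addition)
print(residual)  # 0.0 -> this toy account closes
```

In practice this check is applied to every process in the defined system, so that unclosed balances surface during step (c) rather than silently distorting the interpretation in step (d).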
In the early 21st century, material depletion, accumulation, and recycling are well-established applications of material flow analysis. Diagnostics and forecasts, as well as historical or backcast analyses, are ideally performed within a material flow analysis to identify shifts in material consumption across product life cycles or in physical accounting, and to evaluate the material and energy performance of specific systems.
In practice, material flow analysis supports policy and decision making in urban planning, energy planning, economic and environmental performance, the development of industrial symbiosis and eco-industrial parks, the closing of material loops in a circular economy, pollution remediation and control, and material and energy supply security. Although material flow analysis assesses the amount and fate of materials and energy rather than their environmental or human health impacts, a tacit assumption is that reduced material throughputs limit such impacts.
The Quaternary period of Earth history, which commenced ca. 2.6 Ma, is noted for a series of dramatic shifts in global climate between long, cool (“icehouse”) and short, temperate (“greenhouse”) stages. The period also coincides with the extinction of the later australopithecine hominins and the evolution of modern Homo sapiens.
Wide recognition of a fourth, Quaternary, order of geologic time emerged in Europe between ca. 1760 and 1830 and became closely identified with the concept of an ice age. This most recent episode in Earth history is also the best preserved in stratigraphic and landscape records. Indeed, much of its character and many of its processes continue in the present, which prompted early geologists’ recognition of the concept of uniformitarianism—the present is the key to the past.
Quaternary time was quickly divided into a dominant Pleistocene (“most recent”) epoch, characterized by cyclical growth and decay of major continental ice sheets and peripheral permafrost. Disappearance of most of these ice sheets, except in Antarctica and Greenland today, ushered in the Holocene (“wholly modern”) epoch, once thought to terminate the Ice Age but now seen as the current interglacial or temperate stage, commencing ca. 11.7 ka ago. Covering 30–50% of Earth’s land surface at their maxima, ice sheets and permafrost squeezed remaining biomes into a narrower circum-equatorial zone, where research indicated the former occurrence of pluvial and desiccation events. Early efforts to correlate them with mid-high latitude glacials and interglacials revealed the complex and often asynchronous Pleistocene record.
Nineteenth-century recognition of just four glaciations reflected a reliance on geomorphology and short terrestrial stratigraphic records, concentrated in Northern Hemisphere mid- and high latitudes, and persisted until the 1970s. Correlation of δ18O (oxygen isotope) signals from seafloor sediments (from ocean drilling programs after the 1960s) with polar ice-core signals from the 1980s onward has revolutionized our understanding of the Quaternary, facilitating a sophisticated, time-constrained record of events and environmental reconstructions from regional to global scales. Records from oceans and ice sheets, some spanning 10^5–10^6 years, are augmented by similarly long records from loess, lake sediments, and speleothems (cave deposits). Their collective value is enhanced by innovative analytical and dating tools.
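The oxygen-isotope signal used in these correlations is conventionally reported in delta notation relative to a reference standard; this is the standard definition, supplied for orientation rather than taken from this text:

```latex
\delta^{18}\mathrm{O}
  = \left(
      \frac{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{sample}}}
           {\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{standard}}}
      - 1
    \right) \times 1000
```

expressed in per mil (‰). In seafloor sediments, heavier (more positive) δ18O in foraminiferal calcite records the preferential storage of the light isotope 16O in growing ice sheets, which is why the marine isotope record serves as a proxy for global ice volume.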
Over 100 Marine Isotope Stages (MIS) are now recognized in the Quaternary, with dramatic climate shifts at decadal and centennial timescales; the magnitude of the 22 MIS of the past 900,000 years is considered to reflect significant ice-sheet accumulation and decay. Each cycle between temperate and cool conditions (odd- and even-numbered MIS, respectively) is time-asymmetric, with progressive cooling over 80,000 to 100,000 years followed by an abrupt termination and then a rapid return to temperate conditions lasting a few thousand years.
The search for causes of Quaternary climate and environmental change embraces all strands of Earth System Science. Strong correlation between orbital forcing and major climate changes (summarized as the Milankovitch mechanism) is displacing earlier emphasis on radiative (direct solar) forcing, but uncertainty remains over how the orbital signal is amplified or modulated. Tectonic forcing (ocean-continent distributions, tectonic uplift, and volcanic outgassing), atmosphere-biogeochemical and greenhouse gas exchange, ocean-land surface albedo and deep- and surface-ocean circulation are all contenders and important agents in their own right.
Modern understanding of Quaternary environments and processes feeds an exponential growth of multidisciplinary research, numerical modeling, and applications. Climate modeling exploits mutual benefits to science and society of “hindcasting,” using paleoclimate data to aid understanding of the past and increasing confidence in modeling forecasts. Pursuit of more detailed and sophisticated understanding of ocean-atmosphere-cryosphere-biosphere interaction proceeds apace.
The Quaternary is also the stage on which human evolution plays out. The essential distinction between natural climate variability and human forcing is now recognized as marking, in present time, a potential new Anthropocene epoch. The Quaternary past and present are major keys to its future.
Resilience thinking in relation to the environment has emerged as a lens of inquiry that serves as a platform for interdisciplinary dialogue and collaboration. Resilience is about cultivating the capacity to sustain development in the face of both expected and surprising change, recognizing diverse pathways of development and the potential thresholds between them. The evolution of resilience thinking is coupled to that of social-ecological systems and a truly intertwined human-environment planet. The focus is on resilience as the persistence, adaptability, and transformability of complex adaptive social-ecological systems, clarifying the dynamic and forward-looking nature of the concept. Resilience thinking emphasizes that social-ecological systems, from the individual, to the community, to society as a whole, are embedded in the biosphere. The biosphere connection is an essential observation if sustainability is to be taken seriously. In the continuous advancement of resilience thinking, efforts are aimed at capturing the resilience of social-ecological systems and at finding ways for people and institutions to govern social-ecological dynamics for improved human well-being, from the local level, across levels and scales, to the global. Consequently, in resilience thinking, development issues for human well-being, for people and planet, are framed in a context of understanding and governing complex social-ecological dynamics for sustainability as part of a dynamic biosphere.