Juha Merilä and Ary A. Hoffmann
Changing climatic conditions have both direct and indirect influences on abiotic and biotic processes and represent a potent source of novel selection pressures for adaptive evolution. In addition, climate change can impact evolution by altering patterns of hybridization, changing population size, and altering patterns of gene flow in landscapes. Given that scientific evidence for rapid evolutionary adaptation to spatial variation in abiotic and biotic environmental conditions—analogous to the changes now being brought about by climate change—is ubiquitous, ongoing climate change is expected to have large and widespread evolutionary impacts on wild populations. However, phenotypic plasticity, migration, and various kinds of genetic and ecological constraints can prevent organisms from evolving much in response to climate change, and generalizations about the rate and magnitude of expected responses are difficult to make for a number of reasons.
First, the study of microevolutionary responses to climate change is a young field of investigation. While interest in the evolutionary impacts of climate change goes back to early macroevolutionary (paleontological) studies of prehistoric climate changes, microevolutionary studies started only in the late 1980s. The discipline gained real momentum in the 2000s, after the concept of climate change became of interest to the general public and funding organizations. As such, no general conclusions have yet emerged. Second, the complexity of the biotic changes triggered by novel climatic conditions makes predictions about the patterns and strength of natural selection difficult. Third, predictions are also complicated because the expression of genetic variability in traits of ecological importance varies with environmental conditions, affecting expected responses to climate-mediated selection.
There are now several examples where organisms have evolved in response to selection pressures associated with climate change, including changes in the timing of life history events and in the ability to tolerate abiotic and biotic stresses arising from climate change. However, there are also many examples where expected selection responses have not been detected. This may be partly explainable by methodological difficulties involved with detecting genetic changes, but also by various processes constraining evolution.
There are concerns that the rates of environmental changes are too fast to allow many, especially large and long-lived, organisms to maintain adaptedness. Theoretical studies suggest that maximal sustainable rates of evolutionary change are on the order of 0.1 haldanes (i.e., phenotypic standard deviations per generation) or less, whereas the rates expected under current climate change projections will often require faster adaptation. Hence, widespread maladaptation and extinctions are expected. These concerns are compounded by the expectation that the amount of genetic variation harbored by populations and available for selection will be reduced by habitat destruction and fragmentation caused by human activities, although in some cases this may be countered by hybridization. Rates of adaptation will also depend on patterns of gene flow and the steepness of climatic gradients. Theoretical studies also suggest that phenotypic plasticity (i.e., nongenetic phenotypic changes) can affect evolutionary genetic changes, but relevant empirical evidence is still scarce. While all of these factors point to a high level of uncertainty around evolutionary changes, it is nevertheless important to consider evolutionary resilience in enhancing the ability of organisms to adapt to climate change.
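The ~0.1 haldane ceiling mentioned above can be made concrete with a short sketch. A rate in haldanes is the change in a trait mean, measured in pooled phenotypic standard deviations, per generation; the measurements and the averaged-SD pooling below are illustrative assumptions, not data from any study cited here.

```python
# Rate of phenotypic evolution in haldanes: change in a trait mean, in units
# of pooled phenotypic standard deviations, per generation.
from statistics import mean, stdev

def rate_in_haldanes(sample_t1, sample_t2, generations):
    """(mean2 - mean1) / (pooled SD * generations); SDs pooled by simple average."""
    pooled_sd = (stdev(sample_t1) + stdev(sample_t2)) / 2
    return (mean(sample_t2) - mean(sample_t1)) / (pooled_sd * generations)

# Hypothetical body-size measurements taken 10 generations apart
before = [10.0, 10.4, 9.8, 10.2, 9.6, 10.1]
after_ = [10.3, 10.6, 10.1, 10.5, 9.9, 10.4]

h = rate_in_haldanes(before, after_, generations=10)
print(f"rate = {h:.3f} haldanes")
print("exceeds ~0.1 sustainable ceiling" if abs(h) > 0.1 else "within ~0.1 ceiling")
```

Under these invented numbers the rate works out to roughly 0.1 haldanes, i.e., right at the theoretical limit of sustainable evolutionary change cited above.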
Mark V. Barrow
The prospect of extinction, the complete loss of a species or other group of organisms, has long provoked strong responses. Until the turn of the 19th century, deeply held and widely shared beliefs about the order of nature led to a firm rejection of the possibility that species could entirely vanish. During the 19th century, however, resistance to the idea of extinction gave way to widespread acceptance following the discovery of the fossil remains of numerous previously unknown forms and direct experience with the contemporary human-driven decline and destruction of several species. In an effort to stem continued losses, at the turn of the 20th century naturalists, conservationists, and sportsmen developed arguments for preventing extinction, created wildlife conservation organizations, lobbied for early protective laws and treaties, pushed for the first government-sponsored parks and refuges, and experimented with captive breeding. In the first half of the 20th century, scientists began systematically gathering more data about the problem through global inventories of endangered species and the first life-history and ecological studies of those species.
The second half of the 20th and the beginning of the 21st centuries have been characterized both by accelerating threats to the world’s biota and greater attention to the problem of extinction. Powerful new laws, like the U.S. Endangered Species Act of 1973, have been enacted and numerous international agreements negotiated in an attempt to address the issue. Despite considerable effort, scientists remain fearful that the current rate of species loss is similar to that experienced during the five great mass extinction events identified in the fossil record, leading to declarations that the world is facing a biodiversity crisis. Responding to this crisis, often referred to as the sixth extinction, scientists have launched a new interdisciplinary, mission-oriented discipline, conservation biology, that seeks not just to understand but also to reverse biota loss. Scientists and conservationists have also developed controversial new approaches to the growing problem of extinction: rewilding, which involves establishing expansive core reserves that are connected with migratory corridors and that include populations of apex predators, and de-extinction, which uses genetic engineering techniques in a bid to resurrect lost species. Even with the development of new knowledge and new tools that seek to reverse large-scale species decline, a new and particularly imposing danger, climate change, looms on the horizon, threatening to undermine those efforts.
Fisheries science emerged in the mid-19th century, when scientists volunteered to conduct conservation-related investigations of commercially important aquatic species for the governments of North Atlantic nations. Scientists also promoted oyster culture and fish hatcheries to sustain the aquatic harvests. Fisheries science fully professionalized with specialized graduate training in the 1920s.
The earliest stage, involving inventory science, trawling surveys, and natural history studies, continued to dominate into the 1930s within the European colonial diaspora. Meanwhile, scientists in Scandinavian countries, Britain, Germany, the United States, and Japan began developing quantitative fisheries science after 1900, incorporating hydrography, age-determination studies, and population dynamics. Norwegian biologist Johan Hjort’s 1914 finding, that the size of a large “year class” of juvenile fish is unrelated to the size of the spawning population, created the central foundation and conundrum of later fisheries science. By the 1920s, fisheries scientists in Europe and America were striving to develop a theory of fishing. They attempted to develop predictive models that incorporated statistical and quantitative analysis of past fishing success, as well as quantitative values reflecting a species’ population demographics, as a basis for predicting future catches and managing fisheries for sustainability. This research was supported by international scientific organizations such as the International Council for the Exploration of the Sea (ICES), the International Pacific Halibut Commission (IPHC), and the United Nations’ Food and Agriculture Organization (FAO).
Both nationally and internationally, political entanglement was an inevitable feature of fisheries science. Beyond substituting their science for fishers’ traditional and practical knowledge, many postwar fisheries scientists also brought progressive ideals into fisheries management, advocating fishing for a maximum sustainable yield. This in turn made it possible for governments, economists, and even scientists to use this nebulous target to project preferred social, political, and economic outcomes, while altogether discarding any practical conservation measures to rein in globalized postwar industrialized fishing. These ideals were also exported to nascent postwar fisheries science programs in developing Pacific and Indian Ocean nations and in Eastern Europe and Turkey.
The vision of mid-century triumphalist science, that industrial fisheries could be scientifically managed like any other industrial enterprise, was thwarted by commercial fish stock collapses, beginning slowly in the 1950s and accelerating after 1970, including the massive northern cod crisis of the early 1990s. In the 1980s scientists, aided by more powerful computers, attempted multi-species models to understand the different impacts of a fishery on various species. Daniel Pauly led the way with multi-species models for tropical fisheries, where the need for such was most urgent, and pioneered the global database FishBase, using fishing data collected by the FAO and national bodies. In Canada the cod crisis inspired Ransom Myers to use large databases for fisheries analysis to show the role of overfishing in causing that crisis. After 1980 population ecologists also demonstrated the importance of life history data for understanding fish species’ responses to fishery-induced population and environmental change.
With fishing continuing to shrink many global commercial stocks, scientists have demonstrated how different measures can manage fisheries for species with different life-history profiles. Aside from the need for effective scientific monitoring, the biggest ongoing challenge remains getting politicians, governments, fishing industry members, and other stakeholders to commit to scientifically recommended long-term conservation measures.
David E. Clay, Sharon A. Clay, Thomas DeSutter, and Cheryl Reese
Since the discovery that food security could be improved by pushing seeds into the soil and later harvesting a desirable crop, agriculture and agronomy have gone through cycles of discovery, implementation, and innovation. Discoveries have produced predicted and unpredicted impacts on the production and consumption of locally produced foods. Changes in technology, such as the development of the self-scouring steel plow in the 19th century, provided a critical tool needed to cultivate and seed annual crops in the Great Plains of North America. However, plowing the Great Plains would not have been possible without the domestication of plants and animals and the discovery of the yoke and harness. Associated with plowing the prairies were extensive soil nutrient mining, a rapid loss of soil carbon, and increased wind and water erosion. More recently, the development of genetically modified organisms (GMOs) and no-tillage planters has contributed to increased adoption of conservation tillage, which is less damaging to the soil. In the future, the ultimate impact of climate change on agronomic practices in the North American Great Plains is unknown. However, projected increasing temperatures and decreased rainfall in the southern Great Plains (SGP) will likely reduce agricultural productivity. Different results are likely in the northern Great Plains (NGP), where higher temperatures can lead to increased agricultural intensification, the conversion of grassland to cropland, increased wildlife habitat fragmentation, and increased soil erosion. Precision farming, conservation, cover crops, and the creation of plants better adapted to their local environments can help mitigate these effects. However, changing practices requires that farmers and their advisers understand the limitations of their soils, plants, environment, and production systems. Failure to implement appropriate management practices can result in a rapid decline in soil productivity, diminished water quality, and reduced wildlife habitat.
Rhett B. Larson
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science.
Increased water variability is one of the most pressing challenges presented by global climate change. A warmer atmosphere will hold more water and result in more frequent and more intense El Niño events. Domestic and international water rights regimes must adapt to the more extreme drought and flood cycles resulting from these phenomena.
Laws that allocate rights to water, both at the domestic level between water users and at the international level between nations sharing transboundary water sources, are frequently rigid governance systems ill-suited to adapt to a changing climate. Often, water laws allocate a fixed quantity of water for a certain type of use. At the domestic level, such rights may be considered legally protected private property rights or guaranteed human rights. At the international level, such water allocation regimes may also be dictated by human rights, as well as concerns for national sovereignty. These legal considerations may ossify water governance and inhibit water managers’ abilities to alter water allocations in response to changing water supplies. To respond to water variability arising from climate change, such laws must be reformed or reinterpreted to enhance their adaptive capacity. Such adaptation should consider both intra-generational equity and intergenerational equity.
One potential approach to reinterpreting such water rights regimes is a stronger emphasis on the public trust doctrine. In many nations, water is a public trust resource, owned by the state and held in trust for the benefit of all citizens. Rights to water under this doctrine are merely usufructuary—a right to make a limited use of a specified quantity of water subject to governmental approval. The recognition and enforcement of the fiduciary obligation of water governance institutions to equitably manage the resource, and characterization of water rights as usufructuary, could introduce needed adaptive capacity into domestic water allocation laws. The public trust doctrine has been influential even at the international level, and that influence could be enhanced by recognizing a comparable fiduciary obligation for inter-jurisdictional institutions governing international transboundary waters.
Legal reforms to facilitate water markets may also introduce greater adaptive capacity into otherwise rigid water allocation regimes. Water markets are frequently inefficient for several reasons, including lack of clarity in water rights, externalities inherent in a resource that ignores political boundaries, high transaction costs arising from differing economic and cultural valuations of water, and limited competition when water utilities are frequently natural monopolies. Legal reforms that clarify property rights in water; specify the minimum quantity, quality, and affordability of water to meet basic human needs and environmental flows; and mandate participatory and transparent water pricing and contracting could allow greater flexibility in water allocations through more efficient and equitable water markets.
Wim De Vries, Enzai Du, Klaus Butterbach-Bahl, Lena Schulte-Uebbing, and Frank Dentener
Human activities have rapidly accelerated global nitrogen (N) cycling since the late 19th century. This acceleration has manifold impacts on ecosystem N and carbon (C) cycles, and thus on emissions of the greenhouse gases nitrous oxide (N2O), carbon dioxide (CO2), and methane (CH4), which contribute to climate change.
First, elevated N use in agriculture leads to increased direct N2O emissions. Second, it leads to emissions of ammonia (NH3), nitric oxide (NO), and nitrogen dioxide (NO2) and leaching of nitrate (NO3−), which cause indirect N2O emissions from soils and waterbodies. Third, N use in agriculture may also cause changes in CO2 exchange (emission or uptake) in agricultural soils due to N fertilization (direct effect) and in non-agricultural soils due to atmospheric NHx (NH3+NH4) deposition (indirect effect). Fourth, NOx (NO+NO2) emissions from combustion processes and from fertilized soils lead to elevated NOy (NOx+ other oxidized N) deposition, further affecting CO2 exchange. As most (semi-) natural terrestrial ecosystems and aquatic ecosystems are N limited, human-induced atmospheric N deposition usually increases net primary production (NPP) and thus stimulates C sequestration. NOx emissions, however, also induce tropospheric ozone (O3) formation, and elevated O3 concentrations can lead to a reduction of NPP and plant C sequestration. The impacts of human N fixation on soil CH4 exchange are insignificant compared to the impacts on N2O and CO2 exchange (emissions or uptake). Ignoring shorter lived components and related feedbacks, the net impact of human N fixation on climate thus mainly depends on the magnitude of the cooling effect of CO2 uptake as compared to the magnitude of the warming effect of (direct and indirect) N2O emissions.
The estimated impact of human N fixation on N2O emission is 8.0 (7.0–9.0) Tg N2O-N yr−1, which equals 1.02 (0.89–1.15) Pg CO2-C equivalents (eq) yr−1. The estimated CO2 uptake due to N inputs to terrestrial, freshwater, and marine ecosystems equals −0.75 (−0.56 to −0.97) Pg CO2-C eq yr−1. At present, the impact of human N fixation on increased CO2 sequestration thus largely (on average by nearly 75%) compensates for the stimulating effect on N2O emissions. In the long term, however, effects on ecosystem CO2 sequestration are likely to diminish due to growth limitations by other nutrients such as phosphorus. Furthermore, N-induced O3 exposure reduces CO2 uptake, causing a net C loss of 0.14 (0.07–0.21) Pg CO2-C eq yr−1. Consequently, human N fixation causes an overall increase in net greenhouse gas emissions from global ecosystems, estimated at 0.41 (−0.01 to 0.80) Pg CO2-C eq yr−1. Even when considering all uncertainties, it is likely that human N inputs lead to a net increase in global greenhouse gas emissions.
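The budget arithmetic above can be checked directly; the sketch below simply recombines the central estimates quoted in the text (all in Pg CO2-C eq yr−1) and carries no information beyond them.

```python
# Central estimates (and ranges) from the text, in Pg CO2-C equivalents per year.
n2o_warming = (1.02, 0.89, 1.15)     # direct + indirect N2O emissions (warming)
co2_uptake  = (-0.75, -0.97, -0.56)  # N-stimulated C sequestration (cooling)
o3_c_loss   = (0.14, 0.07, 0.21)     # O3-induced reduction of CO2 uptake

central = n2o_warming[0] + co2_uptake[0] + o3_c_loss[0]
print(f"net effect: {central:+.2f} Pg CO2-C eq/yr")  # +0.41, a net warming

# Fraction of the N2O warming offset by CO2 uptake (~75% per the text)
offset = -co2_uptake[0] / n2o_warming[0]
print(f"CO2 uptake offsets {offset:.0%} of the N2O effect")
```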
These estimates are based on the most recent science and modeling approaches with respect to: (i) N inputs to various ecosystems, including NH3 and NOx emission estimates and the related atmospheric N (NH3 and NOx) deposition and O3 exposure; (ii) N2O emissions in response to N inputs; and (iii) carbon exchange in response to N inputs (C–N response) and O3 exposure (C–O3 response), focusing on the global scale. Apart from presenting the current knowledge, this article also gives an overview of changes in the estimates of these fluxes and C–N response factors over time, including debates on C–N responses in the literature, the uncertainties in the various estimates, and the potential for improving them.
Nations rapidly industrialized after World War II, sharply increasing the extraction of resources from the natural world. Colonial empires broke up on land after the war, but they were re-created in the oceans. The United States, Japan, and the Soviet Union, as well as the British, Germans, and Spanish, industrialized their fisheries, replacing fleets of small-scale, independent artisanal fishermen with fewer but much larger government-subsidized ships. Nations like South Korea and China, as well as the Eastern Bloc countries of Poland and Bulgaria, also began fishing on an almost unimaginable scale. Countries raced to find new stocks of fish to exploit. As the Cold War deepened, nations sought to negotiate fishery agreements with Third World nations. The conflict over territorial claims led to the development of the Law of the Sea process, starting in 1958, and to the adoption of 200-mile exclusive economic zones (EEZ) in the 1970s.
Fishing expanded with the understanding that fish stocks were robust and could withstand high harvest rates. The adoption of maximum sustained yield (MSY) after 1954 as the goal of postwar fishery negotiations assumed that fish stocks produced a harvestable surplus and that scientists could determine how many fish could safely be caught. As fish stocks faltered under the onslaught of industrial fisheries, scientists reassessed their assumptions about how many fish could be caught, but MSY, although modified, remains at the heart of modern fisheries management.
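The "surplus" reasoning behind MSY is commonly formalized with a surplus-production model such as Schaefer's; the model and the parameter values below are a textbook illustration, not drawn from the text above.

```python
# Schaefer surplus-production model: a stock of biomass B, with intrinsic
# growth rate r and unfished carrying capacity K, produces an annual surplus
# r*B*(1 - B/K). The surplus peaks, at r*K/4, when the stock is fished down
# to half of K — the logic behind treating MSY as a safe catch target.

def surplus_production(biomass, r, K):
    """Annual surplus that could in theory be harvested without depleting the stock."""
    return r * biomass * (1 - biomass / K)

r, K = 0.5, 1_000_000          # hypothetical growth rate and unfished biomass (tons)
msy_biomass = K / 2            # stock level that maximizes the surplus
msy = surplus_production(msy_biomass, r, K)
print(f"MSY = {msy:,.0f} tons/yr at B = K/2")  # r*K/4 = 125,000
```

The model's fragility is visible in its own terms: the surplus vanishes both at B = K and as B approaches zero, so overestimating stock size or growth rate turns the "safe" catch into overfishing.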
Matti Nummelin and Niko Urho
Conservation and sustainable use of biodiversity have been at the center of international policymaking for half a century. The main international biodiversity conventions and processes include the Convention on Biological Diversity (CBD) and its protocols, the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), the Convention on Wetlands of International Importance (Ramsar Convention), the World Heritage Convention (WHC), the Convention on the Conservation of Migratory Species of Wild Animals (CMS), the International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA), the International Plant Protection Convention (IPPC), the Commission on Genetic Resources for Food and Agriculture (CGRFA), and the International Convention for the Regulation of Whaling (ICRW). The governance of marine biodiversity in areas beyond national jurisdiction (BBNJ) is also discussed, as political focus has shifted to the protection of the oceans and is expected to culminate in the adoption of a new international convention under the United Nations Convention on the Law of the Sea (UNCLOS). Other conventions and processes with links to biodiversity include the United Nations Convention to Combat Desertification (UNCCD), the United Nations Framework Convention on Climate Change (UNFCCC), and the United Nations Forum on Forests (UNFF).
Despite the multitude of instruments, governments are faced with the fact that biodiversity loss is spiraling and international targets are not being met. The Earth’s sixth mass extinction event has led to various initiatives to fortify the relevance of biodiversity in the UN system and beyond in order to accelerate action on the ground. In the face of an ever more complex international policy landscape on biodiversity, country delegates are seeking to enhance efficiency and reduce fragmentation by enhancing synergies among multilateral environmental agreements and strengthening their science–policy interface. Furthermore, biodiversity has been reflected throughout the 2030 Agenda for Sustainable Development and is gradually gaining more ground in the human rights context. The Global Pact for the Environment, a new international initiative aiming to reinforce soft-law commitments and increase coherence among environmental treaties, holds the potential to influence and strengthen the way biodiversity conventions function, but extensive discussions are still needed before concrete action is agreed upon.
Vincent Moreau and Guillaume Massard
The concept of metabolism has its roots in biology and ecology as a systematic way to account for material flows in organisms and ecosystems. Early applications of the concept attempted to quantify the amount of water and food the human body processes to live and sustain itself. Similarly, ecologists have long studied the metabolism of critical substances and nutrients in ecological succession towards climax. With industrialization, the material and energy requirements of modern economic activities have grown exponentially, together with emissions to the air, water, and soil. By analogy with ecosystems, the concept of metabolism grew into an analytical methodology for economic systems.
Research in the field of material flow analysis has developed approaches to modeling economic systems by assessing the stocks and flows of substances and materials for systems defined in space and time. Material flow analysis encompasses different methods: industrial and urban metabolism, input–output analysis, economy-wide material flow accounting, socioeconomic metabolism, and more recently material flow cost accounting. Each method has specific scales, reference substances such as metals, and indicators such as concentration. A material flow analysis study usually consists of a total of four consecutive steps: (a) system definition, (b) data acquisition, (c) calculation, and (d) interpretation. The law of conservation of mass underlies every application, which implies that all material flows, as well as stocks, must be accounted for.
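The conservation-of-mass principle underlying step (c), calculation, can be sketched in a few lines; the flows below describe a hypothetical system invented purely for illustration.

```python
# Minimal material-flow-analysis balance for a defined system:
# inputs = outputs + change in stock. Flows are tons/yr of some material
# crossing the boundary of a hypothetical urban system.

inputs  = {"imports": 120.0, "domestic extraction": 40.0}
outputs = {"exports": 30.0, "waste": 70.0, "emissions": 10.0}

# Whatever enters and does not leave accumulates as stock (buildings,
# infrastructure, durable goods).
stock_change = sum(inputs.values()) - sum(outputs.values())
print(f"net addition to stock: {stock_change} tons/yr")  # 50.0

# The balance must close; an unexplained residual signals missing flows
# or measurement error introduced in step (b), data acquisition.
assert abs(sum(inputs.values()) - sum(outputs.values()) - stock_change) < 1e-9
```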
In the early 21st century, material depletion, accumulation, and recycling are well-established cases of material flow analysis. Diagnostics and forecasts, as well as historical or backcast analyses, are ideally performed in a material flow analysis, to identify shifts in material consumption for product life cycles or physical accounting and to evaluate the material and energy performance of specific systems.
In practice, material flow analysis supports policy and decision making in urban planning, energy planning, economic and environmental performance, the development of industrial symbiosis and eco-industrial parks, closing material loops in a circular economy, pollution remediation and control, and material and energy supply security. Although material flow analysis assesses the amount and fate of materials and energy rather than their environmental or human health impacts, a tacit assumption is that reduced material throughputs limit such impacts.
Margarete Kalin, William N. Wheeler, Michael P. Sudbury, and Bryn Harris
The first treatise on mining and extractive metallurgy, published by Georgius Agricola in 1556, was also the first to highlight the destructive environmental side effects of mining and metals extraction, namely dead fish and poisoned water. These effects, unfortunately, are still with us. Since 1556, mining methods, knowledge of metal extraction, and chemical and microbial processes leading to the environmental deterioration have grown tremendously. Man’s insatiable appetite for metals and energy has resulted in mines vastly larger than those envisioned in 1556, compounding the deterioration. The annual amount of mined ore and waste rock is estimated to be 20 billion tons, covering 1,000 km2. The industry also annually consumes 80 km3 of freshwater, which becomes contaminated.
Since metals are essential in modern society, cost-effective, sustainable remediation measures need to be developed. Engineered covers and dams enclose wastes and slow the weathering process, but, with time, become permeable. Neutralization of acid mine drainage produces metal-laden sludges that, in time, release the metals again. These measures are stopgaps at best, and are not sustainable. Focus should be on inhibiting or reducing the weathering rate, recycling, and curtailing water usage. The extraction of only the principal economic mineral or metal generally drives the economics, with scant attention being paid to other potential commodities contained in the deposit. Technology exists for recovering more valuable products and enhancing the project economics, resulting in a reduction of wastes and water consumption of up to 80% compared to “conventional processing.”
Implementation of such improvements requires a drastic change, a paradigm shift, in the way that the industry approaches metals extraction. Combining new extraction approaches, more efficient water usage, and ecological engineering methods to deal with wastes will increase the sustainability of the industry and reduce the pressure on water and land resources.
From an ecological perspective, waste rock and tailings need to be thought of as primitive ecosystems. These habitats are populated by heat-, acid- and saline-loving microbes (extremophiles). Ecological engineering utilizes geomicrobiological, physical, and chemical processes to change the mineral surface to encourage biofilm growth (the microbial growth form) within wastes by enhancing the growth of oxygen-consuming microbes. This reduces oxygen available for oxidation, leading to improved drainage quality. At the water–sediment interface, microbes assist in the neutralization of acid water (Acid Reduction Using Microbiology). To remove metals from the waste water column, indigenous biota are promoted (Biological Polishing) with inorganic particulate matter as flocculation agents. This ecological approach generates organic matter, which upon death settles with the adsorbed metals to the sediment. Once the metals reach the deeper, reducing zones of the sediments, microbial biomineralization processes convert the metals to relatively stable secondary minerals, forming biogenic ores for future generations.
The mining industry has developed and thrived in an age when resources, space, and water appeared limitless. With the widely accepted arrival of the Anthropocene and looming global land and water shortages, the mining industry must become more sustainable. Not only is a paradigm shift in thinking needed; the will to implement such a shift is also required for the future of the industry.