María E. Ibarrarán and Jerónimo Chavarría
In Mexico, the laws and norms that regulate the environment emerged at the end of the 19th century to standardize infrastructure construction and preserve nature. However, it was not until the early 1970s that the first formal government entity dedicated to promoting environmental protection, the Vice-Ministry for Environmental Improvement, under the Ministry of Health, was founded, mostly in response to a government initiative rather than social pressure. Other laws were then issued and applied by the Secretariat of Urban Development and Ecology. However, in the 1980s, civil society pressed for more regulations aimed at protecting the environment.
In the 1990s, the Ministry of the Environment, Natural Resources and Fisheries (SEMARNAP) was created, focusing on natural resources, biodiversity, hazardous waste, and urban-industrial environmental problems. Its objective was to reduce trends of environmental deterioration and to promote economic and social development under criteria of sustainability. This and other institutions have evolved since then, covering a larger set of topics and media. Nevertheless, degradation has not been stopped and is far from being reversed: even though there is a toolbox of policies and instruments, many of them economic, they have not been fully implemented in some cases, or enforced in others, because of economic and political factors.
With the changes in institutions, legislation was also modified. Mexico became party to international environmental agreements and included the right to a safe environment in the constitution. However, this legislation has not been enough to modify behavior, often because the incentives for regulators and for polluters themselves are insufficient.
Environmental degradation is a market failure. It can be framed as an externality that markets alone cannot solve, whether because of overproduction, abuse of open-access resources, or underprovision of public goods. In any of these cases, resolution comes only through government intervention. Regulations must weigh the benefits and costs they impose in order to change behavior. However, regardless of formal regulation, there is still a host of environmental problems that affect both urban and rural communities and Indigenous and non-Indigenous populations, and there is a regulatory vacuum when it comes to integrating environmental aspects with economic and social development issues. Examples are the Energy Reform of 2013, the Law of Waters, and the Law of Biodiversity, where impacts on communities are often left aside because of a de facto prevalence of economic activity over human rights. At the same time, legal loopholes prevent adequate management of wildlife resources and sufficient treatment of hazardous waste discarded by industries, even where these are regulated. Furthermore, environmental regulations rely on corrective instruments, such as obligations, restrictions, and sanctions, and their preventive character has not been strengthened. It is still less expensive to pollute or degrade the environment than to take measures against doing so. A shift in the paradigm toward policies that create incentives to protect the environment, for polluters and regulators alike, may foster much better environmental quality.
George Morris and Patrick Saunders
Most people today readily accept that their health and disease are products of personal characteristics such as their age, gender, and genetic inheritance; the choices they make; and, of course, a complex array of factors operating at the level of society. Individuals frequently have little or no control over the cultural, economic, and social influences that shape their lives and their health and well-being. The environment that forms the physical context for their lives is one such influence and comprises the places where people live, learn, work, play, and socialize; the air they breathe; and the food and water they consume. Interest in the physical environment as a component of human health goes back many thousands of years; when, around two and a half millennia ago, humans started to write down ideas about health, disease, and their determinants, many of these ideas centered on the physical environment.
The modern public health movement came into existence in the 19th century as a response to the dreadful unsanitary conditions endured by the urban poor of the Industrial Revolution. These conditions nurtured disease, dramatically shortening life. Thus, a public health movement that was ultimately to change the health and prosperity of millions of people across the world was launched on an “environmental conceptualization” of health. Yet the physical environment, especially in towns and cities, has changed dramatically in the 200 years since the Industrial Revolution, and so too have our understanding of the relationship between the environment and human health and the importance we attach to it.
The decades immediately following World War II were distinguished by declining influence for public health as a discipline. Health and disease were increasingly “individualized”—a trend that served to further diminish interest in the environment, which was no longer seen as an important component in the health concerns of the day. Yet, as the 20th century wore on, a range of factors emerged to re-establish a belief in the environment as a key issue in the health of Western society. These included new toxic and infectious threats acting at the population level but also the renaissance of a “socioecological model” of public health that demanded a much richer and often more subtle understanding of how local surroundings might act to both improve and damage human health and well-being.
Yet, just as society has begun to shape a much more sophisticated response to reunite health with place and, with this, shape new policies to address complex contemporary challenges, such as obesity, diminished mental health and well-being, and inequities, a new challenge has emerged. In its simplest terms, human activity now seriously threatens the planetary processes and systems on which humankind depends for health and well-being and, ultimately, survival. Ecological public health—the need to build health and well-being, henceforth, on ecological principles—may be seen as society’s greatest 21st-century imperative. Success will involve nothing less than a fundamental rethink of the interplay between society, the economy, and the environment. Importantly, it will demand an environmental conceptualization of public health no less radical than the one that launched modern public health in the 19th century, only now the challenge presents itself on a vastly extended temporal and spatial scale.
Paolo Vineis and Federica Russo
While genomics has been founded on accurate tools that lead to a limited amount of classification error, exposure assessment in epidemiology is often affected by large error. The “environment” is in fact a complex construct that encompasses chemical exposures (e.g., to carcinogens); biological agents (viruses, or the “microbiome”); and social relationships. The “exposome” concept was then put forward to stress the relatively poor development of appropriate tools for exposure assessment when applied to the study of disease etiology. Three layers of the exposome have been proposed: “general external” (including social capital, stress, and psychology); “specific external” (including chemicals, viruses, radiation, etc.); and “internal” (including, for example, metabolism and gut microflora). In addition, there are at least three properties of the exposome: (a) it is based on a refinement of tools to measure exposures (including internal measurements in the body); (b) it involves a broad definition of “exposure” or environment, including overarching concepts at a societal level; and (c) it involves a temporal component (i.e., exposure is analyzed in a life-course perspective). The conceptual and practical challenge is how the different layers (i.e., general external, specific external, and internal) connect to each other in a causally meaningful sequence. The relevance of this question pertains to the translation of science into policy—for example, if experiences in early life impact the adult risk of disease, and the quality of aging, how is distant action to be incorporated into biological causal models and into policy interventions? A useful causal theory to address scientific and policy questions about exposure is based on the concept of information transmission. Such a theory can explain how to connect the different layers of the exposome in a life-course temporal frame and helps identify the best level for intervention (molecular, individual, or population level).
In this context epigenetics plays a key role, partly because it explains the long-distance persistence of epigenetic changes via the concept of “epigenetic memory.”
The animal world is under increasing pressure, given the magnitude of anthropogenic environmental stress, especially from human-caused rapid climate change together with habitat conversion, fragmentation, and destruction. There is a global wave of species extinctions and decline in local species abundance. To stop or even reverse this so-called defaunation process, in situ conservation (in the wild) is no longer effective without ex situ conservation (in captivity). Consequently, zoos could play an ever-greater role in the conservation of endangered species and wildlife—hence the slogan Captivity for Conservation.
However, the integration of zoo-based tools and techniques in species conservation has led to many conflicts between wildlife conservationists and animal protectionists. Many wildlife conservationists agree with Michael Soulé, the widely acclaimed doyen of the relatively new discipline of conservation biology, that conservation and animal welfare are conceptually distinct, and that they should remain politically separate. Animal protectionists, on the other hand, draw support from existing leading accounts of animal ethics that oppose the idea of captivity for conservation, either because infringing an individual’s right to freedom for the preservation of the species is considered as morally wrong, or because the benefits of species conservation are not seen as significant enough to overcome the presumption against depriving an animal of its liberty.
Both sides view animals through different lenses and address different concerns. Whereas animal ethicists focus on individual organisms, and are concerned about the welfare and liberty of animals, wildlife conservationists perceive animals as parts of greater wholes such as species or ecosystems, and consider biodiversity and ecological integrity as key topics. This seemingly intractable controversy can be overcome by transcending both perspectives, and developing a bifocal view in which zoo animals are perceived as individuals in need of specific care and, at the same time, as members of a species in need of protection.
Based on such a bifocal approach that has lately been adopted by a growing international movement of “Compassionate Conservation,” the modern zoo can only achieve its conservation mission if it finds a morally acceptable balance between animal welfare concerns and species conservation commitments. The prospects for the zoo to achieve such a balance are promising. Over the past decade or so, zoos have made serious and sustained efforts to ensure and enhance animal welfare. At the same time, the zoo’s contribution to species conservation has also improved considerably.
Juha Merilä and Ary A. Hoffmann
Changing climatic conditions have both direct and indirect influences on abiotic and biotic processes and represent a potent source of novel selection pressures for adaptive evolution. In addition, climate change can impact evolution by altering patterns of hybridization, changing population size, and altering patterns of gene flow in landscapes. Given that scientific evidence for rapid evolutionary adaptation to spatial variation in abiotic and biotic environmental conditions—variation analogous to the changes brought by climate change—is ubiquitous, ongoing climate change is expected to have large and widespread evolutionary impacts on wild populations. However, phenotypic plasticity, migration, and various kinds of genetic and ecological constraints can preclude organisms from evolving much in response to climate change, and generalizations about the rate and magnitude of expected responses are difficult to make for a number of reasons.
First, the study of microevolutionary responses to climate change is a young field of investigation. While interest in evolutionary impacts of climate change goes back to early macroevolutionary (paleontological) studies focused on prehistoric climate changes, microevolutionary studies started only in the late 1980s. The discipline gained real momentum in the 2000s after the concept of climate change became of interest to the general public and funding organizations. As such, no general conclusions have yet emerged. Second, the complexity of biotic changes triggered by novel climatic conditions renders predictions about patterns and strength of natural selection difficult. Third, predictions are also complicated because the expression of genetic variability in traits of ecological importance varies with environmental conditions, affecting expected responses to climate-mediated selection.
There are now several examples where organisms have evolved in response to selection pressures associated with climate change, including changes in the timing of life history events and in the ability to tolerate abiotic and biotic stresses arising from climate change. However, there are also many examples where expected selection responses have not been detected. This may be partly explainable by methodological difficulties involved with detecting genetic changes, but also by various processes constraining evolution.
There are concerns that the rates of environmental changes are too fast to allow many, especially large and long-lived, organisms to maintain adaptedness. Theoretical studies suggest that maximal sustainable rates of evolutionary change are on the order of 0.1 haldanes (i.e., phenotypic standard deviations per generation) or less, whereas the rates expected under current climate change projections will often require faster adaptation. Hence, widespread maladaptation and extinctions are expected. These concerns are compounded by the expectation that the amount of genetic variation harbored by populations and available for selection will be reduced by habitat destruction and fragmentation caused by human activities, although in some cases this may be countered by hybridization. Rates of adaptation will also depend on patterns of gene flow and the steepness of climatic gradients. Theoretical studies also suggest that phenotypic plasticity (i.e., nongenetic phenotypic changes) can affect evolutionary genetic changes, but relevant empirical evidence is still scarce. While all of these factors point to a high level of uncertainty around evolutionary changes, it is nevertheless important to consider evolutionary resilience in enhancing the ability of organisms to adapt to climate change.
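The 0.1-haldane threshold mentioned above can be made explicit. As a sketch using the conventional definition of the haldane (standard in evolutionary biology, not specific to this entry), the rate of phenotypic change is the shift in trait mean, scaled by the pooled phenotypic standard deviation, per generation:

```latex
h \;=\; \frac{\bar{z}_2 - \bar{z}_1}{s_p \, g}
```

where \(\bar{z}_1\) and \(\bar{z}_2\) are the trait means at the start and end of the interval, \(s_p\) is the pooled phenotypic standard deviation, and \(g\) is the number of generations elapsed. On this definition, a population sustaining \(h \le 0.1\) shifts its mean by at most one phenotypic standard deviation every ten generations, which frames why faster climate-driven change is expected to produce maladaptation.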
Agriculture has been the principal influence on the physical structure of the English landscape for many thousands of years. Driven by a wider raft of demographic, social, and economic developments, farming has changed in complex ways over this lengthy period, with differing responses to the productive potential and problems of local environments leading to the emergence of distinct regional landscapes. The character and configuration of these, as much as any contemporary influences, have in turn structured the practice of agriculture at particular points in time. The increasing complexity of the wider economy has also been a key influence on the development of the farmed landscape, especially large-scale industrialization in the late 18th and 19th centuries; and, from the late 19th century, globalization and increasing levels of state intervention. Change in agricultural systems has not continued at a constant rate but has displayed periods of more and less innovation.
Indigenous rights to water follow diverse trajectories across the globe. In Asia and Africa, even the concept of indigeneity is questioned, and peoples with ancient histories connected to place are defined by ethnicity as opposed to sovereign or place-based rights, although many seek to change that. In South America, indigenous voices are rising. In the parts of the globe colonized by European settlement, the definition of these rights has been in a continual state of transition as social norms evolve and indigenous capacity to assert rights grows. From the point of European contact, these rights have been contested. They have evolved primarily through judicial rulings by the highest court in the relevant nation-state. For those nation-states that do address whether indigenous rights to land and water exist, the approach has ranged from the 18th- and 19th-century doctrine of terra nullius (the land and its resources belonged to no one) to a recognized right of “use and occupancy” that could be usurped under the doctrine of “discovery” by the conquering power. In the 20th and 21st centuries, the evolution of the recognition of indigenous rights remains uneven, reflecting the values, judicial doctrine, and degree to which the contested water resource is already developed in the relevant nation-state. Thus, indigenous rights to water range from the recognition of cultural and spiritual rights that would have been in existence at the time of European contact, to inclusion of subsistence rights, rights sufficient for economic development, rights for homeland purposes, and rights as guardian for a water resource. At the forefront of this process of recognition is the right of indigenous peoples, as sovereigns, to control, allocate, develop, and protect their own water resources.
This aspirational goal is reflected in the effort to create a common global understanding of the rights of indigenous peoples through declaration and definition of the right of self-determination articulated in the UN Declaration on the Rights of Indigenous Peoples.
Mark V. Barrow
The prospect of extinction, the complete loss of a species or other group of organisms, has long provoked strong responses. Until the turn of the 19th century, deeply held and widely shared beliefs about the order of nature led to a firm rejection of the possibility that species could entirely vanish. During the 19th century, however, resistance to the idea of extinction gave way to widespread acceptance following the discovery of the fossil remains of numerous previously unknown forms and direct experience with contemporary human-driven decline and the destruction of several species. In an effort to stem continued loss, at the turn of the 20th century, naturalists, conservationists, and sportsmen developed arguments for preventing extinction, created wildlife conservation organizations, lobbied for early protective laws and treaties, pushed for the first government-sponsored parks and refuges, and experimented with captive breeding. In the first half of the 20th century, scientists began systematically gathering more data about the problem through global inventories of endangered species and the first life-history and ecological studies of those species.
The second half of the 20th and the beginning of the 21st centuries have been characterized both by accelerating threats to the world’s biota and greater attention to the problem of extinction. Powerful new laws, like the U.S. Endangered Species Act of 1973, have been enacted and numerous international agreements negotiated in an attempt to address the issue. Despite considerable effort, scientists remain fearful that the current rate of species loss is similar to that experienced during the five great mass extinction events identified in the fossil record, leading to declarations that the world is facing a biodiversity crisis. Responding to this crisis, often referred to as the sixth extinction, scientists have launched a new interdisciplinary, mission-oriented discipline, conservation biology, that seeks not just to understand but also to reverse biota loss. Scientists and conservationists have also developed controversial new approaches to the growing problem of extinction: rewilding, which involves establishing expansive core reserves that are connected with migratory corridors and that include populations of apex predators, and de-extinction, which uses genetic engineering techniques in a bid to resurrect lost species. Even with the development of new knowledge and new tools that seek to reverse large-scale species decline, a new and particularly imposing danger, climate change, looms on the horizon, threatening to undermine those efforts.
Fisheries science emerged in the mid-19th century, when scientists volunteered to conduct conservation-related investigations of commercially important aquatic species for the governments of North Atlantic nations. Scientists also promoted oyster culture and fish hatcheries to sustain the aquatic harvests. Fisheries science fully professionalized with specialized graduate training in the 1920s.
The earliest stage, involving inventory science, trawling surveys, and natural history studies, continued to dominate into the 1930s within the European colonial diaspora. Meanwhile, scientists in Scandinavian countries, Britain, Germany, the United States, and Japan began developing quantitative fisheries science after 1900, incorporating hydrography, age-determination studies, and population dynamics. Norwegian biologist Johan Hjort’s 1914 finding, that the size of a large “year class” of juvenile fish is unrelated to the size of the spawning population, created the central foundation and conundrum of later fisheries science. By the 1920s, fisheries scientists in Europe and America were striving to develop a theory of fishing. They attempted to develop predictive models that incorporated statistical and quantitative analysis of past fishing success, as well as quantitative values reflecting a species’ population demographics, as a basis for predicting future catches and managing fisheries for sustainability. This research was supported by international scientific organizations such as the International Council for the Exploration of the Sea (ICES), the International Pacific Halibut Commission (IPHC), and the United Nations’ Food and Agriculture Organization (FAO).
Both nationally and internationally, political entanglement was an inevitable feature of fisheries science. Beyond substituting their science for fishers’ traditional and practical knowledge, many postwar fisheries scientists also brought progressive ideals into fisheries management, advocating fishing for a maximum sustainable yield. This in turn made it possible for governments, economists, and even scientists, to use this nebulous target to project preferred social, political, and economic outcomes, while altogether discarding any practical conservation measures to rein in globalized postwar industrialized fishing. These ideals were also exported to nascent postwar fisheries science programs in developing Pacific and Indian Ocean nations and in Eastern Europe and Turkey.
The vision of mid-century triumphalist science, that industrial fisheries could be scientifically managed like any other industrial enterprise, was thwarted by commercial fish stock collapses, beginning slowly in the 1950s and accelerating after 1970, including the massive northern cod crisis of the early 1990s. In the 1980s scientists, aided by more powerful computers, attempted multi-species models to understand the different impacts of a fishery on various species. Daniel Pauly led the way with multi-species models for tropical fisheries, where the need for such was most urgent, and pioneered the global database FishBase, using fishing data collected by the FAO and national bodies. In Canada the cod crisis inspired Ransom Myers to use large databases for fisheries analysis to show the role of overfishing in causing that crisis. After 1980 population ecologists also demonstrated the importance of life history data for understanding fish species’ responses to fishery-induced population and environmental change.
With fishing continuing to shrink many global commercial stocks, scientists have demonstrated how different measures can manage fisheries for species with different life-history profiles. Aside from the need for effective scientific monitoring, the biggest ongoing challenges remain having politicians, governments, fisheries industry members, and other stakeholders commit to scientifically recommended long-term conservation measures.
Tomiko Yamaguchi and Shun-Nan Chiang
Food safety has been a critical issue since the beginning of human existence, but more recently the nature of concerns over food safety has changed. Further, in terms of both scale and impact, the modern problems of food safety are very different from those confronted in the past. For example, especially since the late 1990s, society has faced food safety crises and scares arising from threats as diverse as bovine spongiform encephalopathy (BSE), dioxin contamination, and melamine-tainted infant milk formula. These phenomena show that an ever-increasing variety of contaminants such as chemical and microbial agents can potentially find their way into the food supply, while novel foods such as GM foods and cultured meat add new challenges when it comes to certifying food safety.
Food safety has become a particularly complex issue in the context of the global economy because the governance of food safety is entangled with several larger trends at the global scale, including (a) trade liberalization in the 1980s; (b) the adoption of a risk analysis framework by global and national food safety administrations; and (c) the spread of food quality management regimes throughout the entire food industry, from food production to processing and retail. Furthermore, there are vast differences between developed and developing countries with respect to both food safety regulations and prominent food safety issues. These facts, combined with the borderless nature of sociotechnical food systems, contribute to a situation in which it is extremely challenging for any individual country to manage food safety issues within its jurisdiction. This observation underscores the importance of global food safety governance, a goal which is in itself difficult to achieve.
Two especially significant dilemmas have emerged within the existing situation vis-à-vis global food safety governance. The first involves the tensions inherent in a “modern” food safety governance approach, a model that combines a science-based strategy of dealing with food safety problems, on one hand, with the ideal of participatory democracy, on the other. Problems arise from the contradictions between the science-based risk management approach, focused narrowly on monitoring and mitigation of hazards, and the wide-ranging complexity of the social, political, and interpersonal factors that shape people’s real-world concerns about food safety. The second is the cross-border application of risk management to food imports in the Global North and its implications for exporting countries in the Global South. Problems arise from disparities in approaches and expectations regarding food safety between the Global North and the South. These two dilemmas have one thing in common: each inherently contains challenges arising from internal contradictions, as when the goal of achieving sound and consistent solutions to food safety issues is pursued alongside the goal of building a broad consensus across varying actors whose values, norms, needs, and interests differ and who are situated in differing socioeconomic and political contexts. Drawing insights from the sociology of agriculture and food and from social studies of science, an attempt is made to unpack the societal and policy challenges of food safety governance in a globalized economy.
Wun Jern Ng, Keke Xiao, Vinay Kumar Tyagi, Chaozhi Pan, and Leong Soon Poh
Agriculture waste can be a significant issue in waste management, as its impact can be felt far from its place of origin. Post-harvest crop residues require clearance prior to the next planting, and a common practice is burning on the field. This uncontrolled burning results in air pollution and can adversely impact the environment far from the burn site. Agriculture waste can also include animal husbandry waste, such as from cattle, swine, and poultry. Animal manure not only causes odors but also pollutes water if discharged untreated. However, agricultural activities, particularly on a large scale, are typically at some distance from urban centers, and the environmental impacts associated with production may not be well recognized by consumers. As the end point of consumption for agricultural produce, urban areas in turn generate food waste, which can contribute significantly to municipal solid wastes. There is a correlation between the quantity of food waste generated and a community’s economic progress.
Managing waste carries a cost, which may represent a transfer of costs from waste generators to the public. However, waste need not be seen only as an unwanted material that requires costly treatment before disposal. The waste may instead be perceived as a raw material for resource recovery. For example, the material may contain substantial quantities of organic carbon, which can be recovered for energy generation. This offers an opportunity for producing and using renewable and environment-friendly fuels. The “waste” may also include quantities of recoverable nutrients such as nitrogen and phosphorus.
Forest transitions take place when trends over time in forest cover shift from deforestation to reforestation. These transitions are of immense interest to researchers because the shift from deforestation to reforestation brings with it a range of environmental benefits. The most important of these would be an increased volume of sequestered carbon, which if large enough would slow climate change. This anticipated atmospheric effect makes the circumstances surrounding forest transitions of immediate interest to policymakers in the climate change era. This encyclopedia entry outlines these circumstances. It begins by describing the socio-ecological foundations of the first forest transitions in western Europe. Then it discusses the evolution of the idea of a forest transition, from its introduction in 1990 to its latest iteration in 2019. This discussion describes the proliferation of different paths through the forest transition. The focus then shifts to a discussion of the primary driver of the 20th-century forest transitions, economic development, in its urbanizing, industrializing, and globalizing forms. The ecological dimension of the forest transition becomes the next focus of the discussion, which describes the worldwide redistribution of forests toward more upland settings. Climate change since 2000, with its more extreme ecological events in the form of storms and droughts, has obscured some ongoing forest transitions. The final segment of this entry focuses on the role of the state in forest transitions. States have become more proactive in managing forest transitions. This tendency became more marked after 2010 as governments have searched for ways to reduce carbon emissions or to offset emissions through more carbon sequestration. Forest transitions, by promoting forest expansion, would contribute additional carbon offsets to a nation’s carbon budget.
For this reason, the era of climate change could also see an expansion in the number of promoted forest transitions.
Hans Keune and Timo Assmuth
Framing and dealing with complexity are crucially important in environment and human health science, policy, and practice. Complexity is a key feature of most environment and human health issues, which by definition include aspects of the environment and human health, both of which constitute complex phenomena. The number and range of factors that may play a role in an environment and human health issue are enormous, and the issues have a multitude of characteristics and consequences. Framing this complexity is crucial because it will involve key decisions about what to take into account when addressing environment and human health issues and how to deal with them. This is not merely a technical process of scientific framing, but also a methodological decision-making process with both scientific and societal implications. In general, the benefits and risks related to such issues cannot be generalized or objectified, and will be distributed unevenly, resulting in health and environmental inequalities. Even more generally, framing is crucial because it reflects cultural factors and historical contingencies, perceptions and mindsets, political processes, and associated values and worldviews. Framing is at the core of how we as humans relate to, and deal with, environment and human health, as scientists, policymakers, and practitioners, with models, policies, or actions.
David E. Clay, Sharon A. Clay, Thomas DeSutter, and Cheryl Reese
Since the discovery that food security could be improved by pushing seeds into the soil and later harvesting a desirable crop, agriculture and agronomy have gone through cycles of discovery, implementation, and innovation. Discoveries have produced predicted and unpredicted impacts on the production and consumption of locally produced foods. Changes in technology, such as the development of the self-cleaning steel plow in the 19th century, provided a critical tool needed to cultivate and seed annual crops in the Great Plains of North America. However, plowing the Great Plains would not have been possible without the domestication of plants and animals and the discovery of the yoke and harness. Associated with plowing the prairies were extensive soil nutrient mining, a rapid loss of soil carbon, and increased wind and water erosion. More recently, the development of genetically modified organisms (GMOs) and no-tillage planters has contributed to increased adoption of conservation tillage, which is less damaging to the soil. In the future, the ultimate impact of climate change on agronomic practices in the North American Great Plains is unknown. However, projected increasing temperatures and decreased rainfall in the southern Great Plains (SGP) will likely reduce agricultural productivity. Different results are likely in the northern Great Plains (NGP), where higher temperatures can lead to increased agricultural intensification, the conversion of grassland to cropland, increased wildlife habitat fragmentation, and increased soil erosion. Precision farming, conservation, cover crops, and the creation of plants better adapted to their local environment can help mitigate these effects. However, changing practices requires that farmers and their advisers understand the limitations of their soils, plants, environment, and production systems.
Failure to implement appropriate management practices can result in a rapid decline in soil productivity, diminished water quality, and reduced wildlife habitat.
After millennia of hunting and gathering, prehistoric human societies around the world made the transition to food production using domesticated plants and animals. Several key areas for the initial domestication of plants and animals can be identified: southwestern Asia, Mesoamerica, China, Neotropical South America, eastern North America, Highland New Guinea, and sub-Saharan Africa. In the Old World, wheat, barley, millet, rice, sheep, goats, cattle, and pigs were the major founding crops, while in the New World, maize, squashes, beans, and many other seed and tuber plants were brought into cultivation. Although each area had its own distinct pathway to agriculture, it typically followed a standard path from resource management by hunter-gatherers, through incipient cultivation (and livestock herding in some areas) and domestication, to commitment to agriculture. Many theories to explain the transition to agriculture have been proposed. Early single-factor hypotheses have been largely discarded in favor of models drawn from human evolutionary biology that emphasize the interplay between humans and the species targeted for domestication. Although, within the long span of human history, the transition from hunting and gathering to farming over the last 10,000 years can be considered extraordinarily rapid, the process usually took decades, centuries, or even millennia when considered from the perspective of the humans involved. From these core areas, agricultural practices dispersed, both through their integration into the plant and animal economies of hunter-gatherer societies and through the spread of farming populations. The transition to agriculture had consequences on a global scale, leading to social complexity and, in many cases, urban societies that would be impossible to imagine without agriculture.
Anil Markandya, Elena Paglialunga, Valeria Costantini, and Giorgia Sforna
Economic damage from climate change includes several aspects that need to be considered at the global and regional levels to achieve an equitable common solution to global warming. The economic literature reviewed here analyzes this issue from three general perspectives.
First, the analytical estimation of the linkages between damages in monetary terms and climate variables, such as projections of temperature, precipitation, and the frequency of extreme events, is rapidly evolving. Damage functions are included in complex economic models in order to calculate the economic impact of climate change on economic output and growth, thus informing the debate on the amount of resources that should be devoted to reducing greenhouse gas (GHG) emissions and limiting climate damages. The choice of geographical aggregation in this respect is a crucial aspect to be considered if policy advice is to be formulated on the basis of model results. The higher the level of regional detail, the more reliable the results are in terms of the geographical distribution of economic damages.
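As a stylized illustration, many integrated assessment models express such a damage function as a simple polynomial mapping temperature rise to a fraction of output lost. The quadratic form and coefficient below follow the DICE style but are assumptions chosen for illustration, not values taken from the literature reviewed here:

```python
# Stylized quadratic damage function of the DICE type: the fraction of
# gross world output lost at a given rise in global mean temperature.
# The coefficient `a` is an illustrative assumption, not an estimate.

def damage_fraction(delta_t, a=0.00236):
    """Output share lost at warming of delta_t degrees C (quadratic form)."""
    return a * delta_t ** 2

for dt in (1.0, 2.0, 3.0, 4.0):
    print(f"+{dt:.0f} degrees C -> {damage_fraction(dt):.2%} of output lost")
```

Because the form is quadratic, doubling the warming quadruples the damage share, which is one reason model results are so sensitive to the projected temperature path.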
Second, the precise estimation of the costs associated with different damages caused by climate change is attracting growing interest. Climate cost estimates are highly heterogeneous for several reasons, such as the formulation of the damage function adopted, the modeling design of the economic impact, the temporal horizon considered, and the differentiation across sectors. Two broad categories of analysis are relevant. The first refers to the choice of the sectoral dimension under investigation, where some studies cover multiple sectors and their interactions, while others analyze specific sectors in depth. The second classification criterion refers to the choice of the economic aspects estimated, where one strand of literature analyzes only market-based costs, while other analyses also include non-market (or intangible) damages. The most commonly investigated sectors are agriculture, forestry, health, energy, coastal zones and sea level rise, extreme events, tourism, ecosystems, industry, air quality, and catastrophic damages. Most studies consider market-based costs, while non-market impacts still need to be better represented in economic models.
Third, the computation of a single number through the analytical framework of the social cost of carbon (SCC) represents a key aspect of the process of adapting complex results in order to properly inform the political debate. The SCC represents the marginal global damage cost of carbon emissions and can also be interpreted as the economic value of damages avoided per unit of GHG emission reduction. Several uncertainties still influence the robustness of the SCC analytical framework, such as the choice of the discount rate, which strongly influences whether the SCC supports mitigation action in the short term.
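Because the SCC is a present value of future marginal damages, the discount rate dominates the result. A minimal sketch, assuming a purely hypothetical constant damage stream, makes the sensitivity concrete:

```python
# Minimal sketch: the social cost of carbon (SCC) as the present value
# of a stream of marginal damages from one extra tonne of CO2.
# The damage path below is purely hypothetical.

def scc(annual_damages, rate):
    """Discounted sum of a marginal damage stream (USD per tCO2)."""
    return sum(d / (1.0 + rate) ** t for t, d in enumerate(annual_damages))

# Hypothetical constant marginal damage of $2/tCO2 per year for 100 years.
damages = [2.0] * 100

for r in (0.01, 0.03, 0.05):
    print(f"discount rate {r:.0%}: SCC = ${scc(damages, r):.0f}/tCO2")
# The same damage stream yields an SCC roughly three times larger
# at a 1% discount rate than at 5%.
```

The threefold spread from a single modeling choice illustrates why the discount rate, rather than the physical science, is often the decisive parameter in SCC debates.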
Although the debate on the economic damages arising from climate change is flourishing, several aspects still need to be investigated in order to build a common consensus within the scientific community as a necessary condition to properly inform the political debate and to facilitate the achievement of a long-term equitable global climate agreement.
Fred Mackenzie and Abraham Lerman
The tendency to represent natural processes as cycles—from the Latin cyclus and Greek κυκλος—is undoubtedly rooted in human observations of repeating or periodic phenomena. The oldest notions of the water cycle, as water moving from the earth to the air and back again, are mentioned in the Old Testament and by Greek philosophers, from the 900s to the 300s BCE.
The main “bioessential” chemical elements are carbon (C), nitrogen (N), phosphorus (P), oxygen (O), and hydrogen (H). These are represented in the mean composition of aquatic photosynthesizing organisms as the atomic abundance ratio C:N:P = 106:16:1 or as (CH2O)106(NH3)16(H3PO4). In land plants, estimates of mean composition vary from C:N:P = 510:4:1 to 2057:17:1. On land, the photosynthesizing organisms are much more efficient than those in water, incorporating more carbon atoms for each atom of phosphorus. The bioessential elements are coupled by living organisms in the exogenic cycle, the processes at and near the Earth’s surface, and in the endogenic cycle of processes that include subduction into the Earth’s interior and return to the surface. The main reservoirs of the bioessential elements are very different: although oxygen is the most abundant element in the Earth’s crust, most of it is locked in silicate minerals as SiO2, and the forms available to biogeochemical cycling are oxygen in water and, as a product of photosynthesis, O2 gas in the atmosphere. Carbon resides in the atmospheric reservoir of CO2 gas and is dissolved in ocean and fresh waters. The main nitrogen reservoir is molecular N2 in the atmosphere, along with oxidized and reduced nitrogen compounds in waters. Phosphorus occurs in the oxidized form of the phosphate ion in crustal minerals, from which it is leached into water.
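The atomic ratio C:N:P = 106:16:1 can be converted into a mass ratio with standard atomic weights, a small calculation showing that carbon also dominates by mass:

```python
# Converting the Redfield atomic ratio C:N:P = 106:16:1 to a mass ratio
# using standard atomic weights (g/mol, from the periodic table).

ATOMIC_WEIGHT = {"C": 12.011, "N": 14.007, "P": 30.974}
REDFIELD_ATOMS = {"C": 106, "N": 16, "P": 1}

mass = {el: n * ATOMIC_WEIGHT[el] for el, n in REDFIELD_ATOMS.items()}
mass_ratio = {el: m / mass["P"] for el, m in mass.items()}  # normalize to P

for el in ("C", "N", "P"):
    print(f"{el}: {mass_ratio[el]:.1f}")
# By mass, C:N:P is roughly 41:7.2:1
```

The same normalization applied to the land-plant ratios quoted above would show an even stronger dominance of carbon per atom of phosphorus.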
The natural cycles of the bioessential elements have been greatly perturbed since the late 1700s by human industrial and agricultural activities, a period known as the Anthropocene epoch. The increase in CO2, CH4, and NOx emissions to the atmosphere from fossil-fuel burning and land-use changes has rapidly and strongly modified the chemical composition of the atmosphere. This change has affected the balance of solar radiation absorbed by the atmosphere—generally known as “climate change”—and the acidity of surface-ocean waters, referred to as “ocean acidification.” CO2 in water is a weak acid that dissolves carbonate minerals, biogenically and inorganically formed in the ocean, and it thus modifies the chemical composition of ocean water. Overall, a major anthropogenic perturbation of the biogeochemical cycles has been the faster increase in the atmospheric concentration of CO2 than its removal from the atmosphere by plants, dissolution in the ocean, and uptake in mineral weathering.
Rhett B. Larson
Increased water variability is one of the most pressing challenges presented by global climate change. A warmer atmosphere will hold more water and will result in more frequent and more intense El Niño events. Domestic and international water rights regimes must adapt to the more extreme drought and flood cycles resulting from these phenomena.
Laws that allocate rights to water, both at the domestic level between water users and at the international level between nations sharing transboundary water sources, are frequently rigid governance systems ill-suited to adapt to a changing climate. Often, water laws allocate a fixed quantity of water for a certain type of use. At the domestic level, such rights may be considered legally protected private property rights or guaranteed human rights. At the international level, such water allocation regimes may also be dictated by human rights, as well as concerns for national sovereignty. These legal considerations may ossify water governance and inhibit water managers’ abilities to alter water allocations in response to changing water supplies. To respond to water variability arising from climate change, such laws must be reformed or reinterpreted to enhance their adaptive capacity. Such adaptation should consider both intra-generational equity and inter-generational equity.
One potential approach to reinterpreting such water rights regimes is a stronger emphasis on the public trust doctrine. In many nations, water is a public trust resource, owned by the state and held in trust for the benefit of all citizens. Rights to water under this doctrine are merely usufructuary—a right to make a limited use of a specified quantity of water subject to governmental approval. The recognition and enforcement of the fiduciary obligation of water governance institutions to equitably manage the resource, and characterization of water rights as usufructuary, could introduce needed adaptive capacity into domestic water allocation laws. The public trust doctrine has been influential even at the international level, and that influence could be enhanced by recognizing a comparable fiduciary obligation for inter-jurisdictional institutions governing international transboundary waters.
Legal reforms to facilitate water markets may also introduce greater adaptive capacity into otherwise rigid water allocation regimes. Water markets are frequently inefficient for several reasons, including lack of clarity in water rights, externalities inherent in a resource that ignores political boundaries, high transaction costs arising from differing economic and cultural valuations of water, and limited competition when water utilities are frequently natural monopolies. Legal reforms that clarify property rights in water, specify the minimum quantity, quality, and affordability of water to meet basic human needs and environmental flows, and mandate participatory and transparent water pricing and contracting could allow greater flexibility in water allocations through more efficient and equitable water markets.
Jac van der Gun
Human behavior in relation to groundwater has remained relatively unchanged from ancient times until the early 20th century. Intercepting water from springs or exploiting shallow aquifers by means of wells or qanats was common practice worldwide, but only modest quantities of groundwater were abstracted. In general, the resource was taken for granted in the absence of any knowledge regarding groundwater systems and their vulnerability. During the 20th century, however, an unprecedented change started spreading globally—a change so drastic that it could be called the Global Groundwater Revolution. It did not surface simultaneously everywhere but rather encroached into different regions as waves of change, with varied timing, depending on local conditions. This Global Groundwater Revolution has three main components: (1) rapid intensification of the exploitation of groundwater, (2) fundamentally changing views on groundwater, and (3) the emergence of integrated groundwater management and governance. These three components are mostly interdependent, although their emergence and development tend to be somewhat asynchronous. The Global Groundwater Revolution marks a radical historical change in the relation between human society and groundwater. It has taken the benefits produced by groundwater to an unprecedented level, but their sustainability is assured only if there is good groundwater governance.
Wim De Vries, Enzai Du, Klaus Butterbach-Bahl, Lena Schulte-Uebbing, and Frank Dentener
Human activities have rapidly accelerated global nitrogen (N) cycling since the late 19th century. This acceleration has manifold impacts on ecosystem N and carbon (C) cycles, and thus on emissions of the greenhouse gases nitrous oxide (N2O), carbon dioxide (CO2), and methane (CH4), which contribute to climate change.
First, elevated N use in agriculture leads to increased direct N2O emissions. Second, it leads to emissions of ammonia (NH3), nitric oxide (NO), and nitrogen dioxide (NO2) and leaching of nitrate (NO3−), which cause indirect N2O emissions from soils and waterbodies. Third, N use in agriculture may also cause changes in CO2 exchange (emission or uptake) in agricultural soils due to N fertilization (direct effect) and in non-agricultural soils due to atmospheric NHx (NH3 + NH4+) deposition (indirect effect). Fourth, NOx (NO + NO2) emissions from combustion processes and from fertilized soils lead to elevated NOy (NOx + other oxidized N) deposition, further affecting CO2 exchange. As most (semi-)natural terrestrial ecosystems and aquatic ecosystems are N limited, human-induced atmospheric N deposition usually increases net primary production (NPP) and thus stimulates C sequestration. NOx emissions, however, also induce tropospheric ozone (O3) formation, and elevated O3 concentrations can lead to a reduction of NPP and plant C sequestration. The impacts of human N fixation on soil CH4 exchange are insignificant compared to the impacts on N2O and CO2 exchange (emissions or uptake). Ignoring shorter-lived components and related feedbacks, the net impact of human N fixation on climate thus mainly depends on the magnitude of the cooling effect of CO2 uptake as compared to the magnitude of the warming effect of (direct and indirect) N2O emissions.
The estimated impact of human N fixation on N2O emission is 8.0 (7.0–9.0) Tg N2O-N yr−1, which is equal to 1.02 (0.89–1.15) Pg CO2-C equivalents (eq) yr−1. The estimated CO2 uptake due to N inputs to terrestrial, freshwater, and marine ecosystems equals −0.75 (−0.56 to −0.97) Pg CO2-C eq yr−1. At present, the impact of human N fixation on increased CO2 sequestration thus largely (on average by nearly 75%) compensates for the stimulating effect on N2O emissions. In the long term, however, effects on ecosystem CO2 sequestration are likely to diminish due to growth limitations by other nutrients such as phosphorus. Furthermore, N-induced O3 exposure reduces CO2 uptake, causing a net C loss of 0.14 (0.07–0.21) Pg CO2-C eq yr−1. Consequently, human N fixation causes an overall increase in net greenhouse gas emissions from global ecosystems, estimated at 0.41 (−0.01 to 0.80) Pg CO2-C eq yr−1. Even when considering all uncertainties, it is likely that human N inputs lead to a net increase in global greenhouse gas emissions.
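The net balance above can be reproduced with a short calculation from the central estimates. The sketch below assumes a 100-year global warming potential of about 298 for N2O (an IPCC AR5-era value, not stated in the passage itself):

```python
# Reproducing the net greenhouse-gas balance of human N fixation from
# the central estimates quoted above. The 100-year global warming
# potential of N2O (~298) is an assumed IPCC AR5 value.

GWP_N2O = 298            # warming of 1 kg N2O relative to 1 kg CO2 (100 yr)
N2O_PER_N = 44.0 / 28.0  # kg N2O per kg N2O-N (molecular vs. N mass)
C_PER_CO2 = 12.0 / 44.0  # kg C per kg CO2

def n2o_n_to_co2_c_eq(tg_n2o_n):
    """Convert Tg N2O-N/yr to Pg CO2-C equivalents/yr."""
    return tg_n2o_n * N2O_PER_N * GWP_N2O * C_PER_CO2 / 1000.0  # Tg -> Pg

n2o_warming = n2o_n_to_co2_c_eq(8.0)  # ~ +1.02 (warming)
co2_uptake = -0.75                    # N-stimulated C sequestration (cooling)
o3_c_loss = 0.14                      # O3-induced loss of C uptake (warming)

net = n2o_warming + co2_uptake + o3_c_loss
print(f"net effect: {net:.2f} Pg CO2-C eq/yr")  # ~ +0.41
```

Summing the three central terms recovers the article's net estimate of about 0.41 Pg CO2-C eq yr−1, a warming effect overall.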
These estimates are based on most recent science and modeling approaches with respect to: (i) N inputs to various ecosystems, including NH3 and NOx emission estimates and related atmospheric N (NH3 and NOx) deposition and O3 exposure; (ii) N2O emissions in response to N inputs; and (iii) carbon exchange in responses to N inputs (C–N response) and O3 exposure (C–O3 response), focusing on the global scale. Apart from presenting the current knowledge, this article also gives an overview of changes in the estimates of those fluxes and C–N response factors over time, including debates on C–N responses in literature, the uncertainties in the various estimates, and the potential for improving them.