

Geography and Chronology of the Transition to Agriculture  

Peter Bogucki

After millennia of hunting and gathering, prehistoric human societies around the world made the transition to food production using domesticated plants and animals. Several key areas for the initial domestication of plants and animals can be identified: southwestern Asia, Mesoamerica, China, Neotropical South America, eastern North America, Highland New Guinea, and sub-Saharan Africa. In the Old World, wheat, barley, millet, rice, sheep, goats, cattle, and pigs were the major founding crops and livestock, while in the New World, maize, squashes, beans, and many other seed and tuber plants were brought into cultivation. Although each area had its own distinct pathway to agriculture, each typically followed a standard sequence: resource management by hunter-gatherers, incipient cultivation (and livestock herding in some areas), domestication, and finally a full commitment to agriculture. Many theories to explain the transition to agriculture have been proposed. Early single-factor hypotheses have been largely discarded in favor of models drawn from human evolutionary biology that emphasize the interplay between humans and the species targeted for domestication. Although, within the long span of human history, the transition from hunting and gathering to farming over the last 10,000 years can be considered extraordinarily rapid, the process usually took decades, centuries, or even millennia when viewed from the perspective of the human actors involved. From these core areas, agricultural practices dispersed, both through their integration into the plant and animal economies of hunter-gatherer societies and through the spread of farming populations. The transition to agriculture had consequences on a global scale, leading to social complexity and, in many cases, urban societies that would be impossible to imagine without agriculture.


Statistical Scaling of Randomly Fluctuating Hierarchical Variables  

Shlomo P. Neuman, Monica Riva, Alberto Guadagnini, Martina Siena, and Chiara Recalcati

Environmental variables tend to fluctuate randomly and exhibit multiscale structures in space and time. Whereas random fluctuations arise from variations in environmental properties and phenomena, multiscale behavior implies that these properties and phenomena possess hierarchical structures. Understanding and quantifying such random, multiscale behavior is critical for the analysis of fluid flow as well as mass and energy transport in the environment. The multiscale nature of randomly fluctuating variables that characterize a hierarchical environment (or process) tends to be reflected in the way their increments vary in space (or time). Quite often such increments (a) fluctuate randomly in a highly irregular fashion; (b) possess symmetric, non-Gaussian frequency distributions characterized by heavy tails, which sometimes decay with separation distance or lag; (c) exhibit nonlinear power-law scaling of sample structure functions (statistical moments of absolute increments) in a midrange of lags, with breakdown in such scaling at small and large lags; (d) show extended power-law scaling (linear relations between log structure functions of successive orders) at all lags; (e) display nonlinear scaling of power-law exponent with order of sample structure function; and (f) reveal various degrees of anisotropy in these behaviors. Similar statistical scaling is known to characterize many earth, ecological, biological, physical, astrophysical, and financial variables. The literature has traditionally associated statistical scaling behaviors of the aforementioned kind with multifractals. This is so even though multifractal theory (a) focuses solely on statistical scaling of variable increments, unrelated to statistics of the variable itself, and (b) explains neither observed breakdown in power-law scaling at small and large lags nor extended power-law scaling of such increments. 
A novel generalized sub-Gaussian scaling model is introduced that does not suffer from these deficiencies, and some of its key aspects are illustrated on microscale surface measurements of a calcite crystal fragment undergoing a dissolution reaction upon contact with a fluid solution.
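The sample structure functions described above are straightforward to compute. The following sketch uses an illustrative synthetic signal (a Gaussian random walk with randomly scaled innovations, a simple sub-Gaussian-style scale mixture; not data from the article) to show the power-law scaling S_q(s) ~ s^ξ(q) of moments of absolute increments; the helper name and parameter choices are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic signal: a random walk whose Gaussian innovations are
# multiplied by a random scale, producing symmetric, heavier-tailed increments.
n = 100_000
scales = np.exp(rng.normal(0.0, 0.5, size=n))   # random scale factors (assumption)
signal = np.cumsum(scales * rng.normal(size=n))

def structure_function(y, lags, orders):
    """S_q(s) = <|y(x+s) - y(x)|^q>: statistical moments of absolute increments."""
    return np.array([[np.mean(np.abs(y[s:] - y[:-s]) ** q) for s in lags]
                     for q in orders])

lags = np.unique(np.logspace(0, 3, 20).astype(int))
orders = [1, 2, 3]
S = structure_function(signal, lags, orders)

# Power-law scaling S_q(s) ~ s^xi(q): slope of log S_q versus log s.
# (The abstract notes that for real hierarchical data this scaling holds in a
# midrange of lags and breaks down at small and large lags.)
xi = [np.polyfit(np.log(lags), np.log(Sq), 1)[0] for Sq in S]
print("scaling exponents xi(q) for q = 1, 2, 3:", np.round(xi, 2))
```

For this simple self-similar signal ξ(q) grows roughly linearly in q; multifractal and generalized sub-Gaussian behavior would instead show the nonlinear growth of ξ(q) with q described in the abstract.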


Global and Regional Economic Damages from Climate Change  

Anil Markandya, Elena Paglialunga, Valeria Costantini, and Giorgia Sforna

Economic damage from climate change includes several aspects that need to be considered at the global and regional levels to achieve an equitable common solution to global warming. The economic literature reviewed here analyzes this issue from three general perspectives. First, the analytical estimation of the linkages between damages in monetary terms and climate variables, such as projections of temperature, precipitation, and frequency of extreme events, is rapidly evolving. Damage functions are included in complex economic models in order to calculate the economic impact of climate change on economic output and growth, thus informing the debate on the amount of resources that should be devoted to reducing greenhouse gas (GHG) emissions and limiting climate damages. The choice of geographical aggregation in this respect is a crucial aspect to be considered if policy advice is to be formulated on the basis of model results. The higher the level of regional detail, the more reliable the results are in terms of the geographical distribution of economic damages. Second, the precise estimation of the costs associated with different damages caused by climate change is attracting growing interest. Climate costs present a wide range of heterogeneity for several reasons, such as the different formulations of the damage function adopted, the modeling design of the economic impact, the temporal horizon considered, and the differentiation across sectors. Two broad categories of analysis are relevant. The first refers to the choice of the sectoral dimension under investigation, where some studies cover multiple sectors and their interactions, while others analyze specific sectors in depth. The second classification criterion refers to the choice of the economic aspects estimated, where one strand of literature analyzes only market-based costs, while other analyses also include non-market (or intangible) damages.
The most common sectors investigated are agriculture, forestry, health, energy, coastal zones and sea level rise, extreme events, tourism, ecosystems, industry, air quality, and catastrophic damages. Most studies consider market-based costs, while non-market impacts need to be better detailed in economic models. Third, the computation of a single number through the analytical framework of the social cost of carbon (SCC) represents a key aspect of the process of translating complex results in order to properly inform the political debate. The SCC represents the marginal global damage cost of carbon emissions and can also be interpreted as the economic value of damages avoided per unit of GHG emission reduction. Several uncertainties still influence the robustness of the SCC analytical framework, such as the choice of the discount rate, which strongly influences whether the SCC supports mitigation action in the short term. Although the debate on the economic damages arising from climate change is flourishing, several aspects still need to be investigated in order to build a common consensus within the scientific community as a necessary condition to properly inform the political debate and to facilitate the achievement of a long-term equitable global climate agreement.


Global Biogeochemical Cycling  

Fred Mackenzie and Abraham Lerman

The tendency to represent natural processes as cycles—from Latin cyclus and Greek κυκλος—is undoubtedly rooted in human observations of repeating or periodic phenomena. The oldest notions of the water cycle, as water moving from the earth to the air and back again, are mentioned in the Old Testament and by Greek philosophers, from the 900s to 300s bce. The life of plants, deriving their constituents from the soil and air and returning them thereto, is a classic example of a cycling or recycling process. For chemical elements, the concept of cycling developed gradually from about 1875 to 1950, as knowledge of the parts of the Earth—its compartments or reservoirs—progressed and the flow of material between them became better understood. The main “bioessential” chemical elements are carbon (C), nitrogen (N), phosphorus (P), oxygen (O), and hydrogen (H). These are represented in the mean composition of aquatic photosynthesizing organisms as the atomic abundance ratio C:N:P = 106:16:1 or as (CH2O)106(NH3)16(H3PO4). In land plants, estimates of mean composition vary from C:N:P = 510:4:1 to 2057:17:1. On land, photosynthesizing organisms are thus much more efficient than those in water, incorporating more carbon atoms for each atom of phosphorus. The bioessential elements are coupled by living organisms in the exogenic cycle, the processes at and near the Earth’s surface, and in the endogenic cycle, the processes that include subduction into the Earth’s interior and return to the surface. The main reservoirs of the bioessential elements are very different: although oxygen is the most abundant element in the Earth’s crust, most of it is locked in silicate minerals as SiO2, and the forms available to biogeochemical cycling are the oxygen in water and, as a product of photosynthesis, the gas O2 in the atmosphere. Carbon resides in the atmospheric reservoir of CO2 gas and dissolved in ocean and fresh waters.
The main nitrogen reservoir is molecular N2 in the atmosphere, along with oxidized and reduced nitrogen compounds in waters. Phosphorus occurs in the oxidized form of the phosphate ion in crustal minerals, from which it is leached into water. The natural cycle of the bioessential elements has been greatly perturbed since the late 1700s by human industrial and agricultural activities, the period known as the Anthropocene epoch. The increase in CO2, CH4, and NOx emissions to the atmosphere from fossil-fuel burning and land-use changes has rapidly and strongly modified the chemical composition of the atmosphere. This change has affected the balance of solar radiation absorbed by the atmosphere—generally known as “climate change”—and the acidity of surface-ocean waters, referred to as “ocean acidification.” CO2 in water is a weak acid that dissolves carbonate minerals, biogenically and inorganically formed in the ocean, and it thus modifies the chemical composition of ocean water. Overall, a major anthropogenic perturbation of the biogeochemical cycles has been an increase in the atmospheric concentration of CO2 that outpaces its removal from the atmosphere by plants, dissolution in the ocean, and uptake in mineral weathering.
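The land-versus-water efficiency comparison above comes down to carbon-to-phosphorus ratios; a minimal check, using only the C:N:P values quoted in this abstract:

```python
# Carbon atoms incorporated per phosphorus atom, from the C:N:P ratios above.
aquatic = 106 / 1                          # Redfield ratio, aquatic photosynthesizers
land_low, land_high = 510 / 1, 2057 / 1    # reported range for land plants

print(f"aquatic C:P = {aquatic:.0f}")
print(f"land-plant C:P = {land_low:.0f} to {land_high:.0f}")
print(f"land plants fix roughly {land_low/aquatic:.1f}x to "
      f"{land_high/aquatic:.1f}x more C per atom of P")
```

The roughly 5- to 19-fold higher C:P ratio is what the abstract means by land plants being "much more efficient" per atom of phosphorus.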


Global Climate Change and the Reallocation of Water  

Rhett B. Larson

Increased water variability is one of the most pressing challenges presented by global climate change. A warmer atmosphere will hold more water and will result in more frequent and more intense El Niño events. Domestic and international water rights regimes must adapt to the more extreme drought and flood cycles resulting from these phenomena. Laws that allocate rights to water, both at the domestic level between water users and at the international level between nations sharing transboundary water sources, are frequently rigid governance systems ill-suited to adapt to a changing climate. Often, water laws allocate a fixed quantity of water for a certain type of use. At the domestic level, such rights may be considered legally protected private property rights or guaranteed human rights. At the international level, such water allocation regimes may also be dictated by human rights, as well as concerns for national sovereignty. These legal considerations may ossify water governance and inhibit water managers’ abilities to alter water allocations in response to changing water supplies. To respond to water variability arising from climate change, such laws must be reformed or reinterpreted to enhance their adaptive capacity. Such adaptation should consider both intra-generational equity and inter-generational equity. One potential approach to reinterpreting such water rights regimes is a stronger emphasis on the public trust doctrine. In many nations, water is a public trust resource, owned by the state and held in trust for the benefit of all citizens. Rights to water under this doctrine are merely usufructuary—a right to make a limited use of a specified quantity of water subject to governmental approval. The recognition and enforcement of the fiduciary obligation of water governance institutions to equitably manage the resource, and characterization of water rights as usufructuary, could introduce needed adaptive capacity into domestic water allocation laws. 
The public trust doctrine has been influential even at the international level, and that influence could be enhanced by recognizing a comparable fiduciary obligation for inter-jurisdictional institutions governing international transboundary waters. Legal reforms to facilitate water markets may also introduce greater adaptive capacity into otherwise rigid water allocation regimes. Water markets are frequently inefficient for several reasons, including lack of clarity in water rights, externalities inherent in a resource that ignores political boundaries, high transaction costs arising from differing economic and cultural valuations of water, and limited competition when water utilities are frequently natural monopolies. Legal reforms that clarify property rights in water, specify the minimum quantity, quality, and affordability of water to meet basic human needs and environmental flows, and mandate participatory and transparent water pricing and contracting could allow greater flexibility in water allocations through more efficient and equitable water markets.


The Global Groundwater Revolution  

Jac van der Gun

Human behavior in relation to groundwater remained relatively unchanged from ancient times until the early 20th century. Intercepting water from springs or exploiting shallow aquifers by means of wells or qanats was common practice worldwide, but only modest quantities of groundwater were abstracted. In general, the resource was taken for granted in the absence of any knowledge regarding groundwater systems and their vulnerability. During the 20th century, however, an unprecedented change started spreading globally—a change so drastic that it could be called the Global Groundwater Revolution. It did not surface simultaneously everywhere but rather reached different regions as waves of change, with timing that varied depending on local conditions. This Global Groundwater Revolution has three main components: (1) rapid intensification of the exploitation of groundwater, (2) fundamentally changing views on groundwater, and (3) the emergence of integrated groundwater management and governance. These three components are mostly interdependent, although their emergence and development tend to be somewhat asynchronous. The Global Groundwater Revolution marks a radical historical change in the relation between human society and groundwater. It has taken the benefits produced by groundwater to an unprecedented level, but their sustainability is assured only if there is good groundwater governance.


Global-Scale Impact of Human Nitrogen Fixation on Greenhouse Gas Emissions  

Wim De Vries, Enzai Du, Klaus Butterbach-Bahl, Lena Schulte-Uebbing, and Frank Dentener

Human activities have rapidly accelerated global nitrogen (N) cycling since the late 19th century. This acceleration has manifold impacts on ecosystem N and carbon (C) cycles, and thus on emissions of the greenhouse gases nitrous oxide (N2O), carbon dioxide (CO2), and methane (CH4), which contribute to climate change. First, elevated N use in agriculture leads to increased direct N2O emissions. Second, it leads to emissions of ammonia (NH3), nitric oxide (NO), and nitrogen dioxide (NO2) and leaching of nitrate (NO3−), which cause indirect N2O emissions from soils and waterbodies. Third, N use in agriculture may also cause changes in CO2 exchange (emission or uptake) in agricultural soils due to N fertilization (direct effect) and in non-agricultural soils due to atmospheric NHx (NH3+NH4) deposition (indirect effect). Fourth, NOx (NO+NO2) emissions from combustion processes and from fertilized soils lead to elevated NOy (NOx+ other oxidized N) deposition, further affecting CO2 exchange. As most (semi-) natural terrestrial ecosystems and aquatic ecosystems are N limited, human-induced atmospheric N deposition usually increases net primary production (NPP) and thus stimulates C sequestration. NOx emissions, however, also induce tropospheric ozone (O3) formation, and elevated O3 concentrations can lead to a reduction of NPP and plant C sequestration. The impacts of human N fixation on soil CH4 exchange are insignificant compared to the impacts on N2O and CO2 exchange (emissions or uptake). Ignoring shorter-lived components and related feedbacks, the net impact of human N fixation on climate thus mainly depends on the magnitude of the cooling effect of CO2 uptake as compared to the magnitude of the warming effect of (direct and indirect) N2O emissions. The estimated impact of human N fixation on N2O emission is 8.0 (7.0–9.0) Tg N2O-N yr−1, which equals 1.02 (0.89–1.15) Pg CO2-C equivalents (eq) yr−1.
The estimated CO2 uptake due to N inputs to terrestrial, freshwater, and marine ecosystems equals −0.75 (−0.56 to −0.97) Pg CO2-C eq yr−1. At present, the impact of human N fixation on increased CO2 sequestration thus largely (on average near 75%) compensates for the stimulating effect on N2O emissions. In the long term, however, effects on ecosystem CO2 sequestration are likely to diminish due to growth limitations by other nutrients such as phosphorus. Furthermore, N-induced O3 exposure reduces CO2 uptake, causing a net C loss of 0.14 (0.07–0.21) Pg CO2-C eq yr−1. Consequently, human N fixation causes an overall increase in net greenhouse gas emissions from global ecosystems, estimated at 0.41 (−0.01 to 0.80) Pg CO2-C eq yr−1. Even when considering all uncertainties, it is likely that human N inputs lead to a net increase in global greenhouse gas emissions. These estimates are based on the most recent science and modeling approaches with respect to: (i) N inputs to various ecosystems, including NH3 and NOx emission estimates and related atmospheric N (NH3 and NOx) deposition and O3 exposure; (ii) N2O emissions in response to N inputs; and (iii) carbon exchange in response to N inputs (C–N response) and O3 exposure (C–O3 response), focusing on the global scale. Apart from presenting the current knowledge, this article also gives an overview of changes in the estimates of these fluxes and C–N response factors over time, including debates on C–N responses in the literature, the uncertainties in the various estimates, and the potential for improving them.
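The budget arithmetic behind these conclusions can be checked directly from the central estimates quoted in this abstract (all in Pg CO2-C eq yr−1):

```python
# Central estimates from the abstract, in Pg CO2-C equivalents per year.
n2o_warming    = 1.02    # direct + indirect N2O emissions (warming)
co2_uptake     = -0.75   # N-stimulated CO2 sequestration (cooling)
o3_carbon_loss = 0.14    # reduced CO2 uptake from N-induced O3 exposure (warming)

net = n2o_warming + co2_uptake + o3_carbon_loss
compensation = -co2_uptake / n2o_warming  # fraction of N2O warming offset by CO2 uptake

print(f"net effect: {net:+.2f} Pg CO2-C eq/yr")   # abstract reports 0.41
print(f"CO2 uptake offsets {compensation:.0%} of the N2O warming")  # abstract: near 75%
```

Note that the uncertainty range on the net effect (−0.01 to 0.80) is not the sum of the component ranges; it reflects the correlated uncertainty analysis in the article itself.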


Green Infrastructure for Stormwater Runoff Control in China  

Haifeng Jia and Dingkun Yin

In the early 21st century, high-intensity human activities have led to the rapid development and expansion of urban areas in many countries, with several adverse impacts upon the water environment. In particular, control of urban runoff quantity and quality has emerged as a key concern for municipal officials. China, as one of the countries undergoing rapid urbanization, faces many challenges in this process. Since the year 2000, China has been promoting the protection of its urban water environment through ecological construction. The use of green infrastructure (GI) to solve urban stormwater issues has become a priority of urban green and sustainable development. The Sponge City (SPC) approach was proposed to emphasize the comprehensive construction of multi-objective stormwater drainage and flood mitigation systems, and to consider water ecology, public safety, environmental protection, and preservation of water resources. The goal of GI is to achieve stormwater runoff quality enhancement and pollution control, which is similar to the sustainable development concept of SPC. According to its major functions, GI can be divided into infiltration and retention GI, regulation GI, transmission GI, and pollution interception and treatment GI. GI should be planned and designed according to the long-term runoff volume capture ratio, which is determined by the annual rainfall depth and the level of catchment development at the project site. Different structural-layer materials and spatial layouts of GI have a significant impact on its performance. Upon the completion of a project, long-term monitoring is recommended for evaluating its effectiveness. To ensure the continuing efficiency of GI, it is necessary to carry out regular maintenance. Different types of GI demand different maintenance methods and frequencies. Appropriate maintenance methods can effectively extend the service life of GI.


Green Water  

Garrison Sposito

Precipitation falling onto the land surface in terrestrial ecosystems is transformed into either “green water” or “blue water.” Green water is the portion stored in soil and potentially available for uptake by plants, whereas blue water either runs off into streams and rivers or percolates below the rooting zone into a groundwater aquifer. The principal flow of green water is by evapotranspiration from soil into the atmosphere, whereas blue water moves through the channel system at the land surface or through the pore space of an aquifer. The flow of green water accounts for about two-thirds of the global flow of all water, green or blue; thus green water flow, most of which is by transpiration, dominates blue water flow. In fact, the global flow of green water by transpiration equals the flow of all the rivers on Earth into the oceans. At the global scale, evapotranspiration is measured using a combination of ground-, satellite-, and model-based methods implemented over annual or monthly time periods. Data are examined for self-consistency and compliance with water- and energy-balance constraints. At the catchment scale, average annual evapotranspiration data also must conform to water and energy balance. Application of these two constraints, plus the assumption that evapotranspiration is a homogeneous function of average annual precipitation and the average annual net radiative heat flux from the atmosphere to the land surface, leads to the Budyko model of catchment evapotranspiration. The functional form of this model strongly influences the interrelationship among climate, soil, and vegetation as represented in parametric catchment modeling, a very active area of current research in ecohydrology.
Green water flow leading to transpiration is a complex process, firstly because of the small spatial scale involved, which requires indirect visualization techniques, and secondly because the near-root soil environment, the rhizosphere, is the habitat of the soil microbiome, an extraordinarily diverse collection of microbial organisms that influence water uptake through their symbiotic relationship with plant roots. In particular, microbial polysaccharides endow rhizosphere soil with properties that enhance water uptake by plants under drying stress. These properties differ substantially from those of non-rhizosphere soil and are difficult to quantify in soil water flow models. Nonetheless, current modeling efforts based on the Richards equation for water flow in an unsaturated soil can successfully capture the essential features of green water flow in the rhizosphere, as observed using visualization techniques. There is also the yet-unsolved problem of upscaling rhizosphere properties from the small scale typically observed using visualization techniques to that of the rooting zone, where the Richards equation applies; then upscaling from the rooting zone to the catchment scale, where the Budyko model, based only on water- and energy-balance laws, applies but still lacks a clear connection to current soil evaporation models; and finally, upscaling from the catchment to the global scale. This transitioning across a very broad range of spatial scales, millimeters to kilometers, remains one of the outstanding grand challenges in green water ecohydrology.
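The Budyko model referred to above expresses average annual evapotranspiration E as a fraction of precipitation P, as a function of the aridity index φ = PET/P (potential evapotranspiration over precipitation). One classical functional form, Budyko's original curve, is sketched below; this particular formula is a standard choice from the literature, not one stated in this abstract:

```python
import math

def budyko_ratio(phi):
    """Budyko's curve: E/P as a function of the aridity index phi = PET/P."""
    return math.sqrt(phi * math.tanh(1.0 / phi) * (1.0 - math.exp(-phi)))

# The curve embodies the two balance constraints mentioned in the abstract:
#   energy limit: E/P -> phi as phi -> 0  (evapotranspiration capped by net radiation)
#   water limit:  E/P -> 1  as phi -> inf (evapotranspiration capped by precipitation)
for phi in (0.2, 0.5, 1.0, 2.0, 5.0):
    print(f"phi = {phi:>4}: E/P = {budyko_ratio(phi):.3f}")
```

Parametric catchment models generalize this shape with a catchment-specific parameter, which is where the climate-soil-vegetation interrelationship discussed above enters.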


Groundwater Development Paths in the U.S. High Plains  

Renata Rimšaitė and Nicholas Brozović

The High Plains Aquifer is the largest aquifer in the United States and the major source of groundwater withdrawals in the region. Although regionally abundant, groundwater availability for agriculture and other uses is not uniform across the area. The three states comprising the most significant portion of the aquifer have distinct climate and hydrologic characteristics, water law systems, and institutional groundwater governance, leading to different concerns about water policy issues across the area. The northern, largest, and most saturated part of the High Plains Aquifer is located under Nebraska. The state has the largest irrigated area in the United States, most of which is groundwater irrigated. Nebraska is home to the largest companies in the center pivot irrigation industry. Center pivot technology has played a fundamental role in expanding groundwater-fed irrigation. Nebraska is not free from groundwater depletion issues, but these issues are more important in the central and south-central parts of the aquifer underlying large, primarily agricultural, lands of Kansas and Texas. Natural aquifer recharge is much lower in the south-central parts of the region, so large groundwater extractions there have caused more significant water-level declines than in Nebraska. In the United States, the greatest portion of water quantity management regulatory oversight is left to individual states and local government agencies. Each of the three states has a unique legal system, which strongly influences the framework of groundwater management locally. In Nebraska, groundwater is governed following two doctrines, correlative and reasonable use, which, in times of water shortage, lead to a proportional reduction of everyone’s allocation. Kansas uses the prior appropriation doctrine to manage groundwater, which applies the seniority principle when there is scarcity in water availability, making junior water rights holders bear the greatest risk.
The absolute ownership doctrine is used to govern groundwater in Texas, which allows landowners to drill wells on their property and extract as much water as needed. Institutional groundwater governance in Nebraska is performed by the system of 23 locally elected Natural Resources Districts having full regulatory power to manage the state’s groundwater. The local governments use a variety of regulatory and incentive-based groundwater management tools to achieve local groundwater management goals. In Kansas, the Chief Engineer in the Kansas Department of Agriculture is in charge of water administration for the state. The Kansas legislature established five Groundwater Management Districts to address groundwater depletion issues, which can make policy recommendations but do not have the power to regulate. Groundwater Conservation Districts were created in Texas to provide protection from uncontrolled water mining in the state. The districts gained more power to regulate and enforce rules over time; however, significant groundwater depletion issues remain. Multiple lessons have been learned across the region since the beginning of groundwater development. Some of these could be applied in other areas seeking to address negative consequences of groundwater use. Forward-looking perspectives about groundwater management in the region vary from strong government-led solutions in Nebraska to various producer-initiated innovative approaches in Kansas and Texas.


Groundwater Models  

Timothy M. Weigand, Matthew W. Farthing, and Casey T. Miller

Groundwater modeling is widely relied upon by environmental scientists and engineers to advance understanding, make predictions, and design solutions to water resource problems of importance to society. Groundwater models are tools used to approximate subsurface behavior, including the movement of water, the chemical composition of the phases present, and the temperature distribution. Because a model is a simplification of a real-world system, approximations and uncertainties are inherent to the modeling process. Special consideration must therefore be given to the role of uncertainty quantification, as essentially all groundwater systems are stochastic in nature.
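A minimal sketch of the uncertainty-quantification idea: propagate an uncertain subsurface property through a simple model by Monte Carlo sampling. The 1D steady Darcy setup and the lognormal conductivity distribution below are illustrative assumptions for this sketch, not the article's method:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative 1D steady Darcy flow: specific discharge q = K * (head drop) / length.
# Hydraulic conductivity K is uncertain; a lognormal distribution is a common
# modeling assumption for spatially variable subsurface properties.
L = 100.0    # flow path length [m]
dh = 5.0     # head drop across the domain [m]
K = rng.lognormal(mean=np.log(1e-4), sigma=1.0, size=10_000)  # [m/s], 10k draws

q = K * dh / L   # one predicted discharge per Monte Carlo realization [m/s]

# Input uncertainty propagates to a distribution of predictions, summarized
# here by the median and a 90% interval rather than a single number.
lo, med, hi = np.percentile(q, [5, 50, 95])
print(f"median q = {med:.2e} m/s, 90% interval = [{lo:.2e}, {hi:.2e}] m/s")
```

Reporting an interval instead of a point value is the essential output of uncertainty quantification; real groundwater models replace the one-line Darcy formula with spatially distributed flow and transport solvers, but the sampling logic is the same.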


Hedging and Financial Tools for Water Management  

C. Dionisio Pérez-Blanco

The management of risky water episodes entails a comprehensive set of instruments that can be broadly divided into two groups: damage prevention and damage management. Damage prevention instruments aim at negating or minimizing the economic damage of water scarcity and water extremes, and they include hard and soft engineering, information and awareness campaigns, and regulations and economic incentives. Damage management instruments aim at compensating damages and facilitating recovery, and they include tort law and hedging and financial tools. The growing interconnections and cascading uncertainties across coupled human and water systems make it increasingly challenging to comprehensively predict and anticipate expected damages from water scarcity and extremes, which is giving higher prominence to the management of damages, notably through hedging and financial tools. Hedging and financial tools are a risk transfer mechanism by which a potential future damage is transferred from one party to another, typically in exchange for pecuniary compensation (a risk premium), although they can also be provided freely (e.g., state aid). Hedging and financial instruments are varied and include futures, options, insurance, self-capitalization, reinsurance, private actions such as charities or nongovernmental organizations, state aid, and solidarity funds. The first section of this document discusses the political context for disaster risk reduction efforts at an international level and provides key definitions. The second section presents a taxonomy for hedging and financial instruments; assesses their strengths and weaknesses, performance, and market penetration levels; and critically reviews reform propositions in the literature toward increasing their performance and adoption. The third section discusses the interconnections between hedging and financial tools and damage prevention tools, as well as how their design can enhance each other’s performance.
The last section discusses barriers and enablers for the adoption of hedging and financial tools.


Historical Development of the Global Water Cycle as a Science Framework  

Richard G. Lawford and Sushel Unninayar

The global water cycle concept has its roots in the ancient understanding of nature. Indeed, the Greeks and Hebrews documented some of the most important hydrological processes. Furthermore, Africa, Sri Lanka, and China all have archaeological evidence of the sophisticated water management that took place thousands of years ago. During the 20th century, a broader perspective was taken, and the hydrological cycle was used to describe the terrestrial and freshwater component of the global water cycle. Data analysis systems and modeling protocols were developed to provide the information needed to efficiently manage water resources. These advances were helpful in defining the water in the soil and the movement of water between stores of water over land surfaces. Atmospheric inputs to these balances were also monitored, but the measurements were much more reliable over countries with dense networks of precipitation gauges and radiosonde observations. By the 1960s, early satellites began to provide images that gave a new perception of Earth processes, including a more complete realization that water cycle components and processes were continuous in space and could not be fully understood through analyses partitioned by geopolitical or topographical boundaries. In the 1970s, satellites delivered quantitative radiometric measurements that allowed for the estimation of a number of variables such as precipitation and soil moisture. In the United States, by the late 1970s, plans were made to launch the Earth System Science program, led by the National Aeronautics and Space Administration (NASA). The water component of this program integrated terrestrial and atmospheric components and provided linkages with the oceanic component so that a truly global perspective of the water cycle could be developed. At the same time, the role of regional and local hydrological processes within the integrated “global water cycle” began to be understood.
Benefits of this approach were immediate. The connections between the water and energy cycles gave rise to the Global Energy and Water Cycle Experiment (GEWEX) as part of the World Climate Research Programme (WCRP). This integrated approach has improved our understanding of the coupled global water/energy system, leading to improved prediction models and more accurate assessments of climate variability and change. The global water cycle has also provided incentives and a framework for further improvements in the measurement of variables such as soil moisture, evapotranspiration, and precipitation. In the past two decades, groundwater has been added to the suite of water cycle variables that can be measured from space. New studies are testing innovative space-based technologies for high-resolution surface water level measurements. While many benefits have followed from the application of the global water cycle concept, its potential is still being developed. Increasingly, the global water cycle is assisting in understanding broad linkages with other global biogeochemical cycles, such as the nitrogen and carbon cycles. Applications of this concept to emerging program priorities, including the Sustainable Development Goals (SDGs) and the Water-Energy-Food (W-E-F) Nexus, are also yielding societal benefits.


A Historical Perspective of Unconventional Oil and Gas Extraction and Public Health  

Erin N. Haynes, Lisa McKenzie, Stephanie A. Malin, and John W. Cherrie

Technological advances in directional well drilling and hydraulic fracturing have enabled extraction of oil and gas from once unobtainable geological formations. These unconventional oil and gas extraction (UOGE) techniques have positioned the United States as the fastest-growing oil and gas producer in the world. The onset of UOGE as a viable subsurface energy extraction technology has also led to the rise of public concern about its potential health impacts on workers and communities, both in the United States and in other countries where the technology is being developed. Herein we review the national and global impact of UOGE from a historical perspective of occupational and public health. Also discussed are the sociological interactions between scientific knowledge, social media, and citizen action groups, which have brought wider attention to the potential public health implications of UOGE.


History of Agriculture in the United States  

Pamela Riney-Kehrberg

Agriculture is at the very center of the human enterprise; its trappings are in evidence all around, yet the agricultural past is an exceptionally distant place from modern America. While the majority of Americans once raised a significant portion of their own food, that ceased to be the case at the beginning of the 20th century. Only a very small portion of the American population today has a personal connection to agriculture. People still must eat, but the process by which food arrives on their plates is less evident than ever. The evolution of that process, with all of its many participants, is the stuff of agricultural history. The task of the agricultural historian is to make that past evident, and usable, for an audience that is divorced from the production of food. People need to know where their food comes from, past and present, and what has gone into the creation of the modern food system.


History of Ecological Design  

Lydia Kallipoliti

The term ecological design was coined in a 1996 book by Sim van der Ryn and Stewart Cowan, in which the authors argued for a seamless integration of human activities with natural processes to minimize destructive environmental impact. Following their cautionary statements, William McDonough and Michael Braungart published their 2002 manifesto book Cradle to Cradle, which proposed a circular political economy to replace the linear logic of “cradle to grave.” These books have been foundational in architecture and design discussions on sustainability, and in establishing the technical dimension, as well as the logic, of efficiency, optimization, and evolutionary competition in environmental debates. Cradle to Cradle evolved into a production model implemented by a number of companies, organizations, and governments around the world, and it has also become a registered trademark and a product certification. These recently popularized developments imply a very short history for the growing field of ecological design. However, its roots hark back as far as Ernst Haeckel’s definition of the field of ecology in 1866 as an integral link between living organisms and their surroundings (Generelle Morphologie der Organismen, 1866), and Henry David Thoreau’s famous 1854 manual for self-reliance and living in proximity to natural surroundings, in the cabin that he built at Walden Pond, Massachusetts (Walden; or, Life in the Woods, 1854). Since World War II, contrary to the position of ecological design as a call to fit harmoniously within the natural world, there has been a growing interest in a form of synthetic naturalism (Closed Worlds: The Rise and Fall of Dirty Physiology, 2015), where the laws of nature and metabolism are displaced from the domain of wilderness to the domain of cities, buildings, and objects.
With the rising awareness of what John McHale called disturbances in the planetary reservoir (The Future of the Future, 1969), the field of ecological design has come to signify not only the integration of the designed object or space into the natural world, but also the reproduction of the natural world in design principles and tools through technological mediation. This idea of architecture and design producing nature paralleled what Buckminster Fuller, John McHale, and Ian McHarg, among others, referred to as world planning; that is, understanding ecological design as the design of the planet itself as much as the design of an object, building, or territory. Unlike van der Ryn and Cowan’s argument, which focused on a deep appreciation for nature’s equilibrium, ecological design might commence with the synthetic replication of natural systems. These conflicting positions reflect only a small fraction of the many terms used to describe the field of ecological design, including green, sustainable, alternative, resilient, self-sufficient, organic, and biotechnical. This article argues that ecological design starts with the reconceptualization of the world as a complex system of flows rather than a discrete compilation of objects, which visual artist and theorist György Kepes described as one of the fundamental reorientations of the 20th century (Art and Ecological Consciousness, 1972).


The History of Synoptic Meteorology in the Age of Numerical Weather Forecasting  

Kristine C. Harper

Despite some early attempts in the 19th century, national weather services did not regularly create forecasts for public consumption until the early 20th century, and many of those were based on a handful of surface observations of dubious quality. With the invention of the balloon-borne radiosonde in the 1930s, upper-air observations became more common, and knowledge of upper-level processes was melded into forecasting practice. World War II brought its own challenges and opportunities, expanding the number of trained meteorologists worldwide, establishing many new observing stations in tropical and high-latitude locations, and opening the possibility of using radar to identify short-range severe weather. But the big change was the development of digital electronic computers, and with them the opportunity to calculate the weather. The first results were marginal at best, but international teams in the United States and Sweden persisted, and by the late 1950s, midatmospheric prognosis charts were being transmitted to forecast offices, which would prepare the final local forecasts. Unfortunately for the synoptic forecasters in the field offices, the new objective numerical weather prediction (NWP) products were not comparable to the old subjective forecast charts that they had used for years. The resulting push and pull between the atmospheric modelers and the synoptic meteorologists ultimately changed both groups: the atmospheric modelers used forecaster feedback to upgrade the models, and the synoptic meteorologists learned to use the objective forecasts. The anticipated improvements in weather forecasting, however, did not follow immediately. As the decades passed, computing power increased, and the introduction of satellites with multiple specialized sensors, purpose-built weather radar, and other remote sensing devices increased the availability of ground and upper-air data.
As a result, more variables and the physics that defined them were added to NWP models, and the resulting products changed the way synoptic meteorologists made their forecasts, even if they did not change their feel for the atmosphere. Those changes continued into the 21st century, fueling demand from multiple interest groups for specialized forecasts and the public’s desire for accurate, up-to-the-minute weather forecasts extending up to 2 weeks into the future.


History and Assessment of the Intergovernmental Platform on Biodiversity and Ecosystem Services  

Céline Granjou and Isabelle Arpin

The recent implementation of the IPBES is a major cornerstone in the transformation of international environmental governance in the early 21st century. Often presented as “the IPCC (Intergovernmental Panel on Climate Change) for biodiversity,” the IPBES aims to produce regular expert assessments of the state and evolution of biodiversity and ecosystems at the local, regional, and global levels. Its creation was promoted in the 1990s by biodiversity scientists and NGOs who increasingly came to view the failure to achieve effective conservation of nature as a consequence of the gap between science and policy, rather than of a lack of knowledge. The new institution embodies an approach to nature and nature conservation that results from the progressive evolution of international environmental governance, marked by the notion of ecosystem services (i.e., the idea that nature provides benefits to people and that nature conservation and human development should be thought of as mutually constitutive). The creation of the IPBES was entrusted to the United Nations Environment Programme (UNEP). Social studies of the environment have accounted for the genesis and organization of the IPBES and paid special attention to the strong emphasis that IPBES participants put on principles of openness and inclusivity and on the need to consider scientific knowledge and other forms of knowledge (e.g., traditional ecological knowledge) on an equal footing. Overall, the IPBES can be considered an innovative platform characterized by organizations and practices that foster inclusiveness and openness toward academic science and indigenous knowledge, as well as toward diverse values and visions of nature and its relationship to society. However, the extent to which it has succeeded in putting different biodiversity values and knowledge on an equal footing in practice has varied and is assessed differently in the literature.


History of Wildlife Tracking Technologies  

Kristoffer Whitney

Technologies for tracking wildlife in a systematic way by scientists and other naturalists have their origins in the mid-19th century. Tagging and banding systems for fish and birds are exemplary of this: both were used by late 19th- and early 20th-century biologists to gather data on the populations and migrations of a wide variety of species considered commercially useful or scientifically interesting. These tracking systems were deployed by networks of professional and amateur naturalists working with a number of institutions integral to natural history work at the time: government agencies, birding and hunting groups, zoos, museums, and universities. By the mid- to late 20th century, wildlife tracking had expanded to include a wider array of species for a number of reasons. Technologically, electronic surveillance equipment, from early radio telemetry to modern satellite tracking, allowed more animals to be tracked in ever more precise ways. Culturally and politically, the environmental movement and endangered species programs brought more attention to the plight of nongame and non-commercially valuable species. In the process, traditional biological disciplines were reshaped, and new subfields such as movement ecology and acoustic ecology emerged. And although the wealth of knowledge generated about wildlife in the past century and a half may prove to be a key component of environmental conservation in the face of climate change and biodiversity loss, a number of ethical issues emerging from the history of wildlife tracking technologies remain to be addressed as well.


Household Air Pollution in Low and Middle Income Countries  

Caroline A. Ochieng, Cathryn Tonne, Sotiris Vardoulakis, and Jan Semenza

Household air pollution from use of solid fuels (biomass fuels and coal) is a major problem in low and middle income countries, where 90% of the population relies on these fuels as the primary source of domestic energy. Use of solid fuels has multiple impacts on individuals and households and on the local and global environment. For individuals, the impact on health can be considerable, as household air pollution from solid fuel use has been associated with acute lower respiratory infections, chronic obstructive pulmonary disease, lung cancer, and other illnesses. Household-level impacts include the work, time, and high opportunity costs involved in biomass fuel collection and processing. Harvesting and burning biomass fuels affects local environments by contributing to deforestation and outdoor air pollution. At a global level, inefficient burning of solid fuels contributes to climate change. Improved biomass cookstoves have long been considered the most feasible immediate intervention in resource-poor settings. Their ability to reduce exposure to household air pollution to levels that meet health standards is, however, questionable. In addition, adoption of improved cookstoves has been low, and there is limited evidence on how the barriers to adoption and use can be overcome. Nevertheless, the issue of household air pollution in low and middle income countries has gained considerable attention in recent years, with a range of international initiatives in place to address it. These initiatives could enable a transition from biomass to cleaner fuels, but such a transition also requires an enabling policy environment, especially at the national level, and new modes of financing technology delivery. More research is also needed to guide policy and interventions, especially on exposure-response relationships with various health outcomes and on how to overcome poverty and other barriers to a wide-scale transition from biomass fuels to cleaner forms of energy.