Peter J. Schubert
Renewable energy was used exclusively by the first humans and is likely to be the predominant source for future humans. Between these times the use of extracted resources such as coal, oil, and natural gas has created an explosion of population and affluence, but also of pollution and dependency. This article explores the advent of energy sources in a broad social context including economics, finance, and policy. The means of producing renewable energy are described in an accessible way, highlighting the broad range of considerations in their development, deployment, and ability to scale to address the entirety of human enterprises.
Resilience thinking in relation to the environment has emerged as a lens of inquiry that serves as a platform for interdisciplinary dialogue and collaboration. Resilience is about cultivating the capacity to sustain development in the face of both expected and surprising change, recognizing diverse pathways of development and the potential thresholds between them. The evolution of resilience thinking is coupled to social-ecological systems and a truly intertwined human-environment planet. The focus is on resilience as the persistence, adaptability, and transformability of complex adaptive social-ecological systems, clarifying the dynamic and forward-looking nature of the concept. Resilience thinking emphasizes that social-ecological systems, from the individual, to community, to society as a whole, are embedded in the biosphere. The biosphere connection is an essential observation if sustainability is to be taken seriously. In the continuous advancement of resilience thinking there are efforts aimed at capturing the resilience of social-ecological systems and finding ways for people and institutions to govern social-ecological dynamics for improved human well-being, from the local, across levels and scales, to the global. Consequently, in resilience thinking, development issues for human well-being, for people and planet, are framed in a context of understanding and governing complex social-ecological dynamics for sustainability as part of a dynamic biosphere.
Scott M. Moore
It has long been accepted that non-renewable natural resources like oil and gas are often the subject of conflict between both nation-states and social groups. But since the end of the Cold War, the idea that renewable resources like water and timber might also be a cause of conflict has steadily gained credence. This is particularly true in the case of water: in the early 1990s, a senior World Bank official famously predicted that “the wars of the next century will be fought over water,” while two years ago Indian strategist Brahma Chellaney made a splash in North America by claiming that water would be “Asia’s New Battleground.” But it has not quite turned out that way. The world has, so far, avoided inter-state conflict over water in the 21st century, but it has witnessed many localized conflicts, some involving considerable violence. As population growth, economic development, and climate change place growing strains on the world’s fresh water supplies, the relationship between resource scarcity, institutions, and conflict has become a topic of vocal debate among social and environmental scientists.
The idea that water scarcity leads to conflict is rooted in three common assertions. The first of these arguments is that, around the world, once-plentiful renewable resources like fresh water, timber, and even soils are under increasing pressure, and are therefore likely to stoke conflict among increasing numbers of people who seek to utilize dwindling supplies. A second, and often corollary, argument holds that water’s unique value to human life and well-being—namely that there are no substitutes for water, as there are for most other critical natural resources—makes it uniquely conducive to conflict. Finally, a third presumption behind the water wars hypothesis stems from the fact that many water bodies, and nearly all large river basins, are shared between multiple countries. When an upstream country can harm its downstream neighbor by diverting or controlling flows of water, the argument goes, conflict is likely to ensue.
But each of these assertions depends on making assumptions about how people react to water scarcity, the means they have at their disposal to adapt to it, and the circumstances under which they are apt to cooperate rather than to engage in conflict. Untangling these complex relationships promises a more refined understanding of whether and how water scarcity might lead to conflict in the 21st century—and how cooperation can be encouraged instead.
Rewilding aims at maintaining or even increasing biodiversity through the restoration of ecological and evolutionary processes using extant keystone species or ecological replacements of extinct keystone species that drive these processes. It is hailed by some as the most exciting and promising conservation strategy to slow down or stop what is considered to be the greatest mass extinction of species since the extinction of the dinosaurs 65 million years ago. Others have raised serious concerns about the many scientific and societal uncertainties and risks of rewilding. Moreover, despite its growing popularity, rewilding has made only limited inroads within the conservation mainstream and still has to prove itself in practice.
Rewilding differs from traditional restoration in at least two important respects. Whereas restoration has typically focused on the recovery of plant communities, rewilding has drawn attention to animals, particularly large carnivores and large herbivores. Whereas restoration aims to return an ecosystem to some historical condition, rewilding is forward-looking rather than backward-looking: it examines the past not so much to recreate it, but to learn from the past how to activate and maintain the natural processes that are crucial for biodiversity conservation.
Rewilding makes use of a variety of techniques to re-establish these natural processes. Besides the familiar method of reintroducing animals to areas where populations have declined dramatically or even gone extinct, rewilders also employ some more controversial methods, including back breeding to restore wild traits in domesticated species, taxon substitution to replace extinct species with closely related species that play similar roles within an ecosystem, and de-extinction to bring extinct species back to life using advanced biotechnologies such as cloning and gene editing.
Rewilding has clearly gained the most traction in North America and Europe, which have several key features in common. Both regions have recently experienced a spontaneous return of wildlife. Rewilders on both sides of the Atlantic are aware, however, that this wildlife resurgence is not that impressive, given that we are in the midst of the sixth mass extinction, which is characterized by the loss of large-bodied animals known as megafauna. The common goal is to bring back such megafaunal species because of their importance for maintaining and enhancing biodiversity. Last, both North American and European rewilders perceive the extinction crisis through the lens of the theory of island biogeography, which shows that the number of species in an area depends on its size and degree of isolation—hence their special attention to the spatial aspects of rewilding.
But rewilding projects on both sides of the Atlantic not only have much in common, they also differ in certain aspects. North American rewilders have adopted the late Pleistocene as a reference period and have emphasized the role of predation by large carnivores, while European rewilders have opted for the mid-Holocene and put more focus on naturalistic grazing by large herbivores.
Ortwin Renn and Andreas Klinke
Risk perception is an important component of risk governance, but it cannot and should not determine environmental policies. The reality is that people suffer and die as a result of false information or perception biases. It is particularly important to be aware of intuitive heuristics and common biases in making inferences from information in a situation where personal or institutional decisions have far-reaching consequences. The gap between risk assessment and risk perception is an important aspect of environmental policymaking. Communicators, risk managers, as well as representatives of the media, stakeholders, and the affected public should be well informed about the results of risk perception and risk response studies. They should be aware of typical patterns of information processing and reasoning when they engage in designing communication programs and risk management measures. At the same time, the potential recipients of information should be cognizant of the major psychological and social mechanisms of perception as a means to avoid painful errors.
To reach this goal of mutual enlightenment, it is crucial to understand the mechanisms and processes of how people perceive risks (with emphasis on environmental risks) and how they behave on the basis of their perceptions. Based on the insights from cognitive psychology, social psychology, micro-sociology, and behavioral studies, one can distill some basic lessons for risk governance that reflect universal characteristics of perception and that can be taken for granted in many different cultures and risk contexts.
This task of mutual enlightenment on the basis of evidence-based research and investigations is constrained by complexity, uncertainty, and ambiguity in describing, assessing, and analyzing risks, in particular environmental risks. The idea that the “truth” needs to be framed in a way that the targeted audience understands the message is far too simple. In a stochastic and nonlinear understanding of (environmental) risk there are always several (scientifically) legitimate ways of representing scientific insights and causal inferences. Much knowledge in risk and disaster assessment is based on incomplete models, simplified simulations, and expert judgments with a high degree of uncertainty and ambiguity. The juxtaposition of scientific truth, on one hand, and erroneous risk perception, on the other hand, does not reflect the real situation and lends itself to a vision of expertocracy that is neither functionally correct nor democratically justified. The main challenge is to initiate a dialogue that incorporates the limits and uncertainties of scientific knowledge and also starts a learning process by which obvious misperceptions are corrected and the legitimate corridor of interpretation is jointly defined.
In essence, expert opinion and lay perception need to be perceived as complementing, rather than competing with each other. The very essence of responsible action is to make viable and morally justified decisions in the face of uncertainty based on a range of scientifically legitimate expert assessments. These assessments have to be embedded into the context of criteria for acceptable risks, trade-offs between risks to humans and ecosystems, fair risk and benefit distribution, and precautionary measures. These criteria most precisely reflect the main points of lay perception. For a rational politics of risk, it is, therefore, imperative to collect both ethically justifiable evaluation criteria and standards and the best available systematic knowledge that inform us about the performance of each risk source or disaster-reduction option according to criteria that have been identified and approved in a legitimate due process. Ultimately, decisions on acceptable risks have to be based on a subjective mix of factual evidence, attitudes toward uncertainties, and moral standards.
Mehrad Bastani, Nurcin Celik, and Danielle Coogan
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article.
The volume of municipal solid waste produced in the United States has increased by 68% since 1980, up from 151 million to over 254 million tons per year. As the output of municipal waste has grown, more attention has been placed on the occupations associated with waste management. In 2014, refuse and recyclable material collection was ranked the 6th most dangerous job in the United States, with a rate of 27.1 deaths per 100,000 workers. As exposure statistics among solid waste workers in the United States have come to light, the identification and assessment of occupational health risks among these workers is receiving more consideration.
From the generation of waste to its disposal, solid waste workers are exposed to substantial levels of physical, chemical, and biological toxins. Current waste management systems in the United States involve significant risk of contact with waste hazards, highlighting that prevention methods such as monitoring exposures, personal protection, engineering controls, job education and training, and other interventions are under-utilized. To recognize and address occupational hazards encountered by solid waste workers, it is necessary to discern potential safety concerns and their causes, as well as their direct and/or indirect impacts on the various types of workers. The major industries processing solid waste are recycling, incineration, landfilling, and composting; the reported exposures and potential occupational health risks need to be identified for workers in each of these industries. Then, by acquiring data on reported exposures among solid waste workers, multiple county-level and state-level quantitative assessments of major occupational risks can be conducted using statistical assessment methods. To assess health risks among solid waste workers, the following questions must be answered: How can the methods of solid waste management be categorized? Which are the predominant occupational health risks among solid waste workers, and how can they be identified? Which practical and robust assessment methods are useful for evaluating occupational health risks among solid waste workers? What are possible solutions that can be implemented to reduce the occupational health hazard rates among solid waste workers?
Growing a cover crop between main crops imitates natural ecosystems where the soil is continuously covered with vegetation. This is an important management practice in preserving soil nutrient resources and reducing nitrogen (N) losses to waters. Cover crops also provide other functions that are important for the resilience and long-term stability of cropping systems, such as reduced erosion, increased soil fertility, carbon sequestration, increased soil phosphorus (P) availability, and suppression of weeds and pathogens.
Much is known about how to use cover crops to reduce N leaching in climates where there is a water surplus outside the growing season. Non-legume cover crops reduce N leaching by 20%–80%, and legumes reduce it by, on average, 23%. There are both synergies and possible conflicts between different environmental and production aspects that should be considered when developing efficient and multifunctional cover crop systems, but conflicts between the different functions provided by cover crops can sometimes be overcome with site-specific adaptation of measures. One example is the effect of cover crops on P losses: cover crops reduce losses of total P, but extract soil P into available forms and may increase losses of dissolved P. How to use this effect to increase soil P availability in subtropical soils needs further study. Knowledge and examples of how to maximize the positive effects of cover crops on cropping systems are improving, thereby increasing the sustainability of agriculture. One example is combined weed suppression to reduce dependence on herbicides or intensive mechanical treatment.
James B. London
Coastal zone management (CZM) has evolved since the enactment of the U.S. Coastal Zone Management Act of 1972, which was the first comprehensive program of its type. The newer iteration of Integrated Coastal Zone Management (ICZM), as applied to the European Union (2000, 2002), establishes priorities and a comprehensive strategy framework. While coastal management was established in large part to address issues of both development and resource protection in the coastal zone, conditions have changed. Accelerated rates of sea level rise (SLR) as well as continued rapid development along the coasts have increased vulnerability. The article examines changing conditions over time and the role of CZM and ICZM in addressing increased climate related vulnerabilities along the coast.
The article argues that effective adaptation strategies will require a sound information base and an institutional framework that appropriately addresses the risk of development in the coastal zone. The information base has improved through recent advances in technology and geospatial data quality. Critical for decision-makers will be sound information to identify vulnerabilities, formulate options, and assess the viability of a set of adaptation alternatives. The institutional framework must include the political will to act decisively and send the right signals to encourage responsible development patterns. At the same time, as communities are likely to bear higher costs for adaptation, it is important that they are given appropriate tools to effectively weigh alternatives, including the cost avoidance associated with corrective action. Adaptation strategies must be pro-active and anticipatory. Failure to act strategically will be fiscally irresponsible.
Food security depends on the work of plant scientists and breeders who develop new varieties of crops that are high yielding, nutritious, and tolerant of a range of biotic and abiotic stresses. These scientists and breeders need access to novel genetic material to evaluate and to use in their breeding programs; seed- (gene-)banks are the main source of novel genetic material. There are more than 1,750 genebanks around the world that are storing the orthodox (desiccation-tolerant) seeds of crops and their wild relatives. These seeds are stored at low moisture content and low temperature to extend their longevity and ensure that seeds with high viability can be distributed to end-users. Thus, seed genebanks serve two purposes: the long-term conservation of plant genetic resources, and the distribution of seed samples.
Globally, there are more than 7,400,000 accessions held in genebanks; an accession is a supposedly distinct, uniquely identifiable germplasm sample which represents a particular landrace, variety, breeding line, or population. Genebank staff manage their collections to ensure that suitable material is available and that the viability of the seeds remains high. Accessions are regenerated if viability declines or if stocks run low due to distribution. Many crops come under the auspices of the International Treaty on Plant Genetic Resources for Food and Agriculture and germplasm is shared using the Standard Material Transfer Agreement. The Treaty collates information on the sharing of germplasm with a view to ensuring that farmers ultimately benefit from making their agrobiodiversity available.
Ongoing research related to genebanks covers a range of disciplines, including botany, seed and plant physiology, genetics, geographic information science, and law.
Maria Cristina Fossi and Cristina Panti
A vigorous effort to identify and study sentinel species of marine ecosystems in the world’s oceans has developed over the past 50 years. The One Health concept recognizes that the health of humans is connected to the health of animals and the environment. Species ranging from invertebrates to large marine vertebrates have acted as “sentinels” of exposure to environmental stressors and of health impacts on the environment that may also affect human health. Sentinel species can signal warnings, at different levels, about the potential impacts on a specific ecosystem. These warnings can help manage the abiotic and anthropogenic stressors (e.g., climate change, chemical and microbial pollutants, marine litter) affecting ecosystems, biota, and human health.
The effects of exposure to multiple stressors, including pollutants, in the marine environment may be seen at multiple trophic levels of the ecosystem. Attention has focused on the large marine vertebrates, for several reasons. In the past, the use of large marine vertebrates in monitoring and assessing the marine ecosystem has been criticized. The fact that these species are pelagic and highly mobile has led to the suggestion that they are not useful indicators or sentinel species. In recent years, however, an alternative view has emerged: when we have a sufficient understanding of differences in species distribution and behavior in space and time, these species can be extremely valuable sentinels of environmental quality.
Knowledge of the status of large vertebrate populations is crucial for understanding the health of the ecosystem and instigating mitigation measures for the conservation of large vertebrates. For example, it is well known that the various cetacean species exhibit different home ranges and occupy different habitats. This knowledge can be used in “hot spot” areas, such as the Mediterranean Basin, where different species can serve as sentinels of marine environmental quality. Organisms that have relatively long life spans (such as cetaceans) allow for the study of chronic diseases, including reproductive alterations, abnormalities in growth and development, and cancer. As apex predators, marine mammals feed at or near the top of the food chain. As the result of biomagnification, the levels of anthropogenic contaminants found in the tissues of top predators and long-living species are typically high. Finally, the application of consistent examination procedures and biochemical, immunological, and microbiological techniques, combined with pathological examination and behavioral analysis, has led to the development of health assessment methods at the individual and population levels in wild marine mammals. With these tools in hand, investigators have begun to explore and understand the relationships between exposures to environmental stressors and a range of disease end points in sentinel species (ranging from invertebrates to marine mammals) as an indicator of ecosystem health and a harbinger of human health and well-being.
Jean-François Bissonnette and Rodolphe De Koninck
Plantation farming emerged as a large-scale system of specialized agriculture in the tropics under European colonialism, in opposition to smallholding subsistence agriculture. Despite large-scale plantations in the tropics, smallholdings have consistently formed the backbone of rural economies, to the extent that they have become the main producers of some of the former plantation crops. In the early 21st century, oil palm has become the third most important cash crop in the world in terms of area cultivated, largely due to the expansion of this crop in Malaysia and Indonesia. Although oil palm in these countries is primarily cultivated on large plantations, smallholders cultivate a large share of the territory devoted to this crop. This is related to the programs set up by the governments of Malaysia and Indonesia during the second half of the 20th century to provide smallholders with land plots in capital-intensive, large-scale oil palm schemes. Despite the relative success of these programs in both countries, policymakers have continued to insist on the development of private, centrally managed, large-scale plantations. Yet smallholding family farming has remained the most resilient economic activity in rural areas of the tropics. This system has proven adaptive to environmental change and, given proper access to markets and capital, particularly responsive to market signals. Today, many smallholdings are still characterized by the diversity of crops cultivated, low use of chemical inputs, reliance on family labor, and high levels of ecological knowledge. These are some of the main factors explaining why small family farms have proven more efficient than large plantations and, in the long term, more economically and ecologically resilient.
Yet, large-scale land acquisitions for monocrop production remain a current issue, highlighting the paradox of the latest stage of agrarian capitalism and of its persistent built-in disregard for environmental deterioration.
Frank W. Geels
Addressing persistent environmental problems such as climate change or biodiversity loss requires shifts to new kinds of energy, mobility, housing, and agro-food systems. These shifts are called socio-technical transitions because they involve not just changes in technology but also changes in consumer practices, policies, cultural meanings, infrastructures, and business models. Socio-technical transitions to sustainability are challenging for mainstream social sciences because they are multiactor, long-term, goal-oriented, disruptive, contested, and nonlinear processes. Sustainability transitions are being investigated by a new research community, which uses a socio-technical Multi-Level Perspective (MLP) as one of its orienting frameworks. Focusing on multidimensional struggles between “green” innovations and entrenched systems, the MLP suggests that transitions involve alignments of processes within and between three analytical levels: niche innovations, socio-technical regimes, and an exogenous socio-technical landscape. To understand more specific change mechanisms, the MLP mobilizes ideas from evolutionary economics, sociology of innovation, and institutional theory. Different phases, actors, and struggles are distinguished to understand the complexities of sustainability transitions, while still providing analytical traction and policy advice. The MLP draws attention to socio-technical systems as a new unit of analysis, which is more comprehensive than a micro-focus on individuals and more concrete than a macro-focus on a green economy. It also forms a new analytical framework that spans several stale dichotomies in environmental social science debates related to agency or structure and behavioral or technical change. The MLP accommodates stability and change and offers an integrative view on transitions, ranging from local projects to niche innovations to sector-level regimes and broader societal contexts. 
This new interdisciplinary research is attracting increasing attention from the European Environment Agency, International Panel on Climate Change (IPCC), and Organization for Economic Cooperation and Development (OECD).
Soils, the earth’s skin, lie at the intersection of the lithosphere, hydrosphere, atmosphere, and biosphere. The persistence of life on our planet depends on the maintenance of soils, as they constitute the biological engines of the earth. The human population has increased exponentially in recent decades, along with the demand for food, materials, and energy, which has caused a shift from low-yield, subsistence agriculture to a more productive, high-cost, and intensive agriculture. However, soils are very fragile ecosystems that require centuries to develop; within the human timescale they are not renewable resources. Modern, intensive agriculture raises serious concern about the conservation of soil as a living organism, i.e., of its capacity to perform the vast number of biochemical processes needed to complete the biogeochemical cycles of plant nutrients, such as nitrogen and phosphorus, that are crucial for crop primary production. Most practices associated with intensive agriculture cause a deterioration, even in the short to medium term, of the physical, chemical, and biological properties that together constitute soil quality, along with an overexploitation of soils as living organisms. Recent trends are turning toward styles of agricultural management that are more sustainable and better conserve soil quality.
Use of soils for agricultural purposes usually deflects them, to varying degrees, from “natural” soil development processes (pedogenesis), and this shift may be regarded as a divergence from soil sustainability principles. For decades, the misuse of land through intensive crop management has deteriorated soil health and quality. A vast diversity of microorganisms inhabits soils, acting as “the biological engine of the earth”; this microbiota serves the soil ecosystem by performing several fundamental functions. Management practices should therefore be planned with the safeguarding of soil microbial diversity and resilience in mind. In addition, any unexpected alteration in the numberless soil biochemical processes regulated by microbial communities may represent an early and sensitive signal of weakening soil homeostasis and, consequently, a warning for soil conservation. Among the many soil biochemical processes and related features (bioindicators) potentially suitable for measuring the sustainability of soil exploitation, those related to the mineralization or immobilization of the main nutrients (C and N), including the enzyme activity (functioning) and composition (diversity) of microbial communities, play a fundamental role because of their involvement in soil metabolism. Comparing the influence of various cropping factors (tillage, mulching and cover crops, rotations, mineral and organic fertilization) under both intensive and sustainable management on soil microbial diversity and functioning, through both chemical and biological soil quality indicators, makes it possible to identify the most hazardous deviations from soil sustainability principles.
David A. Robinson, Fiona Seaton, Katrina Sharps, Amy Thomas, Francis Parry Roberts, Martine van der Ploeg, Laurence Jones, Jannes Stolte, Maria Puig de la Bellacasa, Paula Harrison, and Bridget Emmett
Soils provide important functions, which according to the European Commission include: biomass production (e.g., agriculture and forestry); storing, filtering, and transforming nutrients, substances, and water; harboring biodiversity (habitats, species, and genes); forming the physical and cultural environment for humans and their activities; providing raw materials; acting as a carbon pool; and forming an archive of geological and archaeological heritage, all of which support human society and planetary life. The basis of these functions is the soil natural capital, the stocks of soil material. Soil functions feed into a range of ecosystem services which in turn contribute to the United Nations sustainable development goals (SDGs). This overarching framework hides a range of complex, often nonlinear, biophysical interactions with feedbacks and perhaps yet to be discovered tipping points. Moreover, interwoven with this biophysical complexity are the interactions with human society and the socioeconomic system which often drives our attitudes toward, and the management and exploitation of, our environment.
Challenges abound, both social and environmental, in terms of how to feed an increasingly populous and material world, while maintaining some semblance of thriving ecosystems to pass on to future generations. How do we best steward the resources we have, keep them from degradation, and restore them where necessary as soils underpin life? How do we measure and quantify the soil resources we have, how are they changing in time and space, what can we predict about their future use and function? What is the value of soil resources, and how should we express it? This article explores how soil properties and processes underpin ecosystem services, how to measure and model them, and how to identify the wider benefits they provide to society. Furthermore, it considers value frameworks, including caring for our resources.
Salt accumulation in soils, which affects agricultural productivity, environmental health, and the economy of communities, has been a global phenomenon since salinity contributed to the decline of ancient Mesopotamian civilization. The global extent of salt-affected soils is estimated at around 830 million hectares, spanning all the continents, including Africa, Asia, Australasia, and the Americas. The concentration and composition of salts depend on several sources and processes of salt accumulation in soil layers. Major types of soil salinization include groundwater-associated salinity, non-groundwater-associated salinity, and irrigation-induced salinity. Several soil processes lead to salt build-up in the root zone, interfering with the growth and physiological functions of plants.
Salts, depending on the ionic composition and concentration, can also affect many soil processes, such as soil water dynamics, soil structural stability, solubility of essential nutrients, and pH and pE of soil water—all indirectly hindering plant growth. The direct effect of salinity includes the osmotic effect affecting water and nutrient uptake and the toxicity or deficiency due to high concentration of certain ions. The plan of action to resolve the problems associated with soil salinization should focus on prevention of salt accumulation, removal of accumulated salts, and adaptation to a saline environment. Successful utilization of salinized soils needs appropriate soil and irrigation management and improvement of plants by breeding and genetic engineering techniques to tolerate different levels of salinity and associated abiotic stress.
Beyond damage to rainfed agricultural and forestry ecosystems, soil erosion due to water affects surrounding environments. Large amounts of eroded soil are deposited in streams, lakes, and other ecosystems. The most costly off-site damages occur when eroded particles, transported along the hillslopes of a basin, arrive at the river network or are deposited in lakes. The negative effects of soil erosion include water pollution and siltation, organic matter loss, nutrient loss, and reduction in water storage capacity. Sediment deposition raises the bottom of waterways, making them more prone to overflowing and flooding. Sediments contaminate water ecosystems with soil particles and the fertilizer and pesticide chemicals they contain. Siltation of reservoirs and dams reduces water storage, increases the maintenance cost of dams, and shortens the lifetime of reservoirs. Sediment yield is the quantity of sediment transported, in a given time interval, from eroding sources through the hillslopes and river network to a basin outlet. Chemicals can also be transported together with the eroded sediments.
The prediction of sediment yield can be carried out by coupling an erosion model with a mathematical operator which expresses the sediment transport efficiency of the hillslopes and the channel network. The sediment lag between sediment yield and erosion can be simply represented by the sediment delivery ratio, which can be calculated at the outlet of the considered basin, or by using a distributed approach. The former procedure couples the evaluation of basin soil loss with an estimate of the sediment delivery ratio SDRW for the whole watershed. The latter procedure requires that the watershed be discretized into morphological units, areas having a constant steepness and a clearly defined length, for which the corresponding sediment delivery ratio is calculated. When rainfall reaches the surface horizon of the soil, some pollutants are desorbed and go into solution while others remain adsorbed and move with soil particles. The spatial distribution of the loading of nitrogen, phosphorus, and total organic carbon can be deduced using the spatial distribution of sediment yield and the pollutant content measured on soil samples. The enrichment concept is applied to clay, organic matter, and all pollutants adsorbed by soil particles, such as nitrogen and phosphorus. Knowledge of both the rate and pattern of sediment deposition in a reservoir is required to establish the remedial strategies which may be practicable. Repeated reservoir capacity surveys are used to determine the total volume occupied by sediment, the sedimentation pattern, and the shift in the stage-area and stage-storage curves. By converting the sedimentation volume to sediment mass, on the basis of estimated or measured bulk density, and correcting for trap efficiency, the sediment yield from the basin can be computed.
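The volume-to-yield conversion described above can be sketched in a few lines. This is a minimal illustration, not a method from the article; the function name and all numerical values (surveyed volume, bulk density, trap efficiency, survey interval) are assumptions chosen for the example.

```python
def sediment_yield(volume_m3, bulk_density_t_per_m3, trap_efficiency, years):
    """Annual basin sediment yield (t/yr) from repeated reservoir capacity surveys.

    volume_m3             -- sediment volume accumulated between surveys
    bulk_density_t_per_m3 -- estimated or measured dry bulk density
    trap_efficiency       -- fraction of inflowing sediment retained by the reservoir (0-1)
    years                 -- interval between the two capacity surveys
    """
    mass_deposited = volume_m3 * bulk_density_t_per_m3  # tonnes retained in the reservoir
    total_yield = mass_deposited / trap_efficiency      # correct for sediment passing the dam
    return total_yield / years                          # average annual yield

# Illustrative numbers: 1.2e6 m^3 deposited over 20 years,
# bulk density 1.1 t/m^3, 90% trap efficiency
print(round(sediment_yield(1.2e6, 1.1, 0.90, 20)))  # tonnes per year
```

Dividing by the trap efficiency reflects the correction mentioned in the abstract: the surveyed deposit understates the basin's yield by whatever fraction of sediment passed through the dam.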
Soils are the complex, dynamic, spatially diverse, living, and environmentally sensitive foundations of terrestrial ecosystems as well as human civilizations. The modern, environmental study of soil is a truly young scientific discipline that emerged only in the late 19th century from foundations in agricultural chemistry, land resource mapping, and geology. Today, little more than a century later, soil science is a rigorously interdisciplinary field with a wide range of exciting applications in agronomy, ecology, environmental policy, geology, public health, and many other environmentally relevant disciplines. Soils form slowly, in response to five inter-related factors: climate, organisms, topography, parent material, and time. Consequently, many soils are chemically, biologically, and/or geologically unique. The profound importance of soil, combined with the threats of erosion, urban development, pollution, climate change, and other factors, is now prompting soil scientists to consider applying endangered species concepts to rare or threatened soils around the world.
Gerrit de Rooij
Henry Darcy was an engineer who built the drinking water supply system of the French city of Dijon in the mid-19th century. In doing so, he developed an interest in the flow of water through sands, and, together with Charles Ritter, he experimented (in a hospital, for unclear reasons) with water flow in a vertical cylinder filled with different sands to determine the laws of flow of water through sand. The results were published in an appendix to Darcy’s report on his work on Dijon’s water supply. Darcy and Ritter installed mercury manometers at the bottom and near the top of the cylinder, and they observed that the water flux density through the sand was proportional to the difference between the mercury levels. After mercury levels are converted to equivalent water levels and recast in differential form, this relationship is known as Darcy’s Law, and to this day it remains the cornerstone of the theory of water flow in porous media. The development of groundwater hydrology and soil water hydrology that originated with Darcy’s Law is tracked through seminal contributions over the past 160 years.
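The proportionality Darcy and Ritter observed can be sketched numerically in its finite-difference form, q = K Δh / L. This is an illustrative sketch, not a reconstruction of the original experiment; the conductivity value assumed below is merely typical of a medium sand.

```python
def darcy_flux(K, head_top, head_bottom, column_length):
    """Water flux density (m/s) through a vertical sand column.

    K             -- hydraulic conductivity of the sand (m/s)
    head_top      -- hydraulic head at the top of the column (m)
    head_bottom   -- hydraulic head at the bottom of the column (m)
    column_length -- length of the sand column (m)
    """
    head_gradient = (head_top - head_bottom) / column_length  # dimensionless
    # Flux is proportional to the head difference, as Darcy observed
    return K * head_gradient

# Illustrative values: 1 m column, 0.5 m head difference,
# K = 1e-4 m/s (assumed, typical of a medium sand)
q = darcy_flux(K=1e-4, head_top=1.5, head_bottom=1.0, column_length=1.0)
print(q)  # 5e-05 m/s
```

Doubling the head difference doubles the flux, which is exactly the linear relationship the mercury manometers revealed.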
Darcy’s Law was quickly adopted for calculating groundwater flow, which blossomed after the introduction of a few very useful simplifying assumptions that permitted a host of analytical solutions to groundwater problems, including flows toward pumped drinking water wells and toward drain tubes. Computers have made possible ever more advanced numerical solutions based on Darcy’s Law, which have allowed tailor-made computations for specific areas. In soil hydrology, Darcy’s Law itself required modification to facilitate its application for different soil water contents. The understanding of the relationship between the potential energy of soil water and the soil water content emerged early in the 20th century. The mathematical formalization of the consequences for the flow rate and storage change of soil water was established in the 1930s, but only after the 1970s did computers become powerful enough to tackle unsaturated flows head-on. In combination with crop growth models, this allowed Darcy-based models to aid in the setup of irrigation practices and to optimize drainage designs. In the past decades, spatial variation of the hydraulic properties of aquifers and soils has been shown to affect the transfer of solutes from soils to groundwater and from groundwater to surface water. More recently, regional and continental-scale hydrology have been required to quantify the role of the terrestrial hydrological cycle in relation to climate change. Both developments may pose new areas of application, or show the limits of applicability, of a law derived from a few experiments on a cylinder filled with sand in the 1850s.
Gary Sands, Srinivasulu Ale, Laura Christianson, and Nathan Utt
Agricultural (tile) drainage enables agricultural production on millions of hectares of arable lands worldwide. Lands where drainage or irrigation (and sometimes both) are implemented generate a disproportionately large share of global agricultural production compared to dry land or rain-fed agricultural lands, and thus these water management tools are vital for meeting the food demands of today and the future. Future food demands will likely require irrigation and drainage to be practiced on an even greater share of the world’s agricultural lands. The practice of agricultural drainage finds its roots in ancient societies and has evolved greatly to incorporate modern technologies and materials, including the modern drainage plow, plastic drainage pipe and tubing, laser and GPS-guided installation equipment, and computer-aided design tools. Although drainage brings important agricultural production and environmental benefits to poorly drained and salt-affected arable lands, it can also give rise to the transport of nutrients and other constituents to downstream waters. Other unwanted ecological and hydrologic environmental effects may also be associated with the practice. The goal of this article is to familiarize the reader with the practice of subsurface agricultural drainage, the history and extent of its application, and the benefits commonly associated with it. In addition, environmental effects associated with subsurface drainage, including hydrologic and water quality effects, are presented, and conservation practices for mitigating these unwanted effects are described. These conservation practices are categorized by whether they are implemented in-field (such as controlled drainage) or edge-of-field (such as bioreactors). The literature cited and reviewed herein is not meant to be exhaustive, but seminal and key literary works are identified where possible.
Luis S. Pereira and José M. Gonçalves
Surface irrigation is the oldest and most widely used irrigation method, accounting for more than 83% of the world’s irrigated area. It comprises traditional systems, developed over millennia, and modern systems with mechanized and often automated water application and precise land-leveling. It adapts well to non-sloping conditions, low to medium soil infiltration characteristics, most crops, and crop mechanization, as well as diverse environmental conditions. Modern methods provide for water and energy saving, control of environmental impacts, labor saving, and cropping economic success, thus allowing surface irrigation to compete with pressurized irrigation methods. Surface irrigation refers to a variety of gravity-driven applications of irrigation water, which infiltrates into the soil while flowing over the field surface. The ways and timings of how water flows over the field and infiltrates the soil determine the irrigation phases—advance, maintenance or ponding, depletion, and recession—which vary with the irrigation method, namely paddy basin, leveled basin, border, and furrow irrigation, generally used for field crops, and wild flooding and water spreading from contour ditches, used for pasture lands. System performance is commonly assessed using the distribution uniformity indicator, while management performance is assessed with the application efficiency or the beneficial water use fraction. The factors influencing system performance are multiple and interacting—inflow rate, field length and shape, soil hydraulic roughness, field slope, soil infiltration rate, and cutoff time—while management performance, in addition to these factors, depends upon the soil water deficit at the time of irrigation, and thus on how well farmers are able to manage irrigation. The process of surface irrigation is complex to describe because it combines surface flow with infiltration into the soil profile.
Numerous mathematical computer models have therefore been developed to simulate it, supporting both design toward a target performance and field evaluation of actual performance. Using models in design makes it possible to take into consideration the factors referred to above and, when a decision support system or multicriteria analysis is adopted, economic and environmental constraints and issues as well.
There are various aspects favoring and limiting the adoption of surface irrigation. Favorable aspects include the simplicity of its adoption on farms in flat lands with low infiltration rates, namely when water conveyance and distribution are performed with canal and/or low-pressure pipe systems, low capital investment, and low energy consumption. The most significant limitations include high soil infiltration and high variability of infiltration throughout the field, land-leveling requirements, the need to control a constant inflow rate, difficulties in matching irrigation duration with the soil water deficit at the time of irrigation, and difficult access to equipment for mechanized and automated water application and distribution. The modernization of surface irrigation systems and design models, as well as models and tools usable to support surface irrigation management, has significantly improved water use and productivity, and thus the competitiveness of surface irrigation.
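The two performance indicators named above, distribution uniformity and application efficiency, can be sketched as follows. This is a minimal illustration under common definitions (low-quarter distribution uniformity; applied water in excess of the soil water deficit counted as loss); the measured depths and the deficit value are assumptions for the example, not data from the article.

```python
def distribution_uniformity(depths):
    """Low-quarter DU: mean of the lowest quarter of infiltrated
    depths divided by the mean of all depths (0-1)."""
    d = sorted(depths)
    low_quarter = d[: max(1, len(d) // 4)]
    return (sum(low_quarter) / len(low_quarter)) / (sum(d) / len(d))

def application_efficiency(applied_depth_mm, soil_water_deficit_mm):
    """Fraction of applied water beneficially stored in the root zone;
    water beyond the deficit is treated as deep percolation or runoff."""
    stored = min(applied_depth_mm, soil_water_deficit_mm)
    return stored / applied_depth_mm

# Illustrative infiltrated depths (mm) measured along a furrow
depths = [60, 72, 80, 85, 90, 95, 100, 110]
print(round(distribution_uniformity(depths), 2))
print(round(application_efficiency(applied_depth_mm=95, soil_water_deficit_mm=80), 2))
```

The sketch makes the interaction described in the abstract concrete: uniformity depends only on how the applied water is distributed over the field, while efficiency also depends on the soil water deficit at the time of irrigation, and hence on how the farmer schedules the event.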