Florence Millerand and Karen S. Baker
The development of information infrastructures that make ecological research data available has increased in recent years, contributing to fundamental changes in ecological research. Science and Technology Studies (STS) and the subfield of Infrastructure Studies, which aims to inform infrastructures’ design, use, and maintenance from a social science point of view, provide conceptual tools for understanding data infrastructures in ecology. This perspective moves away from the language of engineering, with its discourse on physical structures and systems, to use a lexicon more “social” than “technical” to understand data infrastructures in their informational, sociological, and historical dimensions. It takes a holistic approach that addresses not only the needs of ecological research but also the diversity and dynamics of data, data work, and data management. STS research, having long focused on scientific practices, digital devices, and information systems, is expanding to investigate new kinds of data infrastructures and their interdependencies across the data landscape. In ecology, data sharing and data infrastructures create new responsibilities that require scientists to engage in opportunities to plan, experiment, learn, and reshape data arrangements. STS and Infrastructure Studies scholars suggest that ecologists, data specialists, and social scientists would all benefit from active partnerships to ensure the growth of data infrastructures that effectively support scientific investigative processes in the digital era.
The Mississippi River, the longest in North America, is really two rivers geophysically. The volume is less, the slope steeper, the velocity greater, and the channel straighter in its upper portion than in its lower portion. Below the mouth of the Ohio River, the Mississippi meanders through a continental depression that it has slowly filled with sediment over many millennia. Some limnologists and hydrologists consider the transitional middle portion of the Mississippi, where the waters of its two greatest tributaries, the Missouri and Ohio rivers, join it, to constitute a third river in terms of its behavioral patterns and its stream and floodplain ecologies. The Mississippi River humans have known, with its two or three distinct sections, is a relatively recent formation. The lower Mississippi settled into its current formation only after the last ice age and the dissipation of water released by receding glaciers. Much of the current river delta is newer still, having taken shape over the last three to five hundred years. Within the lower section of the Mississippi are two subsections, the meander zone and the delta. Below Cape Girardeau, Missouri, the river passes Crowley’s Ridge and enters the wide and flat alluvial plain. Here the river meanders in great loops, often doubling back on itself and forming cutoffs that, if abandoned by the river, become lakes. Until modern times, most of the plain, approximately 35,000 square miles, comprised a vast ecological wetland, rich in terms of biomass production, sustained by annual Mississippi River floods that brought not just water but fertile sediment, topsoil gathered from across much of the continent. People thrived in the Mississippi River meander zone. Some of the most sophisticated indigenous cultures of North America emerged here. Between Natchez, Mississippi, and Baton Rouge, Louisiana, at Old River Control, the Mississippi begins to fork into distributary channels, the largest of which is the Atchafalaya River. The Mississippi River delta begins here, formed of river sediment accrued upon the continental shelf. In the delta the land is wetter and the groundwater table shallower. Closer to the sea, the water becomes brackish, and patterns of river sediment distribution are shaped by ocean tides and waves. The delta is frequently buffeted by hurricanes. Over the last century and a half, people have transformed the lower Mississippi River, principally through the construction of levees and drainage canals that have effectively disconnected the river from the floodplain. The intention has been to dry the land adjacent to the river, to make it useful for agriculture and urban development. However, an unintended effect of flood control and wetland drainage has been to interfere with the flood-pulse process that sustained the lower valley ecology, and with the process of sediment distribution that built the delta and much of the Louisiana coastline. The seriousness of the delta’s deterioration has become especially apparent since Hurricane Katrina and has moved conservation groups to action. They are pushing politicians and engineers to reconsider their approach to Mississippi River management.
Mohamed Ait-Kadi and Melvyn Kay
This is an immersive journey through different water management concepts. Conceptual attractiveness, however, is not enough; concepts must be applicable in the real and fast-changing world. Thus, beyond the concepts, our long-standing challenge remains increasing water security. This is about stewardship of water resources for the greatest good of societies and the environment. It is a public responsibility requiring dynamic, adaptable, participatory, and balanced planning. It is all about coordination and sharing. Multi-sectoral approaches are needed to adequately address the threats and opportunities relating to water resources management in the context of climate change, rapid urbanization, and growing disparities. The processes involved are many and need consistency and long-term commitment to succeed. Climate change is closely related to the problems of water security, food security, energy security, and environmental sustainability. These interconnections are often ignored when policy-makers devise partial responses to individual problems. They call for broader public policy planning tools with the capacity to encourage legitimate public and collective clarification of trade-offs and assessment of the potential of multiple uses of water to facilitate development and growth. We need to avoid mental silos and to overcome the current piecemeal approach to solving water problems. This requires a major shift in practice for organizations (governmental as well as donor organizations) accustomed to segregating water problems by subsector. Our experience with integration tells us that (1) we need to invest in understanding the political economy of different sectors; (2) we need new institutional arrangements that function within increasing complexity, cutting across sectoral silos and sovereign boundaries; and (3) top-down approaches to resource management will not succeed without bottom-up efforts to help people improve their livelihoods and their capacity to adapt to increasing resource scarcity, as well as to reduce unsustainable modes of production. Political will and political skill, supported by visionary and strong leadership, are needed to bring opposing interests into balance, to inform policy-making with scientific understanding, and to negotiate decisions that are socially accepted. Managing water effectively across a vast set of concerns requires equally vast coordination. Strong partnerships and knowledge creation and sharing are essential. Human civilization, we know, is a response to challenge. Certainly, water scarcity can be a source of conflict among competing users, particularly when combined with other factors of political or cultural tension. But it can also be an inducement to cooperation, even in high-tension areas. We believe that human civilization can find in itself the resources to respond successfully to the many water challenges, and in the process make water a learning ground for building the expanded sense of community and sharing necessary to an increasingly interconnected world.
Margarete Kalin, William N. Wheeler, Michael P. Sudbury, and Bryn Harris
The first treatise on mining and extractive metallurgy, published by Georgius Agricola in 1556, was also the first to highlight the destructive environmental side effects of mining and metals extraction, namely dead fish and poisoned water. These effects, unfortunately, are still with us. Since 1556, mining methods, knowledge of metal extraction, and understanding of the chemical and microbial processes leading to environmental deterioration have grown tremendously. Man’s insatiable appetite for metals and energy has resulted in mines vastly larger than those envisioned in 1556, compounding the deterioration. The annual amount of mined ore and waste rock is estimated at 20 billion tons, covering 1,000 km². The industry also consumes 80 km³ of freshwater annually, which becomes contaminated. Since metals are essential to modern society, cost-effective, sustainable remediation measures need to be developed. Engineered covers and dams enclose wastes and slow the weathering process but, with time, become permeable. Neutralization of acid mine drainage produces metal-laden sludges that, in time, release the metals again. These measures are stopgaps at best and are not sustainable. Focus should be on inhibiting or reducing the weathering rate, recycling, and curtailing water usage. The extraction of only the principal economic mineral or metal generally drives the economics, with scant attention paid to other potential commodities contained in the deposit. Technology exists for recovering more valuable products and enhancing project economics, resulting in reductions of wastes and water consumption of up to 80% compared to “conventional processing.” Implementation of such improvements requires a drastic change, a paradigm shift, in the way the industry approaches metals extraction. Combining new extraction approaches, more efficient water usage, and ecological engineering methods to deal with wastes will increase the sustainability of the industry and reduce the pressure on water and land resources. From an ecological perspective, waste rock and tailings need to be thought of as primitive ecosystems. These habitats are populated by heat-, acid-, and saline-loving microbes (extremophiles). Ecological engineering utilizes geomicrobiological, physical, and chemical processes to change the mineral surface to encourage biofilm growth (the microbial growth form) within wastes by enhancing the growth of oxygen-consuming microbes. This reduces the oxygen available for oxidation, leading to improved drainage quality. At the water–sediment interface, microbes assist in the neutralization of acid water (Acid Reduction Using Microbiology). To remove metals from the waste water column, indigenous biota are promoted (Biological Polishing), with inorganic particulate matter acting as flocculation agents. This ecological approach generates organic matter, which upon death settles with the adsorbed metals to the sediment. Once the metals reach the deeper, reducing zones of the sediments, microbial biomineralization processes convert them to relatively stable secondary minerals, forming biogenic ores for future generations. The mining industry has developed and thrived in an age when resources, space, and water appeared limitless. With the widely accepted onset of the Anthropocene and global land and water shortages, the mining industry must become more sustainable. Not only is a paradigm shift in thinking needed; the will to implement such a shift is also required for the future of the industry.
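The weathering chemistry at issue can be made concrete with the standard sulfide-oxidation reaction; pyrite is used here as a representative mineral (an illustrative choice, not one singled out in the text):

$$ \mathrm{FeS_2} + \tfrac{7}{2}\,\mathrm{O_2} + \mathrm{H_2O} \rightarrow \mathrm{Fe^{2+}} + 2\,\mathrm{SO_4^{2-}} + 2\,\mathrm{H^+} $$

Each mole of pyrite oxidized releases two moles of acid, and oxygen is a direct reactant, which is why promoting oxygen-consuming biofilms within the wastes slows acid generation and improves drainage quality.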
Archis R. Ambulkar
Since the industrial revolution, societies across the globe have experienced significant urbanization and population growth. Newer technologies, industries, and manufacturing plants have evolved over this period to develop sophisticated infrastructures and amenities for mankind. To achieve this, communities have utilized and exploited natural resources, resulting in sustained environmental degradation and pollution. Among various adverse ecological effects, nutrient contamination is posing serious problems for water bodies worldwide. Nitrogen and phosphorus are basic constituents for the growth and reproduction of living organisms and occur naturally in soil, air, and water. However, human activities are affecting their natural cycles and causing excessive loading into surface and groundwater systems. Higher concentrations of nitrogen- and phosphorus-based nutrients in water resources lead to eutrophication, reduced sunlight penetration, lower dissolved oxygen levels, altered rates of plant growth and reproduction, and overall deterioration of water quality. Economically, this pollution can impact the fishing industry, recreational businesses, property values, and tourism. Also, using nutrient-polluted lakes or rivers as potable water sources may result in excess nitrates in drinking water, production of disinfection by-products, and associated health effects. Nutrient contamination in water commonly originates from point and non-point sources. Point sources are specific discharge locations, like wastewater treatment plants (WWTPs), industries, and municipal waste systems, whereas non-point sources are diffuse dischargers, like agricultural lands and stormwater runoff. Compared to non-point sources, point sources are easier to identify, regulate, and treat. WWTPs receive sewage from domestic, business, and industrial settings. With growing pollution concerns, nutrient removal and recovery at treatment plants is gaining significant attention. Newer chemical and biological nutrient removal processes are emerging to treat wastewater. Nitrogen removal mainly involves nitrification-denitrification processes, whereas phosphorus removal relies on biological uptake, chemical precipitation, or filtration. With regard to non-point sources, authorities are encouraging best management practices to control pollution loads to waterways. Governments are opting for novel strategies like source nutrient reduction schemes, bioremediation processes, stringent effluent limits, and nutrient trading programs. Source nutrient reduction strategies, such as discouraging or banning phosphorus-rich detergents and selected chemicals, industrial pretreatment programs, and stormwater management programs, can be effective in reducing nutrient loads to WWTPs. Bioremediation techniques such as riparian areas, natural and constructed wetlands, and treatment ponds can capture nutrients from agricultural lands or sewage treatment plant effluents. Nutrient trading programs allow the purchase and sale of equivalent environmental credits between point and non-point nutrient dischargers to manage overall nutrient discharges in watersheds at lower cost. Nutrient pollution impacts are quite evident and documented in many parts of the world. Governments and environmental organizations are undertaking several waterway remediation projects to improve water quality and restore aquatic ecosystems.
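As a rough sketch of the nitrogen chemistry mentioned above, the overall stoichiometry is often summarized as follows; denitrification is shown with methanol as the external carbon source, a common but by no means universal choice:

$$ \mathrm{NH_4^+} + 2\,\mathrm{O_2} \rightarrow \mathrm{NO_3^-} + \mathrm{H_2O} + 2\,\mathrm{H^+} \qquad \text{(nitrification, via nitrite)} $$

$$ 6\,\mathrm{NO_3^-} + 5\,\mathrm{CH_3OH} \rightarrow 3\,\mathrm{N_2} + 5\,\mathrm{CO_2} + 7\,\mathrm{H_2O} + 6\,\mathrm{OH^-} \qquad \text{(denitrification)} $$

Because nitrification is aerobic and denitrification anoxic, treatment plants typically route wastewater through alternating aerated and anoxic zones to remove nitrogen.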
Shrinking freshwater reserves and rising water demands are compelling communities to make efficient use of available water resources. With smarter choices and useful strategies, nutrient pollution in water can be contained to a reasonable extent. As responsible members of the community, it is important for us to understand this key environmental issue as well as the current and future measures needed to alleviate it.
Ronald van Nooijen, Demetris Koutsoyiannis, and Alla Kolechkina
Humanity has been modifying the natural water cycle by building large-scale water infrastructure for millennia. For most of that time, the principles of hydraulics and control theory were only imperfectly known. Moreover, the feedback from the artificial system to the natural system was not taken into account, either because it was too small to notice or because it took too long to appear. In the 21st century, humanity is all too aware of the effects of our adaptation of the environment to our needs on the planetary system as a whole. It is necessary to see the environment, both natural and human-made, as one integrated system. Moreover, due to the legacy of the past, the behaviour of the human-made parts of this system needs to be adapted in a way that leads to a sustainable ecosystem. The water cycle plays a central role in that ecosystem. It is therefore essential that the behaviour of existing and planned water infrastructure fit into the natural system and contribute to its well-being. At the same time, it must serve the purpose for which it was constructed. As there are no natural feedbacks to govern its behaviour, it will be necessary to create such feedbacks, possibly in the form of real-time control systems. To do so, it would be beneficial if all persons involved in the decision process that establishes the desired system behaviour understood the basics of control systems in general and their application to different water systems in particular. This article contains a discussion of the prerequisites for and early development of automatic control of water systems, an introduction to the basics of control theory with examples, a short description of optimal control theory in general, a discussion of model predictive control in water resource management, an overview of key aspects of automatic control in water resource management, and different types of applications. Finally, some challenges faced by practitioners are mentioned.
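To make the idea of created feedbacks concrete, below is a minimal Python sketch of proportional-integral (PI) control of a single reservoir’s water level; the reservoir geometry, inflow pattern, and controller gains are invented for illustration and are not taken from the article.

```python
# Minimal sketch: PI feedback control of one reservoir's level.
# All names and parameters are illustrative assumptions.

def simulate(hours=48, dt=1.0):
    area = 1.0e6          # reservoir surface area (m^2), assumed
    level = 2.0           # initial water level (m)
    setpoint = 2.5        # desired level (m)
    kp, ki = 0.8, 0.05    # controller gains (assumed, untuned)
    integral = 0.0
    levels = []
    for t in range(int(hours / dt)):
        # Crude diurnal inflow pattern (m^3/s), assumed for illustration.
        inflow = 20.0 + 10.0 * (t % 24 < 12)
        error = setpoint - level
        integral += error * dt
        # Release less water when below the setpoint, more when above.
        outflow = max(0.0, 20.0 - (kp * error + ki * integral) * 10.0)
        # Mass balance: storage change = (inflow - outflow) over the step.
        level += (inflow - outflow) * dt * 3600.0 / area
        levels.append(level)
    return levels

if __name__ == "__main__":
    for i, h in enumerate(simulate()):
        if i % 12 == 0:
            print(f"t={i:3d} h  level={h:.2f} m")
```

In practice the gains would be tuned, and where forecasts and operational constraints matter, model predictive control, as discussed in the article, would replace this simple loop.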
Gerrit de Rooij
Henry Darcy was an engineer who built the drinking water supply system of the French city of Dijon in the mid-19th century. In doing so, he developed an interest in the flow of water through sands and, together with Charles Ritter, experimented (in a hospital, for unclear reasons) with water flow in a vertical cylinder filled with different sands to determine the laws of flow of water through sand. The results were published in an appendix to Darcy’s report on his work on Dijon’s water supply. Darcy and Ritter installed mercury manometers at the bottom and near the top of the cylinder, and they observed that the water flux density through the sand was proportional to the difference between the mercury levels. With mercury levels converted to equivalent water levels and recast in differential form, this relationship is known as Darcy’s Law, and to this day it is the cornerstone of the theory of water flow in porous media. The development of groundwater hydrology and soil water hydrology that originated with Darcy’s Law is tracked through seminal contributions over the past 160 years. Darcy’s Law was quickly adopted for calculating groundwater flow, which blossomed after the introduction of a few very useful simplifying assumptions that permitted a host of analytical solutions to groundwater problems, including flows toward pumped drinking water wells and toward drain tubes. Computers have made possible ever more advanced numerical solutions based on Darcy’s Law, allowing tailor-made computations for specific areas. In soil hydrology, Darcy’s Law itself required modification to facilitate its application for different soil water contents. The understanding of the relationship between the potential energy of soil water and the soil water content emerged early in the 20th century. The mathematical formalization of the consequences for the flow rate and storage change of soil water was established in the 1930s, but only after the 1970s did computers become powerful enough to tackle unsaturated flows head-on. In combination with crop growth models, this allowed Darcy-based models to aid in the setup of irrigation practices and to optimize drainage designs. In the past decades, spatial variation of the hydraulic properties of aquifers and soils has been shown to affect the transfer of solutes from soils to groundwater and from groundwater to surface water. More recently, regional- and continental-scale hydrology has been required to quantify the role of the terrestrial hydrological cycle in relation to climate change. Both developments may open new areas of application, or show the limits of applicability, of a law derived from a few experiments on a cylinder filled with sand in the 1850s.
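In the differential form referred to above, Darcy’s Law is commonly written as

$$ q = -K \frac{dh}{dl} $$

where $q$ is the water flux density (m/s), $K$ the hydraulic conductivity (m/s), $h$ the hydraulic head (m), and $l$ the distance along the flow path (m). The modification for unsaturated soils mentioned above makes the conductivity a function of the volumetric water content $\theta$, giving the Buckingham–Darcy form $q = -K(\theta)\,dH/dl$, with $H$ the total head including the matric potential.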
Stephen Foster and John Chilton
This chapter first provides a concise account of the basic principles and concepts underlying scientific groundwater management. It then summarises the policy approach to developing an adaptive scheme of management and protection for groundwater resources, appropriately integrated across relevant sectors, and assesses the governance needs, roles, and planning requirements to implement the selected policy approach.
Amy W. Ando and Noelwah R. Netusil
Green stormwater infrastructure (GSI), a decentralized approach to managing stormwater that uses natural systems or engineered systems mimicking the natural environment, is being adopted by cities around the world to manage stormwater runoff. The primary benefits of such systems include reduced flooding and improved water quality. GSI projects, such as green roofs, urban tree planting, rain gardens and bioswales, rain barrels, and green streets, may also generate cobenefits such as aesthetic improvement, reduced net CO2 emissions, reduced air pollution, and habitat improvement. GSI adoption has been fueled by the promise of environmental benefits along with evidence that GSI is a cost-effective stormwater management strategy, and economists have developed methods to quantify those benefits to support GSI planning and policy efforts. A body of multidisciplinary research has quantified significant net benefits from GSI, with particularly robust evidence regarding green roofs, urban trees, and green streets. While many GSI projects generate positive benefits through ecosystem service provision, those benefits can vary with the location and the type and scale of the GSI installation. Previous work reveals several pitfalls in estimating the benefits of GSI that scientists should avoid, such as double counting values, counting transfer payments as benefits, and using biased values for benefits such as avoided carbon emissions. Important gaps remain in current knowledge regarding the benefits of GSI, including benefit estimates for some types of GSI elements and outcomes, understanding of how GSI benefits persist over time, and the distribution of GSI benefits among different groups in urban areas.
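As a toy example of the benefit quantification mentioned above, the Python sketch below discounts assumed annual benefit streams from a hypothetical GSI project to a present value; every category, dollar figure, and the discount rate is a placeholder assumption, not an estimate from the literature.

```python
# Illustrative sketch: present value of a hypothetical GSI project's benefits.
# All categories, values, and the discount rate are placeholder assumptions.

ANNUAL_BENEFITS = {                  # assumed annual benefits (USD/year)
    "avoided_flood_damage": 50_000,
    "water_quality": 20_000,
    "air_quality_and_co2": 8_000,
    "aesthetics_and_habitat": 12_000,
}

def present_value(annual_benefits, years=30, rate=0.03):
    """Discount the summed benefit stream over the project lifetime.

    Keeping categories separate helps avoid double counting, e.g. adding a
    property-value uplift that already capitalizes the aesthetic benefit
    would count that benefit twice.
    """
    total_per_year = sum(annual_benefits.values())
    return sum(total_per_year / (1.0 + rate) ** t for t in range(1, years + 1))

if __name__ == "__main__":
    pv = present_value(ANNUAL_BENEFITS)
    print(f"Present value over 30 years at 3%: ${pv:,.0f}")
```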