

Market Failures, the Environment, and Human Health  

Karyn Morrissey

Knowledge of the important role that the environment plays in determining human health predates the modern public health era. However, the tendency to see health, disease, and their determinants as attributes of individuals rather than characteristics of communities meant that the role of the environment in human health was seldom accorded sufficient importance during much of the 20th century. Instead, research began to focus on specific risk factors that correlated with the diseases of greatest concern, i.e., non-communicable diseases such as cardiovascular disease, asthma, and diabetes. Many of these risk factors (e.g., smoking, alcohol consumption, and diet) were aspects of individual lifestyle and behaviors, freely chosen by the individual. Within this individual-centric framework of human health, the standard economic model for human health became primarily the Grossman model of health and health care demand. In this model, an individual’s health stock may be increased by investing in health (by consuming health services, for example) or decreased by endogenous (age) or exogenous (smoking) individual factors. Within this model, individuals use their available resources, their budget, to purchase goods and services that either increase or decrease their health stock. Grossman’s model provides a consumption-based approach to human health, where individuals purchase the goods and services required to improve their individual health in the marketplace. Grossman’s model of health assumes that the goods and services required to optimize good health can be purchased through market-based interactions and that these goods and services are optimally priced, that is, that the value of the goods and services is reflected in their price. In reality, many types of goods and services that are good for human health are not available to purchase, or if they are available they are undervalued in the free market. 
Across the environmental and health literature, these goods and services are, today, broadly referred to as “ecosystem services for human health.” However, the quasi-public good nature of ecosystem services for human health means that the private market will generate a suboptimal environment for both individual and public health outcomes. In the face of continued austerity and scarce public resources, understanding the role of the environment in human health may help to alleviate future health care demand by decreasing the environmental risks (or increasing the environmental benefits) associated with health outcomes. However, taking advantage of the role that the environment plays in human health requires a fundamental reorientation of public health policy and spending to include environmental considerations.
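The health-stock dynamics of the Grossman model described above can be sketched numerically. The snippet below is a minimal, illustrative rendering only: it assumes a constant budget and price and an age-increasing depreciation rate, and every parameter value is invented for the example rather than calibrated to any data.

```python
# Minimal sketch of Grossman-style health-capital dynamics (illustrative
# parameters, not a calibrated model): the health stock depreciates with
# age and is replenished by investment purchased out of a fixed budget.

def simulate_health(h0, budget, price, delta0, delta_growth, periods):
    """Simulate H_{t+1} = H_t * (1 - delta_t) + I_t, where investment
    I_t is what the budget buys at the given price, and the assumed
    depreciation rate delta_t rises with age."""
    h = h0
    path = [h]
    for t in range(periods):
        delta_t = delta0 * (1 + delta_growth) ** t  # ageing raises depreciation
        investment = budget / price                  # health services purchased
        h = max(0.0, h * (1 - delta_t) + investment)
        path.append(h)
    return path

path = simulate_health(h0=100.0, budget=2.0, price=1.0,
                       delta0=0.05, delta_growth=0.02, periods=10)
```

Under these assumptions the stock erodes over time unless investment keeps pace with depreciation, which is the consumption-based logic the model formalizes.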


Material and Energy Flow Analysis  

Vincent Moreau and Guillaume Massard

The concept of metabolism takes root in biology and ecology as a systematic way to account for material flows in organisms and ecosystems. Early applications of the concept attempted to quantify the amount of water and food the human body processes to live and sustain itself. Similarly, ecologists have long studied the metabolism of critical substances and nutrients in ecological succession towards climax. With industrialization, the material and energy requirements of modern economic activities have grown exponentially, together with emissions to the air, water, and soil. From an analogy with ecosystems, the concept of metabolism grew into an analytical methodology for economic systems. Research in the field of material flow analysis has developed approaches to modeling economic systems by assessing the stocks and flows of substances and materials for systems defined in space and time. Material flow analysis encompasses different methods: industrial and urban metabolism, input–output analysis, economy-wide material flow accounting, socioeconomic metabolism, and more recently material flow cost accounting. Each method has specific scales, reference substances such as metals, and indicators such as concentration. A material flow analysis study usually consists of four consecutive steps: (a) system definition, (b) data acquisition, (c) calculation, and (d) interpretation. The law of conservation of mass underlies every application, which implies that all material flows, as well as stocks, must be accounted for. In the early 21st century, material depletion, accumulation, and recycling are well-established cases of material flow analysis. Diagnostics and forecasts, as well as historical or backcast analyses, are ideally performed in a material flow analysis to identify shifts in material consumption for product life cycles or physical accounting and to evaluate the material and energy performance of specific systems. 
In practice, material flow analysis supports policy and decision making in urban planning, energy planning, economic and environmental performance, the development of industrial symbiosis and eco-industrial parks, the closing of material loops and the circular economy, pollution remediation and control, and material and energy supply security. Although material flow analysis assesses the amount and fate of materials and energy rather than their environmental or human health impacts, a tacit assumption states that reduced material throughputs limit such impacts.
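The conservation-of-mass bookkeeping that underlies every material flow analysis can be illustrated with a toy balance check: for a system defined in space and time, total inflows must equal total outflows plus the change in stocks. The flow names and quantities below are assumptions invented for the sketch, not data from any real system.

```python
# Toy economy-wide material balance: conservation of mass requires
# inflows = outflows + stock change. A nonzero residual signals an
# unaccounted flow or stock. All names and numbers are illustrative.

def mass_balance(inflows, outflows, stock_start, stock_end, tol=1e-6):
    """Check that all flows and the stock change are accounted for.
    Returns (balanced?, residual)."""
    delta_stock = stock_end - stock_start
    residual = sum(inflows.values()) - sum(outflows.values()) - delta_stock
    return abs(residual) <= tol, residual

inflows = {"domestic extraction": 120.0, "imports": 30.0}        # tonnes/yr
outflows = {"exports": 25.0, "emissions": 15.0, "waste": 60.0}   # tonnes/yr
balanced, residual = mass_balance(inflows, outflows,
                                  stock_start=500.0, stock_end=550.0)
```

In a real study this check is run per substance and per process, and a large residual sends the analyst back to the data-acquisition step.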


Measuring Soil Loss and Subsequent Nutrient and Organic Matter Loss on Farmland  

Vincenzo Bagarello and Vito Ferro

Field plots are often used to obtain experimental data (soil loss values corresponding to different climate, soil, topographic, crop, and management conditions) for predicting and evaluating soil erosion and sediment yield. Plots are used to study physical phenomena affecting soil detachment and transport, and their sizes are determined according to the experimental objectives and the type of data to be obtained. Studies on interrill erosion due to rainfall impact and overland flow need small plot width (2–3 m) and length (< 10 m), while studies on rill erosion require plot lengths greater than 6–13 m. Sites must be selected to represent the range of uniform slopes prevailing in the farming area under consideration. Plots equipped to study interrill and rill erosion, like those used for developing the Universal Soil Loss Equation (USLE), measure erosion from the top of a slope where runoff begins; they must be wide enough to minimize the edge or border effects and long enough to develop downslope rills. Experimental stations generally include bounded runoff plots of known area, slope steepness, slope length, and soil type, from which both runoff and soil loss can be monitored. Once the boundaries defining the plot area are fixed, collecting equipment must be installed to catch the plot runoff. A conveyance system (H-flume or pipe) carries total runoff to a unit sampling the sediment and a storage system, such as a sequence of tanks, in which sediments are accumulated. Simple methods have been developed for estimating the mean sediment concentration of all runoff stored in a tank by using the vertical concentration profile measured on a side of the tank. When a large number of plots are equipped, the sampling of suspension and consequent oven-drying in the laboratory are highly time-consuming. For this purpose, a sampler that can extract a column of suspension, extending from the free surface to the bottom of the tank, can be used. 
For large plots, or where runoff volumes are high, a divisor that splits the flow into equal parts and passes one part into a storage tank as a sample can be used. Examples of these devices include the Geib multislot divisor and the Coshocton wheel. Specific equipment and procedures must be employed to detect the soil removed by rill and gully erosion. Because most of the soil organic matter is found close to the soil surface, erosion significantly decreases soil organic matter content. Several studies have demonstrated that the soil removed by erosion is 1.3–5 times richer in organic matter than the remaining soil. Soil organic matter facilitates the formation of soil aggregates, increases soil porosity, and improves soil structure, facilitating water infiltration. The removal of organic matter content can influence soil infiltration, soil structure, and soil erodibility.
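The bookkeeping behind a divisor-equipped runoff plot can be sketched briefly. Assuming a multislot divisor that passes 1/n of the flow to a storage tank, plot soil loss is recovered by scaling the sampled sediment mass back up by the number of slots and normalizing by plot area; every figure below is illustrative, not taken from any study.

```python
# Sketch of soil-loss accounting for a runoff plot sampled through a
# multislot divisor (assumed setup): the tank receives 1/slots of the
# runoff, so the sampled sediment mass is scaled back up and then
# normalized by plot area. All numbers are illustrative.

def plot_soil_loss(tank_volume_l, sediment_conc_g_per_l,
                   divisor_slots, plot_area_m2):
    """Return soil loss in t/ha estimated from the sampled fraction."""
    sampled_soil_g = tank_volume_l * sediment_conc_g_per_l
    total_soil_g = sampled_soil_g * divisor_slots   # undo the flow split
    tonnes = total_soil_g / 1e6                     # g -> t
    hectares = plot_area_m2 / 1e4                   # m^2 -> ha
    return tonnes / hectares

# Assumed event: 200 L in the tank at 12 g/L, 9-slot divisor, 40 m^2 plot
loss = plot_soil_loss(tank_volume_l=200.0, sediment_conc_g_per_l=12.0,
                      divisor_slots=9, plot_area_m2=40.0)
```

The mean concentration fed into this calculation is what the tank-profile or column-sampler methods described above are designed to estimate.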


Mineral Dust Cycle  

Irina Sokolik

There is scientific consensus that human activities have been altering atmospheric composition since pre-industrial times and are a key driver of global climate and environmental change (IPCC, 2013). It is a pressing priority to understand the Earth system response to atmospheric aerosol input from diverse sources, which so far remains one of the largest uncertainties in climate studies (Boucher et al., 2014; Forster et al., 2007). As the second most abundant component (in terms of mass) of atmospheric aerosols, mineral dust exerts tremendous impacts on Earth’s climate and environment through various interaction and feedback processes. Dust can also have beneficial effects where it deposits: Central and South American rain forests get most of their mineral nutrients from the Sahara; iron-poor ocean regions get iron; and dust in Hawaii increases plantain growth. In northern China as well as the midwestern United States, ancient dust storm deposits known as loess are highly fertile soils, but they are also a significant source of contemporary dust storms when soil-securing vegetation is disturbed. Accurate assessments of dust emission are of great importance to improvements in quantifying the diverse dust impacts.


Mining, Ecological Engineering, and Metals Extraction for the 21st Century  

Margarete Kalin, William N. Wheeler, Michael P. Sudbury, and Bryn Harris

The first treatise on mining and extractive metallurgy, published by Georgius Agricola in 1556, was also the first to highlight the destructive environmental side effects of mining and metals extraction, namely dead fish and poisoned water. These effects, unfortunately, are still with us. Since 1556, mining methods, knowledge of metal extraction, and chemical and microbial processes leading to the environmental deterioration have grown tremendously. Man’s insatiable appetite for metals and energy has resulted in mines vastly larger than those envisioned in 1556, compounding the deterioration. The annual amount of mined ore and waste rock is estimated to be 20 billion tons, covering 1,000 km². The industry also annually consumes 80 km³ of freshwater, which becomes contaminated. Since metals are essential in modern society, cost-effective, sustainable remediation measures need to be developed. Engineered covers and dams enclose wastes and slow the weathering process, but, with time, become permeable. Neutralization of acid mine drainage produces metal-laden sludges that, in time, release the metals again. These measures are stopgaps at best, and are not sustainable. Focus should be on inhibiting or reducing the weathering rate, recycling, and curtailing water usage. The extraction of only the principal economic mineral or metal generally drives the economics, with scant attention being paid to other potential commodities contained in the deposit. Technology exists for recovering more valuable products and enhancing the project economics, resulting in a reduction of wastes and water consumption of up to 80% compared to “conventional processing.” Implementation of such improvements requires a drastic change, a paradigm shift, in the way that the industry approaches metals extraction. 
Combining new extraction approaches, more efficient water usage, and ecological engineering methods to deal with wastes will increase the sustainability of the industry and reduce the pressure on water and land resources. From an ecological perspective, waste rock and tailings need to be thought of as primitive ecosystems. These habitats are populated by heat-, acid-, and saline-loving microbes (extremophiles). Ecological engineering utilizes geomicrobiological, physical, and chemical processes to change the mineral surface to encourage biofilm growth (the microbial growth form) within wastes by enhancing the growth of oxygen-consuming microbes. This reduces oxygen available for oxidation, leading to improved drainage quality. At the water–sediment interface, microbes assist in the neutralization of acid water (Acid Reduction Using Microbiology). To remove metals from the waste water column, indigenous biota are promoted (Biological Polishing) with inorganic particulate matter as flocculation agents. This ecological approach generates organic matter, which upon death settles with the adsorbed metals to the sediment. Once the metals reach the deeper, reducing zones of the sediments, microbial biomineralization processes convert the metals to relatively stable secondary minerals, forming biogenic ores for future generations. The mining industry has developed and thrived in an age when resources, space, and water appeared limitless. With the rise of the Anthropocene and widely acknowledged global land and water shortages, the mining industry must become more sustainable. Not only is a paradigm shift in thinking needed, but also the will to implement such a shift is required for the future of the industry.


The Mirage of Supply-Side Development: The Hydraulic Mission and the Politics of Agriculture and Water in the Nile Basin  

Harry Verhoeven

In an era of calamitous climate change, entrenched malnutrition, and the chronic exclusion of hundreds of millions of people from access to affordable energy, food, and water, evaluating the policy options of African states to address these challenges matters more than ever. In the Nile Basin especially, a region notorious for its poverty, violent instability, and lack of industrialization, states have invested their scarce resources and political capital in a “hydraulic mission” in the belief that they can engineer their way out of international marginalization. Incumbents have bet on large-scale hydro-infrastructure and capital-intensive agriculture to boost food production, strengthen energy security, and deal with water scarcity, despite the woeful track record of such a supply-side approach to development. While ruling elites in the Nile Basin have portrayed the hydraulic mission as the natural way of developing the region’s resources—supposedly validated by the historical achievements of Pharaonic civilization and its mastery over its tough environment—this is a modern fiction, spun to justify politically expedient projects and the exclusion of broad layers of the population. In the last two hundred years, the hydraulic mission has made three major political contributions that underline its strategic usefulness to centralizing elites: it has enabled the building of modern states and a growing bureaucratic apparatus around a riverain political economy; it has generated new national narratives that have allowed unpopular regimes to rebrand themselves as protectors of the nation; and it has facilitated the forging of external alliances, linking the resources and elites of Egypt, Ethiopia, and Sudan to global markets and centers of influence. Mega-dams, huge canals and irrigation for export are fundamentally about power and the powerful—and the privileging of some interests and social formations over others. 
The one-sided focus on increasing supply—based on the false premise that this will allow ordinary people to access more food and water—transfers control over livelihoods from one (broad) group of people to another (much narrower) one by legitimizing top-down interventionism and dislocation. What presents itself as a strategy of water resources and agricultural development is really about (re)constructing hierarchies between people. The mirage of supply-side development continues to seduce elites at the helm of the state because it keeps them in power and others out of it.


Modeling the Impact of Environment on Infectious Diseases  

Giovanni Lo Iacono and Gordon L. Nichols

The introduction of pasteurization, antibiotics, and vaccinations, as well as improved sanitation, hygiene, and education, was critical in reducing the burden of infectious diseases and associated mortality during the 19th and 20th centuries and was driven by an improved understanding of disease transmission. This advance has led to longer average lifespans and the expectation that, at least in the developed world, infectious diseases were a problem of the past. Unfortunately, this is not the case; infectious diseases still have a significant impact on morbidity and mortality worldwide. Moreover, the world is witnessing the emergence of new pathogens, the reemergence of old ones, and the spread of antibiotic resistance. Furthermore, effective control of infectious diseases is challenged by many factors, including natural disasters, extreme weather, poverty, international trade and travel, mass and seasonal migration, rural–urban encroachment, human demographics and behavior, deforestation and replacement with farming, and climate change. The importance of environmental factors as drivers of disease has been hypothesized since ancient times; and until the late 19th century, miasma theory (i.e., the belief that diseases were caused by evil exhalations from unhealthy environments originating from decaying organic matter) was a dominant scientific paradigm. This thinking changed with the microbiology era, when scientists correctly identified microscopic living organisms as the pathogenic agents and developed evidence for transmission routes. Still, many complex patterns of diseases cannot be explained by the microbiological argument alone, and it is becoming increasingly clear that an understanding of the ecology of the pathogen, host, and potential vectors is required. There is increasing evidence that the environment, including climate, can affect pathogen abundance, survival, and virulence, as well as host susceptibility to infection. 
Measuring and predicting the impact of the environment on infectious diseases, however, can be extremely challenging. Mathematical modeling is a powerful tool to elucidate the mechanisms linking environmental factors and infectious diseases, and to disentangle their individual effects. A common mathematical approach used in epidemiology consists of partitioning the population of interest into relevant epidemiological compartments, typically individuals unexposed to the disease (susceptible), infected individuals, and individuals who have cleared the infection and become immune (recovered). The typical task is to model the transitions from one compartment to another and to estimate how these populations change in time. There are different ways to incorporate the impact of the environment into this class of models. Two interesting examples are water-borne diseases and vector-borne diseases. For water-borne diseases, the environment can be represented by an additional compartment describing the dynamics of the pathogen population in the environment—for example, by modeling the concentration of bacteria in a water reservoir (with potential dependence on temperature, pH, etc.). For vector-borne diseases, the impact of the environment can be incorporated by using explicit relationships between temperature and key vector parameters (such as mortality, developmental rates, biting rate, as well as the time required for the development of the pathogen in the vector). Despite tremendous advances, understanding and mapping the impact of the environment on infectious diseases is still a work in progress. Some fundamental aspects, for instance, the impact of biodiversity on disease prevalence, are still a matter of (occasionally fierce) debate. There are other important challenges ahead for the research exploring the potential connections between infectious diseases and the environment. 
Examples of these challenges are studying the evolution of pathogens in response to climate and other environmental changes; disentangling multiple transmission pathways and the associated temporal lags; developing quantitative frameworks to study the potential effect on infectious diseases due to anthropogenic climate change; and investigating the effect of seasonality. Ultimately, there is an increasing need to develop models for a truly “One Health” approach, that is, an integrated, holistic approach to understand intersections between disease dynamics, environmental drivers, economic systems, and veterinary, ecological, and public health responses.
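The water-borne case described above can be sketched as a compartmental model: a standard susceptible-infected-recovered system extended with a compartment W for pathogen concentration in a water reservoir, whose decay rate is given an assumed temperature dependence. All rates and the functional form of that dependence are illustrative choices for the sketch, not estimates from any study.

```python
# Illustrative S-I-R-W model for a water-borne disease: infection occurs
# through contact with contaminated water (W), infected individuals shed
# pathogen into the reservoir, and pathogen die-off is assumed to speed
# up with water temperature. All parameter values are invented.

def siwr_step(s, i, r, w, dt, beta_w=0.5, gamma=0.1,
              shed=0.2, temp_c=20.0):
    """One Euler step; s, i, r are population fractions, w is pathogen
    concentration in arbitrary units."""
    decay = 0.05 * (1 + 0.02 * (temp_c - 20.0))  # assumed temperature effect
    new_inf = beta_w * s * w                     # infection via water contact
    ds = -new_inf
    di = new_inf - gamma * i
    dr = gamma * i
    dw = shed * i - decay * w                    # shedding in, die-off out
    return s + dt * ds, i + dt * di, r + dt * dr, w + dt * dw

s, i, r, w = 0.99, 0.01, 0.0, 0.0
for _ in range(1000):
    s, i, r, w = siwr_step(s, i, r, w, dt=0.1)
```

The vector-borne case mentioned in the text would instead make the transmission parameters themselves (biting rate, vector mortality, extrinsic incubation) functions of temperature.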


Monitoring and Modeling of Outdoor Air Pollution  

Stefan Reis

Air pollution has been a major threat to human health, ecosystems, and agricultural crops ever since the onset of widespread use of fossil fuel combustion and emissions of harmful substances into ambient air. As a basis for the development, implementation, and compliance assessment of air pollution control policies, monitoring networks for priority air pollutants were established, primarily for regulatory purposes. With increasing understanding of emission sources and the release and environmental fate of chemicals and toxic substances into ambient air, as well as atmospheric transport and chemical conversion processes, increasingly complex air pollution models have entered the scene. Today, highly accurate equipment is available to measure trace gases and aerosols in the atmosphere. In addition, sophisticated atmospheric chemistry transport models—which are routinely compared with and validated against measurements—are used to model dispersion and chemical processes affecting the composition of the atmosphere, and the resulting ambient concentrations of harmful pollutants. The models also provide methods to quantify the deposition of pollutants, such as acidifying and eutrophying substances, in vegetation, soils, and freshwater ecosystems. This article provides a general overview of the underlying concepts and key features of monitoring and modeling systems for outdoor air pollution.
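A much simpler relative of the chemistry-transport models described above, the textbook Gaussian plume formula, illustrates how dispersion modeling links an emission source term to ambient concentrations. The source strength, wind speed, and dispersion parameters below are illustrative assumptions, and real regulatory models add chemistry, deposition, and meteorology-dependent dispersion coefficients.

```python
# Back-of-envelope Gaussian plume for a single point source with ground
# reflection (image-source term). Illustrative inputs only; this is a
# teaching formula, not the full chemistry-transport machinery.
import math

def plume_concentration(q_g_s, u_m_s, sigma_y, sigma_z, y, z, h):
    """Concentration (g/m^3) at crosswind offset y and height z,
    downwind of a stack of effective height h."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q_g_s / (2 * math.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

# Centerline ground-level concentration for an assumed 50 g/s source
c = plume_concentration(q_g_s=50.0, u_m_s=5.0, sigma_y=80.0,
                        sigma_z=40.0, y=0.0, z=0.0, h=50.0)
```

Monitoring networks supply the measurements against which even simple parameterizations like this one are checked.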


Multi-Objective Robust Planning Tools  

Jazmin Zatarain Salazar, Andrea Castelletti, and Matteo Giuliani

Shared water resource systems spark a number of conflicts related to their multisectoral, regional, and intergenerational use. They are also vulnerable to a myriad of uncertainties stemming from changes in the hydrology, population demands, and climate change. Planning and management under these conditions are extremely challenging. Fortunately, our capability to approach these problems has evolved dramatically over the last few decades. Increased computational power enables the testing of multiple hypotheses and expedites the results across a range of planning alternatives. Advances in flexible multi-objective optimization tools facilitate the analyses of many competing interests. Further, major shifts in the way uncertainties are treated allow analysts to characterize candidate planning alternatives by their ability to fail or succeed instead of relying on fallible predictions. Embracing the fact that there are indeterminate uncertainties whose probabilistic descriptions are unknown, and acknowledging relationships whose actions and outcomes are not well-characterized in planning problems, have improved our ability to perform diligent analysis. Multi-objective robust planning of water systems emerged in response to the need to support planning and management decisions that are better prepared for unforeseen future conditions and that can be adapted to changes in assumptions. A suite of robustness frameworks has emerged to address planning and management problems in conditions of deep uncertainty, that is, events that are not readily identified or about which we know so little that their likelihood of occurrence cannot be described. Lingering differences remain within existing frameworks. These differences are manifested in the way in which alternative plans are specified, the views about how the future will unfold, and how the fitness of candidate planning strategies is assessed. 
Differences in the experimental design can yield diverging conclusions about the robustness and vulnerabilities of a system. Nonetheless, the means to ask a suite of questions and perform a more ambitious analysis is available in the early 21st century. Future challenges will entail untangling different conceptions about uncertainty, defining what aspects of the system are important and to whom, and how these values and assumptions will change over time.
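One common robustness metric, minimax regret, can be sketched briefly: each candidate plan is scored across an ensemble of plausible futures rather than a single forecast, and the plan whose worst-case regret (the gap to the best achievable performance in each future) is smallest is preferred. The plans, futures, and performance numbers below are invented for illustration.

```python
# Minimax-regret screen over an ensemble of futures. Plans and scores
# are illustrative; higher benefit is better.

def minimax_regret(performance):
    """performance[plan] = list of benefits, one per future.
    Returns (preferred plan, worst-case regret per plan)."""
    n_futures = len(next(iter(performance.values())))
    best_per_future = [max(scores[f] for scores in performance.values())
                       for f in range(n_futures)]
    worst_regret = {
        plan: max(best_per_future[f] - scores[f] for f in range(n_futures))
        for plan, scores in performance.items()
    }
    return min(worst_regret, key=worst_regret.get), worst_regret

performance = {
    "big reservoir":     [90, 30, 10],  # shines in wet futures, fails in dry
    "demand management":  [60, 55, 50],  # steady across futures
    "do nothing":        [70, 20, 10],
}
choice, regrets = minimax_regret(performance)
```

Swapping this criterion for, say, a satisficing threshold can change which plan is deemed robust, which is one way experimental design drives diverging conclusions.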


National Parks in Developed Countries  

Leslie Richardson and Bruce Peacock

Economics plays an important role not only in the management of national parks in developed countries, but also in demonstrating the contribution of these areas to societal well-being. The beneficial effect of park tourism on jobs and economic activity in communities near these protected areas has at times been a factor in their establishment. These economic impacts continue to be highlighted as a way to demonstrate the benefit and return on investment of national parks to local economies. However, the economic values supported by national parks extend far beyond local economic benefits. Parks provide unique recreation opportunities, health benefits, preservation of wildlife and habitat, and a wide range of ecosystem services that the public assigns an economic value to. In addition, value is derived from the existence of national parks and their preservation for future generations. These nonmarket benefits can be difficult to quantify, but they are essential for understanding and communicating the economic importance of parks. Economic methods used to estimate these values have been refined and tested for nearly seven decades, and they have come a long way in helping to elucidate the extent of the nonmarket benefits of protected areas. In many developed countries, national parks have regulations and policies that outline a framework for the consideration of economic values in decision-making contexts. For instance, large oil spills in the United States, such as the Exxon Valdez spill of 1989 and the Deepwater Horizon spill of 2010, highlighted the need to better understand public values for affected park resources, leading to the extensive use of nonmarket values in natural resource damage assessments. Of course, rules and enforcement issues vary widely across countries, and the potential for economics to inform the day-to-day operations of national parks is much broader than what is currently outlined in such policies. 
While economics is only one piece of the puzzle in managing national parks, it provides a valuable tool for evaluating resource tradeoffs and for incorporating public preferences into the decision-making process, leading to greater transparency and assurance that national parks are managed for the benefit of society. Understanding the full extent of the economic benefits supported by national parks helps to further the mission of these protected areas in developed countries.


Natural Environments, Health, and Well-Being  

Matilda van den Bosch

Human beings are part of natural ecosystems and depend on them for their survival. In a rapidly changing environment and with increasing urbanization, this dependence is challenged. Natural environments affect human health and well-being both directly and indirectly. Urban green and blue areas provide opportunities for stress recovery and physical activity. They offer spaces for social interactions in the neighborhood and places for children’s play. Chronic stress, physical inactivity, and lack of social cohesion are three major risk factors for noncommunicable diseases, and therefore abundant urban greenery is an important asset for health promotion. Through numerous ecosystem services, natural environments play a fundamental role in protecting health. Various populations depend on nature for basic material, such as fresh water, wood, fuel, and nutritious food. Biodiverse natural areas are also necessary for regulating the environment and for mitigating and adapting to climate change. For example, tree canopy cover can reduce the urban heat island effect substantially, preventing excess morbidity during heat waves. This natural heat-reducing effect also lessens the need for air conditioning systems and as a consequence decreases energy spending. Urban trees also support storm-water management, preventing flooding and related health issues. Air pollution is a major threat to population health. Urban trees sequester pollutants and, even though the effect may be relatively small, given the severity of the problem it may still have some public-health implications. The evidence around the effects of natural environments on health and well-being is steadily increasing. 
Several pathways and mechanisms are suggested, such as health services through functional ecosystems; early-life exposure to biodiverse microbiota, which is important for immune-system development; and sensory exposure, which has a direct neurobiological impact supporting cognitive development and stress resilience. Evidence supporting several of these pathways shows lower mortality rates, lower prevalence of cardiovascular and respiratory diseases, healthier pregnancy outcomes, reduced health inequalities, and improved mental health in urban areas with greater amounts of green and blue space. Altogether, the interactions between healthy natural environments and healthy people are multiple and complex, and they require interdisciplinary attention and action for a full understanding and the resilient development of both nature and human beings.


Nomadism  

Philip Carl Salzman

Nomadism is a technique of population movement used to accomplish a variety of goals. It is used for primary production when the resources to be tapped are distributed thinly over a wide space, or are located in different places in a large region. Commonly, nomadism is a technique used in a spatially extensive adaptation. Pastoralists raising domestic animals on natural pasture move from grazed areas to areas with fresh pasture, and from dry areas to those with water. Nomadism follows regular patterns where the resources tapped are reliable and thus predictable. This is common in macro-environmental adaptations to factors such as seasons and altitude. Some pastoralists have mountain adaptations, migrating to high altitudes in summer and low altitudes in winter, an adaptation called transhumance in Europe. Nomadic patterns are more irregular when rainfall patterns, and thus pasturage, are erratic and unpredictable, as is common in desert areas with low rainfall. Among some pastoral peoples, all of the households in the community move together. Among other pastoral peoples, a sector of the population is nomadic; young and/or mature men migrate with the livestock, while women, children, and elders remain in a stationary home settlement. This is also the pattern in European transhumance. Many pastoral peoples produce primarily for their own subsistence; it is common that they have multi-resource or mixed economies, engaging also in hunting and gathering, horticulture, agriculture, and arboriculture. Economic activities are not limited to primary production; patterns of predation, including raiding and extortion, against other pastoralists, farmers, and traders are widespread. Other pastoral peoples are heavily market-oriented, producing for sale, or have symbiotic relations with hunters or cultivators; it is common that they are more specialized in their production. But pastoralists can be found at all points on a continuum between subsistence- and market-oriented production.


Nutrient Pollution and Wastewater Treatment Systems  

Archis R. Ambulkar

Since the industrial revolution, societies across the globe have observed significant urbanization and population growth. Newer technologies, industries, and manufacturing plants have evolved over the period to develop sophisticated infrastructures and amenities for mankind. To achieve this, communities have utilized and exploited natural resources, resulting in sustained environmental degradation and pollution. Among various adverse ecological effects, nutrient contamination in water is posing serious problems for water bodies worldwide. Nitrogen and phosphorus are the basic constituents for the growth and reproduction of living organisms and occur naturally in the soil, air, and water. However, human activities are affecting their natural cycles and causing excessive dumping into the surface and groundwater systems. Higher concentrations of nitrogen- and phosphorus-based nutrients in water resources lead to eutrophication, reduced sunlight penetration, lower dissolved oxygen levels, altered rates of plant growth and reproduction, and overall deterioration of water quality. Economically, this pollution can impact the fishing industry, recreational businesses, property values, and tourism. Also, using nutrient-polluted lakes or rivers as potable water sources may result in excess nitrates in drinking water, production of disinfection by-products, and associated health effects. Nutrient contamination in water commonly originates from point and non-point sources. Point sources are specific discharge locations, like wastewater treatment plants (WWTPs), industries, and municipal waste systems; whereas non-point sources are diffuse dischargers, like agricultural lands and stormwater runoff. Compared to non-point sources, point sources are easier to identify, regulate, and treat. WWTPs receive sewage from domestic, business, and industrial settings. 
With growing pollution concerns, nutrient removal and recovery at treatment plants are gaining significant attention. Newer chemical and biological nutrient removal processes are emerging to treat wastewater. Nitrogen removal mainly involves nitrification-denitrification processes, whereas phosphorus removal includes biological uptake, chemical precipitation, or filtration. With regard to non-point sources, authorities are encouraging best management practices to control pollution loads to waterways. Governments are opting for novel strategies like source nutrient reduction schemes, bioremediation processes, stringent effluent limits, and nutrient trading programs. Source nutrient reduction strategies, such as discouraging or banning the use of phosphorus-rich detergents and selective chemicals, industrial pretreatment programs, and stormwater management programs, can be effective in reducing nutrient loads to WWTPs. Bioremediation techniques such as riparian areas, natural and constructed wetlands, and treatment ponds can capture nutrients from agricultural lands or sewage treatment plant effluents. Nutrient trading programs allow the purchase or sale of equivalent environmental credits between point and non-point nutrient dischargers to manage overall nutrient discharges in watersheds at lower cost. Nutrient pollution impacts are quite evident and documented in many parts of the world. Governments and environmental organizations are undertaking several waterway remediation projects to improve water quality and restore aquatic ecosystems. Shrinking freshwater reserves and rising water demands are compelling communities to make efficient use of available water resources. With smarter choices and useful strategies, nutrient pollution in water can be contained to a reasonable extent. As responsible members of the community, it is important for us to understand this key environmental issue as well as the current and future needs to alleviate this problem.


Oats and Other Forage Crops  

Daren Redfearn

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article. Oats and the other small grains have been “rediscovered” with the drive towards intensifying agricultural production, integrating crops and livestock into diversified systems, and increasing environmental stewardship. Globally, oats and other winter annual small grains, such as wheat, cereal rye, triticale, and barley, have been used primarily for grain production. The secondary market following grain production has been restricted to straw, used mainly as livestock bedding. In regions where livestock are economically important, oats and the other annual small grain crops can be used as a grazed forage or fodder crop, hay, or silage. There are several characteristics that make oats and other small grains suitable for multiple agricultural uses. All the small grains are fairly easy to establish, have rapid growth, can be productive, and have a high nutritional value for livestock. Recent improvements in cultivar development have allowed oats and wheat to be grown across a broader range of stressful environmental conditions. Similarly, cultivar development in oats and wheat has improved grazing tolerance, which is important in dual-purpose systems that emphasize both grazing and grain production. On a worldwide scale, oats and other annual small grains are economically and environmentally important forage crops, especially when used as focused components within intensified agricultural systems. Challenges include the development of improved cultivars of oats and other small grains for use in intensified agricultural systems, with and without grazing, that serve as short-rotation crops, dual-purpose crops, or are designed to mitigate a specific environmental issue.


The Oceans and Human Health  

Lora Fleming, Michael Depledge, Niall McDonough, Mathew White, Sabine Pahl, Melanie Austen, Anders Goksoyr, Helena Solo-Gabriele, and John Stegeman

The interdisciplinary study of oceans and human health is an area of increasing global importance. There is a growing body of evidence that the health of the oceans and that of humans are inextricably linked and that how we interact with and affect our oceans and seas will significantly influence our future on earth. Since the emergence of modern humans, the oceans have served as a source of culture, livelihood, expansion, trade, food, and other resources. However, the rapidly rising global population and the continuing alterations of the coastal environment are placing greater pressure on coastal seas and oceans. Negative human impacts, including pollution (chemical, microbial, material), habitat destruction (e.g., bottom trawling, dredging), and overfishing, affect not only ecosystem health, but also human health. Conversely, there is potential to promote human health and well-being through sustainable interactions with the coasts and oceans, such as the restoration and preservation of coastal and marine ecosystems. The study of oceans and human health is inherently interdisciplinary, bringing together the natural and social sciences as well as diverse stakeholder communities (including fishers, recreational users, private enterprise, and policymakers). Reviewing history and policy with regard to oceans and human health, in addition to known and potential risks and benefits, provides insights into new areas and avenues of global cooperation, with the possibility for collaboratively addressing the local and global challenges of our interactions with the oceans, both now and in the future.


Optimal and Real-Time Control of Water Infrastructures  

Ronald van Nooijen, Demetris Koutsoyiannis, and Alla Kolechkina

Humanity has been modifying the natural water cycle by building large-scale water infrastructure for millennia. For most of that time, the principles of hydraulics and control theory were only imperfectly known. Moreover, the feedback from the artificial system to the natural system was not taken into account, either because it was too small to notice or took too long to appear. In the 21st century, humanity is all too aware of the effects of our adaptation of the environment to our needs on the planetary system as a whole. It is necessary to see the environment, both natural and human-made, as one integrated system. Moreover, due to the legacy of the past, the behaviour of the man-made parts of this system needs to be adapted in a way that leads to a sustainable ecosystem. The water cycle plays a central role in that ecosystem. It is therefore essential that the behaviour of existing and planned water infrastructure fits into the natural system and contributes to its well-being. At the same time, it must serve the purpose for which it was constructed. As there are no natural feedbacks to govern its behaviour, it will be necessary to create such feedbacks, possibly in the form of real-time control systems. To do so, it would be beneficial if all persons involved in the decision process that establishes the desired system behaviour understand the basics of control systems in general and their application to different water systems in particular. This article contains a discussion of the prerequisites for and early development of automatic control of water systems, an introduction to the basics of control theory with examples, a short description of optimal control theory in general, a discussion of model predictive control in water resource management, an overview of key aspects of automatic control in water resource management, and different types of applications. Finally, some challenges faced by practitioners are mentioned.
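The feedback idea underlying real-time control can be illustrated with a minimal sketch (not drawn from the article itself): a proportional controller that holds a reservoir at a target water level by adjusting the outflow in response to the measured error. All names and parameter values here are illustrative assumptions, not results from the article.

```python
# Minimal sketch: proportional feedback control of a reservoir water level.
# The tank geometry, inflow, and gain below are hypothetical values chosen
# only to demonstrate how measured error drives a control action.

def simulate(setpoint=2.0, steps=200, dt=0.1, gain=1.5):
    """Hold a tank level at `setpoint` (m) by adjusting the outflow."""
    area = 10.0      # tank surface area (m^2)
    level = 1.0      # initial water level (m)
    inflow = 3.0     # uncontrolled inflow (m^3/s), the "disturbance"
    history = []
    for _ in range(steps):
        error = setpoint - level
        # Proportional law: release less water when below the setpoint,
        # more when above it; outflow cannot be negative.
        outflow = max(0.0, inflow - gain * error)
        # Mass balance: level change = net volume change / surface area.
        level += (inflow - outflow) / area * dt
        history.append(level)
    return history

levels = simulate()
```

The simulated level rises from 1.0 m toward the 2.0 m setpoint as the controller throttles the outflow; model predictive control, discussed in the article, replaces this single fixed rule with an optimization over a forecast horizon.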


Organic Farming  

Theodore J. K. Radovich

Organic farming occupies a unique position among the world’s agricultural systems. While not the only available model for sustainable food production, organic farmers and their supporters have been the most vocal advocates for a fully integrated agriculture that recognizes a link between the health of the land, the food it produces, and those who consume it. Advocacy for the biological basis of agriculture and the deliberate restriction or prohibition of many agricultural inputs arose in response to potential and observed negative environmental impacts of new agricultural technologies introduced in the 20th century. A primary focus of organic farming is to enhance soil ecological function by building soil organic matter, which in turn enhances the biota on which soil health and the health of the agroecosystem depend. The rapid growth in demand for organic products in the late 20th and early 21st centuries is based on consumer perception that organically grown food is better for the environment and human health. Although there have been some documented trends in chemical quality differences between organic and non-organic products, the meaningful impact of the magnitude of these differences is unclear. There is stronger evidence to suggest that organic systems pose less risk to the environment, particularly with regard to water quality; however, as the intensity of management in organic farming increases, the potential risk to the environment is expected to increase as well. In the early 21st century there has been much discussion centered on the apparent bifurcation of organic farming into two approaches: “input substitution” and “system redesign.” The former approach is a more recent phenomenon associated with pragmatic considerations of scaling up the size of operations and long-distance shipping to take advantage of distant markets. 
Critics argue that this approach represents a “conventionalization” of organic agriculture that will erode potential benefits of organic farming to the environment, human health, and social welfare. A current challenge of organic farming systems is to reconcile the different views among organic producers regarding issues arising from the rapid growth of organic farming.


Origin and Development of Agriculture in New Guinea, Island Melanesia, and Polynesia  

Tim Denham

Early agricultural and arboricultural practices in the Pacific are based on vegetative principles, namely, the asexual propagation and transplantation of plants. A vegetative orientation is reflected in the exploitation of underground storage organs (USOs) within Near Oceania, as well as Island Southeast Asia, during the Pleistocene. During the early Holocene, people in the New Guinea region (including Near Oceania) began to intensify the management of plant resources in different landscapes. The increased degree of plant management, as well as associated environmental transformation, is most clearly manifest in the agricultural chronology at Kuk Swamp in the highlands of Papua New Guinea. At Kuk, shifting cultivation was potentially practiced during the early Holocene, with mounded cultivation by c. 7000–6400 cal BP and ditched drainage of wetlands for cultivation by c. 4400–4000 cal BP. Comparable agricultural records are lacking for other regions of Near Oceania; lowland sites indicate a range of arboricultural practices focused on fruit- and nut-bearing trees during the Terminal Pleistocene and throughout the Holocene, as well as potentially sago during the late Holocene. By c. 4000–3000 cal BP, indigenous agricultural and arboricultural elements were integrated with new cultural traits from Southeast Asia, including domestic animals, pottery and potentially new varieties of traditional crops. From c. 3250 to 2800 cal BP, different elements of agricultural and arboricultural practices from lowland New Guinea and Island Melanesia were taken by Lapita pottery–bearing colonists into the western Pacific. A later period of agricultural expansion occurred around c. 1000–750 cal BP with the colonization of eastern Polynesia. Agricultural practices and crops were variably taken and adapted to different islands and island groups across the Pacific. 
Additional transformations to agriculture occurred with the Polynesian adoption of the sweet potato (Ipomoea batatas), a South American domesticate, as well as following protohistoric and historic encounters.


Payments for Ecosystem Services: Program Design and Participation  

Natasha James and Erin Sills

Payments for ecosystem or environmental services (PES) are broadly defined as payments (in kind or in cash) to participants (often landowners) who volunteer to provide the services either to a specific user or to society at large. Payments are typically conditional on agreed rules of natural resource management rather than on delivery of the services. The rules range from protection of native ecosystems to installation of conservation practices. The earliest proponents of PES were economists who argued that they are a cost-effective way to conserve forests, manage watersheds, and protect biodiversity. Political support for PES rests on the claim that these programs can alleviate poverty among participants as well as protect the environment. More recent literature and experience with PES reveals barriers to achieving cost-effectiveness and poverty alleviation, including many related to the distribution of participation. The Costa Rican experience illustrates the choices that must be made and the potential for innovation in the design of PES programs.


Payments versus Direct Controls for Environmental Externalities in Agriculture  

Alfons Weersink and David Pannell

The production of food, fiber, and fuel often results in negative externalities due to impacts on soil, water, air, or habitat. There are two broad ways to incentivize farmers to alter their land use or management practices on that land to benefit the environment: (1) provide payments to farmers who adopt environmentally beneficial actions and (2) introduce direct controls or regulations that require farmers to undertake certain actions, backed up with penalties for noncompliance. Both the provision of payments for environmentally beneficial management practices (BMPs) and a regulatory requirement for use of a BMP alter the incentives faced by farmers, but they do so in different ways, with different implications and consequences for farmers, for the policy, for politics, and consequently for the environment. These two incentive-based mechanisms are recommended where the private incentives conflict with the public interest, and only where the private incentives are not so strong as to outweigh the public benefits. The biggest differences between them probably relate to equity/distributional outcomes and politics rather than efficiency. Governments often seem to prefer to employ beneficiary-pays mechanisms in cases where they seek to alter farmers’ existing practices, and polluter-pays mechanisms when they seek to prevent farmers from changing from their current practices to something worse for the environment. The digital revolution has the potential to help farmers produce more food on less land and with fewer inputs. In addition to reducing input levels and identifying unprofitable management zones to set aside, the technology could also alter the transaction costs of the policy options.