Vito Ferro and Vincenzo Bagarello
Field plots are often used to obtain experimental data (soil loss values corresponding to different climate, soil, topographic, crop, and management conditions) for predicting and evaluating soil erosion and sediment yield. Plots are used to study the physical phenomena affecting soil detachment and transport, and their sizes are determined by the experimental objectives and the type of data to be obtained. Studies on interrill erosion due to rainfall impact and overland flow need only small plots (2–3 m wide and less than 10 m long), while studies on rill erosion require plot lengths greater than 6–13 m. Sites must be selected to represent the range of uniform slopes prevailing in the farming area under consideration. Plots equipped to study interrill and rill erosion, like those used for developing the Universal Soil Loss Equation (USLE), measure erosion from the top of a slope where runoff begins; they must be wide enough to minimize edge or border effects and long enough to allow downslope rills to develop. Experimental stations generally include bounded runoff plots of known area, slope steepness, slope length, and soil type, from which both runoff and soil loss can be monitored. Once the boundaries defining the plot area are fixed, collecting equipment must be installed to catch the plot runoff. A conveyance system (H-flume or pipe) carries the total runoff to a sediment-sampling unit and a storage system, such as a sequence of tanks, in which sediments accumulate. Simple methods have been developed for estimating the mean sediment concentration of all runoff stored in a tank from the vertical concentration profile measured on a side of the tank. When a large number of plots are equipped, sampling the suspension and oven-drying it in the laboratory are highly time-consuming. To reduce this effort, a sampler that can extract a column of suspension, extending from the free surface to the bottom of the tank, can be used.
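The tank-sampling idea above—estimating the mean concentration of the stored runoff from a vertical profile—can be sketched numerically. The function below is a generic illustration only (its name, inputs, and simple trapezoidal scheme are assumptions, not the calibrated procedures developed in the plot-measurement literature):

```python
def depth_averaged_concentration(depths, concentrations):
    """Depth-averaged sediment concentration in a storage tank, estimated
    by trapezoidal integration of a vertical profile measured on the tank side.

    depths: sampling depths (m) below the free surface, strictly increasing.
    concentrations: sediment concentration (g/L) measured at each depth.
    """
    integral = 0.0
    for i in range(len(depths) - 1):
        dz = depths[i + 1] - depths[i]                       # layer thickness (m)
        mean_c = 0.5 * (concentrations[i] + concentrations[i + 1])
        integral += mean_c * dz                              # layer contribution
    return integral / (depths[-1] - depths[0])               # back to g/L
```

For example, a uniform 2 g/L profile averages to 2 g/L, and a profile rising linearly from 0 to 4 g/L over the tank depth also averages to 2 g/L.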
For large plots, or where runoff volumes are high, a divisor that splits the flow into equal parts and passes one part into a storage tank as a sample can be used. Examples of these devices include the Geib multislot divisor and the Coshocton wheel. Specific equipment and procedures must be employed to detect the soil removed by rill and gully erosion. Because most soil organic matter is found close to the soil surface, erosion significantly decreases soil organic matter content. Several studies have demonstrated that the soil removed by erosion is 1.3–5 times richer in organic matter than the remaining soil. Soil organic matter facilitates the formation of soil aggregates, increases soil porosity, and improves soil structure, facilitating water infiltration. The removal of organic matter can thus influence soil infiltration, soil structure, and soil erodibility.
There is scientific consensus that human activities have been altering the composition of the atmosphere since pre-industrial times and are a key driver of global climate and environmental change (IPCC, 2013). It is a pressing priority to understand the Earth system's response to atmospheric aerosol input from diverse sources, which remains one of the largest uncertainties in climate studies (Boucher et al., 2014; Forster et al., 2007). As the second most abundant component (by mass) of atmospheric aerosols, mineral dust exerts tremendous impacts on Earth's climate and environment through various interaction and feedback processes. Dust can also have beneficial effects where it deposits: Central and South American rain forests get most of their mineral nutrients from the Sahara; iron-poor ocean regions get iron; and dust in Hawaii increases plantain growth. In northern China as well as the midwestern United States, ancient dust storm deposits known as loess form highly fertile soils, but they are also a significant source of contemporary dust storms when soil-securing vegetation is disturbed. Accurate assessment of dust emissions is therefore of great importance for quantifying these diverse impacts.
Margarete Kalin, William N. Wheeler, Michael P. Sudbury, and Bryn Harris
The first treatise on mining and extractive metallurgy, published by Georgius Agricola in 1556, was also the first to highlight the destructive environmental side effects of mining and metals extraction, namely dead fish and poisoned water. These effects, unfortunately, are still with us. Since 1556, mining methods, knowledge of metal extraction, and understanding of the chemical and microbial processes leading to environmental deterioration have grown tremendously. Man's insatiable appetite for metals and energy has resulted in mines vastly larger than those envisioned in 1556, compounding the deterioration. The annual amount of mined ore and waste rock is estimated at 20 billion tons, covering 1,000 km². The industry also consumes 80 km³ of freshwater annually, which becomes contaminated.
Since metals are essential in modern society, cost-effective, sustainable remediation measures need to be developed. Engineered covers and dams enclose wastes and slow the weathering process, but, with time, become permeable. Neutralization of acid mine drainage produces metal-laden sludges that, in time, release the metals again. These measures are stopgaps at best, and are not sustainable. Focus should be on inhibiting or reducing the weathering rate, recycling, and curtailing water usage. The extraction of only the principal economic mineral or metal generally drives the economics, with scant attention being paid to other potential commodities contained in the deposit. Technology exists for recovering more valuable products and enhancing the project economics, resulting in a reduction of wastes and water consumption of up to 80% compared to “conventional processing.”
Implementation of such improvements requires a drastic change, a paradigm shift, in the way that the industry approaches metals extraction. Combining new extraction approaches, more efficient water usage, and ecological engineering methods to deal with wastes will increase the sustainability of the industry and reduce the pressure on water and land resources.
From an ecological perspective, waste rock and tailings need to be thought of as primitive ecosystems. These habitats are populated by heat-, acid- and saline-loving microbes (extremophiles). Ecological engineering utilizes geomicrobiological, physical, and chemical processes to change the mineral surface to encourage biofilm growth (the microbial growth form) within wastes by enhancing the growth of oxygen-consuming microbes. This reduces oxygen available for oxidation, leading to improved drainage quality. At the water–sediment interface, microbes assist in the neutralization of acid water (Acid Reduction Using Microbiology). To remove metals from the waste water column, indigenous biota are promoted (Biological Polishing) with inorganic particulate matter as flocculation agents. This ecological approach generates organic matter, which upon death settles with the adsorbed metals to the sediment. Once the metals reach the deeper, reducing zones of the sediments, microbial biomineralization processes convert the metals to relatively stable secondary minerals, forming biogenic ores for future generations.
The mining industry has developed and thrived in an age when resources, space, and water appeared limitless. With the widely accepted rise of the Anthropocene global land and water shortages, the mining industry must become more sustainable. Not only is a paradigm shift in thinking needed, but also the will to implement such a shift is required for the future of the industry.
Giovanni Lo Iacono and Gordon L. Nichols
The introduction of pasteurization, antibiotics, and vaccination, as well as improved sanitation, hygiene, and education, was critical in reducing the burden of infectious diseases and associated mortality during the 19th and 20th centuries and was driven by an improved understanding of disease transmission. This advance has led to longer average lifespans and the expectation that, at least in the developed world, infectious diseases were a problem of the past. Unfortunately this is not the case; infectious diseases still have a significant impact on morbidity and mortality worldwide. Moreover, the world is witnessing the emergence of new pathogens, the reemergence of old ones, and the spread of antibiotic resistance. Furthermore, effective control of infectious diseases is challenged by many factors, including natural disasters, extreme weather, poverty, international trade and travel, mass and seasonal migration, rural–urban encroachment, human demographics and behavior, deforestation and replacement with farming, and climate change.
The importance of environmental factors as drivers of disease has been hypothesized since ancient times; until the late 19th century, miasma theory (i.e., the belief that diseases were caused by evil exhalations from unhealthy environments, originating from decaying organic matter) was the dominant scientific paradigm. This thinking changed with the microbiology era, when scientists correctly identified microscopic living organisms as the pathogenic agents and developed evidence for transmission routes. Still, many complex patterns of disease cannot be explained by the microbiological argument alone, and it is becoming increasingly clear that an understanding of the ecology of the pathogen, host, and potential vectors is required.
There is increasing evidence that the environment, including climate, can affect pathogen abundance, survival, and virulence, as well as host susceptibility to infection. Measuring and predicting the impact of the environment on infectious diseases, however, can be extremely challenging. Mathematical modeling is a powerful tool to elucidate the mechanisms linking environmental factors and infectious diseases, and to disentangle their individual effects. A common mathematical approach in epidemiology is to partition the population of interest into relevant epidemiological compartments, typically individuals unexposed to the disease (susceptible), infected individuals, and individuals who have cleared the infection and become immune (recovered). The typical task is to model the transitions from one compartment to another and to estimate how these populations change over time. There are different ways to incorporate the impact of the environment into this class of models; two instructive examples are water-borne and vector-borne diseases. For water-borne diseases, the environment can be represented by an additional compartment describing the dynamics of the pathogen population in the environment—for example, by modeling the concentration of bacteria in a water reservoir (with potential dependence on temperature, pH, etc.). For vector-borne diseases, the impact of the environment can be incorporated through explicit relationships between temperature and key vector parameters (such as mortality, developmental rates, and biting rate), as well as the time required for the pathogen to develop within the vector.
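The compartmental approach described above can be illustrated with a minimal sketch: a susceptible–infected–recovered (SIR) model extended with a water compartment representing pathogen concentration in a shared reservoir, as for a water-borne disease. All names and parameter values below are illustrative assumptions, not taken from any particular study:

```python
def siwr_step(S, I, R, W, dt, beta_i=0.3, beta_w=0.2, gamma=0.1, xi=0.05):
    """One explicit-Euler step of an SIR model with a water compartment W
    (scaled pathogen concentration in a reservoir). S, I, R are population
    fractions; rates are per day and purely illustrative."""
    infection = (beta_i * I + beta_w * W) * S  # person-to-person + waterborne routes
    dS = -infection
    dI = infection - gamma * I                 # gamma: recovery rate
    dR = gamma * I
    dW = xi * (I - W)                          # shedding into / die-off within the water
    return S + dt * dS, I + dt * dI, R + dt * dR, W + dt * dW

# simulate 300 days from a single introduced case in a closed population
S, I, R, W = 0.999, 0.001, 0.0, 0.0
for _ in range(3000):
    S, I, R, W = siwr_step(S, I, R, W, dt=0.1)
```

Because the water compartment lags the infected population, transmission persists even as person-to-person contact wanes; temperature or pH dependence could be added by making the die-off term a function of environmental drivers.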
Despite the tremendous advancements, understanding and mapping the impact of the environment on infectious diseases is still a work in progress. Some fundamental aspects, for instance, the impact of biodiversity on disease prevalence, are still a matter of (occasionally fierce) debate. There are other important challenges ahead for the research exploring the potential connections between infectious diseases and the environment. Examples of these challenges are studying the evolution of pathogens in response to climate and other environmental changes; disentangling multiple transmission pathways and the associated temporal lags; developing quantitative frameworks to study the potential effect on infectious diseases due to anthropogenic climate change; and investigating the effect of seasonality. Ultimately, there is an increasing need to develop models for a truly “One Health” approach, that is, an integrated, holistic approach to understand intersections between disease dynamics, environmental drivers, economic systems, and veterinary, ecological, and public health responses.
Air pollution has been a major threat to human health, ecosystems, and agricultural crops ever since fossil fuel combustion, and the associated emission of harmful substances into ambient air, became widespread. As a basis for the development, implementation, and compliance assessment of air pollution control policies, monitoring networks for priority air pollutants were established, primarily for regulatory purposes. With increasing understanding of emission sources, the release and environmental fate of chemicals and toxic substances in ambient air, and atmospheric transport and chemical conversion processes, increasingly complex air pollution models have entered the scene. Today, highly accurate equipment is available to measure trace gases and aerosols in the atmosphere. In addition, sophisticated atmospheric chemistry transport models—routinely compared against and validated with measurements—are used to simulate the dispersion and chemical processes affecting the composition of the atmosphere and the resulting ambient concentrations of harmful pollutants. The models also provide methods to quantify the deposition of pollutants, such as acidifying and eutrophying substances, to vegetation, soils, and freshwater ecosystems. This article provides a general overview of the underlying concepts and key features of monitoring and modeling systems for outdoor air pollution.
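As a toy illustration of the dispersion modeling mentioned above, the classic Gaussian plume formula estimates the steady-state concentration downwind of a point source. This is a deliberately simplified sketch (linear growth of the dispersion coefficients, assumed default parameters), not one of the sophisticated chemistry transport models the article refers to:

```python
import math

def gaussian_plume(x, y, z, Q=1.0, u=5.0, H=50.0, ay=0.08, az=0.06):
    """Steady-state Gaussian plume concentration (g/m^3) at a receptor
    x m downwind, y m crosswind, z m above ground, for a point source with
    emission rate Q (g/s), wind speed u (m/s), and effective release height
    H (m). The ground is treated as a perfect reflector via an image source;
    sigma_y and sigma_z grow linearly with x, a crude stand-in for the
    stability-class curves used in operational models."""
    sigma_y = ay * x
    sigma_z = az * x
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sigma_z ** 2)) +
                math.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))   # image term
    return Q * lateral * vertical / (2 * math.pi * u * sigma_y * sigma_z)
```

Ground-level concentration falls off symmetrically with crosswind distance and, far enough downwind, with distance from the source; operational regulatory models add stability classes, plume rise, chemistry, and terrain.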
Leslie Richardson and Bruce Peacock
Economics plays an important role not only in the management of national parks in developed countries, but also in demonstrating the contribution of these areas to societal well-being. The beneficial effect of park tourism on jobs and economic activity in communities near these protected areas has at times been a factor in their establishment. These economic impacts continue to be highlighted as a way to demonstrate the benefit and return on investment of national parks to local economies. However, the economic values supported by national parks extend far beyond local economic benefits. Parks provide unique recreation opportunities, health benefits, preservation of wildlife and habitat, and a wide range of ecosystem services that the public assigns an economic value to. In addition, value is derived from the existence of national parks and their preservation for future generations. These nonmarket benefits can be difficult to quantify, but they are essential for understanding and communicating the economic importance of parks. Economic methods used to estimate these values have been refined and tested for nearly seven decades, and they have come a long way in helping to elucidate the extent of the nonmarket benefits of protected areas.
In many developed countries, national parks have regulations and policies that outline a framework for the consideration of economic values in decision-making contexts. For instance, large oil spills in the United States, such as the Exxon Valdez spill of 1989 and the Deepwater Horizon spill of 2010, highlighted the need to better understand public values for affected park resources, leading to the extensive use of nonmarket values in natural resource damage assessments. Of course, rules and enforcement issues vary widely across countries, and the potential for economics to inform the day-to-day operations of national parks is much broader than what is currently outlined in such policies. While economics is only one piece of the puzzle in managing national parks, it provides a valuable tool for evaluating resource tradeoffs and for incorporating public preferences into the decision-making process, leading to greater transparency and assurance that national parks are managed for the benefit of society. Understanding the full extent of the economic benefits supported by national parks helps to further the mission of these protected areas in developed countries.
Matilda van den Bosch
Human beings are part of natural ecosystems and depend on them for their survival. In a rapidly changing environment and with increasing urbanization, this dependence is challenged. Natural environments affect human health and well-being both directly and indirectly. Urban green and blue areas provide opportunities for stress recovery and physical activity. They offer spaces for social interactions in the neighborhood and places for children’s play. Chronic stress, physical inactivity, and lack of social cohesion are three major risk factors for noncommunicable diseases, and therefore abundant urban greenery is an important asset for health promotion.
Through numerous ecosystem services, natural environments play a fundamental role in protecting health. Various populations depend on nature for basic materials, such as fresh water, wood, fuel, and nutritious food. Biodiverse natural areas are also necessary for regulating the environment and for mitigating and adapting to climate change. For example, tree canopy cover can reduce the urban heat island effect substantially, preventing excess morbidity during heat waves. This natural heat-reducing effect also lessens the need for air conditioning systems and consequently decreases energy consumption. Urban trees also support storm-water management, preventing flooding and related health issues. Air pollution is a major threat to population health. Urban trees sequester pollutants and, even though the effect may be relatively small, given the severity of the problem it may still have some public-health implications.
The evidence around the effects of natural environments on health and well-being is steadily increasing. Several pathways and mechanisms are suggested, such as health services delivered through functional ecosystems, early-life exposure to biodiverse microbiota, which is important for immune-system development, and sensory exposure, which has direct neurobiological impacts supporting cognitive development and stress resilience. Evidence supporting several of these pathways shows lower mortality rates, lower prevalence of cardiovascular and respiratory diseases, healthier pregnancy outcomes, reduced health inequalities, and improved mental health in urban areas with greater amounts of green and blue space.
Altogether, the interactions between healthy natural environments and healthy people are multiple and complex, and require interdisciplinary attention and action for full understanding and resilient development of both nature and human beings.
Philip Carl Salzman
Nomadism is a technique of population movement used to accomplish a variety of goals. It is used for primary production when the resources to be tapped are distributed thinly over a wide space, or are located in different places in a large region. Commonly nomadism is a technique used in a spatially extensive adaptation. Pastoralists raising domestic animals on natural pasture move from grazed areas to areas with fresh pasture, and from dry areas to those with water.
Nomadism follows regular patterns where the resources tapped are reliable and thus predictable. This is common in macro-environmental adaptations to factors such as seasons and altitude. Some pastoralists have mountain adaptations, migrating to high altitudes in summer and low altitudes in winter, an adaptation called transhumance in Europe. Nomadic patterns are more irregular when rainfall patterns, and thus pasturage, are erratic and unpredictable, as is common in desert areas with low rainfall.
Among some pastoral peoples, all of the households in the community move together. Among others, only a sector of the population is nomadic: young and/or mature men migrate with the livestock, while women, children, and elders remain in a stationary home settlement. This is also the pattern in European transhumance.
Many pastoral peoples produce primarily for their own subsistence; they commonly have multi-resource or mixed economies, engaging also in hunting and gathering, horticulture, agriculture, and arboriculture. Economic activities are not limited to primary production; patterns of predation, including raiding and extortion directed against other pastoralists, farmers, and traders, are widespread. Other pastoral peoples are heavily market-oriented, producing for sale, or have symbiotic relations with hunters or cultivators; these tend to be more specialized in their production. But pastoralists can be found at all points on a continuum between subsistence and market orientation.
Archis R. Ambulkar
Since the industrial revolution, societies across the globe have experienced significant urbanization and population growth. Newer technologies, industries, and manufacturing plants have evolved over this period to provide sophisticated infrastructure and amenities for mankind. To achieve this, communities have utilized and exploited natural resources, resulting in sustained environmental degradation and pollution. Among the various adverse ecological effects, nutrient contamination is posing serious problems for water bodies worldwide.
Nitrogen and phosphorus are basic constituents for the growth and reproduction of living organisms and occur naturally in soil, air, and water. However, human activities are disrupting their natural cycles and causing excessive discharge into surface and groundwater systems. Higher concentrations of nitrogen- and phosphorus-based nutrients in water resources lead to eutrophication, reduced sunlight penetration, lower dissolved oxygen levels, altered rates of plant growth and reproduction, and overall deterioration of water quality. Economically, this pollution can impact the fishing industry, recreational businesses, property values, and tourism. Also, using nutrient-polluted lakes or rivers as potable water sources may result in excess nitrates in drinking water, production of disinfection by-products, and associated health effects.
Nutrient contamination in water commonly originates from point and non-point sources. Point sources are specific discharge locations, like wastewater treatment plants (WWTPs), industries, and municipal waste systems, whereas non-point sources are diffuse dischargers, like agricultural lands and stormwater runoff. Compared to non-point sources, point sources are easier to identify, regulate, and treat. WWTPs receive sewage from domestic, business, and industrial settings. With growing pollution concerns, nutrient removal and recovery at treatment plants is gaining significant attention. Newer chemical and biological nutrient removal processes are emerging to treat wastewater. Nitrogen removal mainly involves nitrification–denitrification processes, whereas phosphorus removal includes biological uptake, chemical precipitation, or filtration. For non-point sources, authorities are encouraging best management practices to control pollution loads to waterways.
Governments are opting for novel strategies like source nutrient reduction schemes, bioremediation processes, stringent effluent limits, and nutrient trading programs. Source nutrient reduction strategies such as discouraging or banning use of phosphorus-rich detergents and selective chemicals, industrial pretreatment programs, and stormwater management programs can be effective by reducing nutrient loads to WWTPs. Bioremediation techniques such as riparian areas, natural and constructed wetlands, and treatment ponds can capture nutrients from agricultural lands or sewage treatment plant effluents. Nutrient trading programs allow purchase/sale of equivalent environmental credits between point and non-point nutrient dischargers to manage overall nutrient discharges in watersheds at lower costs.
Nutrient pollution impacts are quite evident and documented in many parts of the world. Governments and environmental organizations are undertaking several waterways remediation projects to improve water quality and restore aquatic ecosystems. Shrinking freshwater reserves and rising water demands are compelling communities to make efficient use of the available water resources. With smarter choices and useful strategies, nutrient pollution in the water can be contained to a reasonable extent. As responsible members of the community, it is important for us to understand this key environmental issue as well as to learn the current and future needs to alleviate this problem.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article.
Oats and the other small grains have been “rediscovered” with the drive toward intensifying agricultural production, integrating crops and livestock into diversified systems, and increasing environmental stewardship. Globally, oats and other winter annual small grains, such as wheat, cereal rye, triticale, and barley, have been used primarily for grain production. The secondary market following grain production has been restricted to straw, used mainly as livestock bedding. In regions where livestock are economically important, oats and the other annual small grain crops can be used as grazed forage, fodder, hay, or silage. Several characteristics make oats and other small grains suitable for multiple agricultural uses. All the small grains are fairly easy to establish, grow rapidly, can be productive, and have a high nutritional value for livestock. Recent improvements in cultivar development have allowed oats and wheat to be grown across a broader range of stressful environmental conditions. Similarly, cultivar development in oats and wheat has improved grazing tolerance, which is important in dual-purpose systems that emphasize both grazing and grain production. On a worldwide scale, oats and other annual small grains are economically and environmentally important forage crops, especially when used as focused components within intensified agricultural systems. Challenges include the development of improved cultivars of oats and other small grains, for intensified systems with and without grazing, that serve as short-rotation crops or dual-purpose crops, or that are designed to mitigate a specific environmental issue.
Lora Fleming, Michael Depledge, Niall McDonough, Mathew White, Sabine Pahl, Melanie Austen, Anders Goksoyr, Helena Solo-Gabriele, and John Stegeman
The interdisciplinary study of oceans and human health is an area of increasing global importance. There is a growing body of evidence that the health of the oceans and that of humans are inextricably linked and that how we interact with and affect our oceans and seas will significantly influence our future on earth. Since the emergence of modern humans, the oceans have served as a source of culture, livelihood, expansion, trade, food, and other resources. However, the rapidly rising global population and the continuing alterations of the coastal environment are placing greater pressure on coastal seas and oceans. Negative human impacts, including pollution (chemical, microbial, material), habitat destruction (e.g., bottom trawling, dredging), and overfishing, affect not only ecosystem health, but also human health. Conversely, there is potential to promote human health and well-being through sustainable interactions with the coasts and oceans, such as the restoration and preservation of coastal and marine ecosystems.
The study of oceans and human health is inherently interdisciplinary, bringing together the natural and social sciences as well as diverse stakeholder communities (including fishers, recreational users, private enterprise, and policymakers). Reviewing history and policy with regard to oceans and human health, in addition to known and potential risks and benefits, provides insights into new areas and avenues of global cooperation, with the possibility for collaboratively addressing the local and global challenges of our interactions with the oceans, both now and in the future.
Theodore J. K. Radovich
Organic farming occupies a unique position among the world’s agricultural systems. While not the only available model for sustainable food production, organic farmers and their supporters have been the most vocal advocates for a fully integrated agriculture that recognizes a link between the health of the land, the food it produces, and those who consume it. Advocacy for the biological basis of agriculture and the deliberate restriction or prohibition of many agricultural inputs arose in response to potential and observed negative environmental impacts of new agricultural technologies introduced in the 20th century. A primary focus of organic farming is to enhance soil ecological function by building soil organic matter, which in turn supports the soil biota on which soil health, and the health of the agroecosystem, depend.
The rapid growth in demand for organic products in the late 20th and early 21st centuries is based on consumer perception that organically grown food is better for the environment and human health. Although some trends in chemical quality differences between organic and non-organic products have been documented, whether the magnitude of these differences is meaningful remains unclear. There is stronger evidence that organic systems pose less risk to the environment, particularly with regard to water quality; however, as the intensity of management in organic farming increases, the potential risk to the environment is expected to increase as well. In the early 21st century there has been much discussion of the apparent bifurcation of organic farming into two approaches: “input substitution” and “system redesign.” The former is a more recent phenomenon associated with the pragmatic considerations of scaling up operations and shipping long distances to take advantage of distant markets. Critics argue that this approach represents a “conventionalization” of organic agriculture that will erode the potential benefits of organic farming to the environment, human health, and social welfare. A current challenge for organic farming systems is to reconcile the different views among organic producers on issues arising from the rapid growth of organic farming.
Early agricultural and arboricultural practices in the Pacific were based on vegetative principles, namely, the asexual propagation and transplantation of plants. A vegetative orientation is reflected in the exploitation of underground storage organs (USOs) within Near Oceania, as well as Island Southeast Asia, during the Pleistocene. During the early Holocene, people in the New Guinea region (including Near Oceania) began to intensify the management of plant resources in different landscapes. The increased degree of plant management, as well as associated environmental transformation, is most clearly manifest in the agricultural chronology at Kuk Swamp in the highlands of Papua New Guinea. At Kuk, shifting cultivation was potentially practiced during the early Holocene, with mounded cultivation by c. 7000–6400 cal BP and ditched drainage of wetlands for cultivation by c. 4400–4000 cal BP. Comparable agricultural records are lacking for other regions of Near Oceania; lowland sites indicate a range of arboricultural practices focused on fruit- and nut-bearing trees during the Terminal Pleistocene and throughout the Holocene, as well as potentially sago during the late Holocene. By c. 4000–3000 cal BP, indigenous agricultural and arboricultural elements were integrated with new cultural traits from Southeast Asia, including domestic animals, pottery, and potentially new varieties of traditional crops. From c. 3250 to 2800 cal BP, different elements of agricultural and arboricultural practices from lowland New Guinea and Island Melanesia were taken by Lapita pottery–bearing colonists into the western Pacific. A later period of agricultural expansion occurred around c. 1000–750 cal BP with the colonization of eastern Polynesia. Agricultural practices and crops were variably taken and adapted to different islands and island groups across the Pacific.
Additional transformations to agriculture occurred with the Polynesian adoption of the sweet potato (Ipomoea batatas), a South American domesticate, as well as following protohistoric and historic encounters.
Natasha James and Erin Sills
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article.
Payments for ecosystem services (PES) programs are broadly defined as voluntary programs that pay (in-kind or cash) for provision of environmental services either to a specific user or to society at large, with payments conditional on agreed-upon rules of natural resource management. PES programs have been established in a variety of contexts to address environmental issues such as forest degradation, watershed management, and biodiversity protection. They often pay landowners for conservation practices, including protection of native ecosystems.
The early literature on PES is grounded in the Coase Theorem, which suggests that negative environmental externalities can be reduced through voluntary, market-like transactions, as long as transaction costs are low and property rights are clearly defined. In the context of PES, the Coase Theorem suggests payments negotiated directly between beneficiaries and producers of ecosystem services could result in an equilibrium price and quantity that maximize welfare, thus creating a more cost-effective conservation program. In addition, in both developed and developing countries, there is a high spatial correlation between areas that could supply ecosystem services and areas where poverty is high. This leads to equity arguments for PES as a way for society to compensate the relatively poor providers of ecosystem services. Thus, many programs have dual environmental and social policy objectives of ecosystem service provision and rural development or poverty alleviation. That is, PES programs are expected to result in “win-win” scenarios by simultaneously reducing negative environmental externalities and helping the poor.
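The Coasean logic above can be made concrete with a toy calculation (all numbers here are hypothetical illustrations, not drawn from any actual PES program): voluntary payments are mutually beneficial exactly for those hectares whose conservation opportunity cost to the landowner falls below the beneficiary's per-hectare value of the ecosystem service, and the welfare gain is the surplus summed over those hectares.

```python
# Toy sketch of Coasean bargaining behind PES (hypothetical numbers).
# Each hectare has an opportunity cost to the landowner of conserving it;
# the downstream beneficiary values the service each conserved hectare provides.
# Voluntary trade occurs exactly where benefit exceeds cost.

costs = [20, 40, 60, 80, 100]   # landowner's per-hectare opportunity cost ($)
benefit_per_ha = 70             # beneficiary's value of the service per hectare ($)

# Hectares enrolled under efficient bargaining: those whose cost is below benefit.
enrolled = [c for c in costs if c < benefit_per_ha]

# Total welfare gain = surplus (benefit minus cost) on each enrolled hectare.
welfare_gain = sum(benefit_per_ha - c for c in enrolled)

print(len(enrolled))   # 3 hectares enrolled
print(welfare_gain)    # surplus: (70-20) + (70-40) + (70-60) = 90
```

Any negotiated per-hectare payment between a hectare's cost and the beneficiary's value splits this surplus between the parties; the quantity conserved (three hectares here) is the same regardless of the split, which is the welfare-maximization point the Coasean framework makes.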
In practice, PES institutions diverge significantly from the market transactions suggested by the Coasian framework. Governments often administer PES programs, using public revenues to make payments on a defined per-hectare schedule (not determined by the market or by the ecosystem services produced) to landowners who apply for the programs. Both the transaction costs of applying to the program and the selection rules shape the distribution of participation across ecological zones and landowner types. There are often trade-offs between minimizing transaction costs and maximizing benefits, and between providing ecosystem services and meeting social objectives.
There is some evidence that these departures from theory and other implementation issues, such as nonexistent or inconsistent monitoring, have made the “win-win” scenario elusive. However, through a case study of the Costa Rican program, we show how program administrators can design and implement institutional changes to PES programs in order to address concerns about cost efficiency and the distribution of participation.
Alfons Weersink and David Pannell
The production of food, fiber, and fuel often results in negative externalities due to impacts on soil, water, air, or habitat. There are two broad ways to incentivize farmers to alter their land use or management practices on that land to benefit the environment: (1) provide payments to farmers who adopt environmentally beneficial actions and (2) introduce direct controls or regulations that require farmers to undertake certain actions, backed up with penalties for noncompliance. Both the provision of payments for environmentally beneficial management practices (BMPs) and a regulatory requirement for use of a BMP alter the incentives faced by farmers, but they do so in different ways, with different implications and consequences for farmers, for the policy, for politics, and consequently for the environment. These two incentive-based mechanisms are recommended where the private incentives conflict with the public interest, and only where the private incentives are not so strong as to outweigh the public benefits. The biggest differences between them probably relate to equity/distributional outcomes and politics rather than efficiency. Governments often seem to prefer to employ beneficiary-pays mechanisms in cases where they seek to alter farmers’ existing practices, and polluter-pays mechanisms when they seek to prevent farmers from changing from their current practices to something worse for the environment. The digital revolution has the potential to help farmers produce more food on less land and with fewer inputs. In addition to reducing input levels and identifying unprofitable management zones to set aside, the technology could also alter the transaction costs of the policy options.
The fight against agricultural and household pests has accompanied the history of humanity, and a total ban on the use of pesticides seems unlikely to happen in the foreseeable future. About 100,000 different chemicals, inorganic and organic, are currently on the market, grouped either by function (insecticides, herbicides, fungicides, fumigants, rodenticides, fertilizers, growth regulators, etc., or products against specific pests, such as snails or human parasites) or by chemical structure (organochlorines, organophosphates, pyrethroids, carbamates, dithiocarbamates, organotin compounds, phthalimides, phenoxy acids, heterocyclic azole compounds, coumarins, etc.). Runoff from agricultural land, together with precipitation and dry deposition from the atmosphere, can extend exposure to the general environment by transporting pesticides to streams and groundwater. The prolonged bio-persistence of organochlorines also leads to their accumulation in the food chain, and their atmospheric drift toward remote geographical areas is cited as the cause of elevated concentrations in the fat of Arctic mammals. Current regulation in the developed world and the phasing out of more toxic pesticides have greatly reduced the frequency of acute intoxications, although less stringent regulations in the developing world contribute to a complex pattern of exposure circumstances worldwide. Nonetheless, evidence is growing about long-term health effects following high-level, long-lasting exposure to specific pesticides, including asthma and other allergic diseases, immunotoxicity, endocrine disruption, cancer, and central and peripheral nervous system effects. Major reasons for uncertainty in interpreting epidemiological findings of pesticide effects include the complex pattern of overlapping exposures due to multiple treatments applied to different crops and their frequent changes over time to overcome pest resistance.
Further research will have to address specific agrochemicals with well-characterized exposure patterns.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article.
Pollution problems in aquatic sediments and on land can be quite varied—from the widespread contamination of a coastal bay receiving untreated urban or industrial discharge to the local leakage from underground petroleum tanks or pipelines. Such problems are related to the range of sediment and soil types in which they occur. Sediments and soil particles can be carriers, receptors, and sources for contaminants. The effectiveness of these roles is largely related to their adsorptive capacity and is governed mainly by particle size, mineralogy, and organic matter, as well as site-specific geochemical conditions. Sustainable use of land and marine areas requires a source-to-sink system perspective in order to prescribe remedial actions. Measures can focus on preventing release at the source, limiting spread along selective pathways, stabilizing contaminants, and isolating them to protect the receptor. Therefore, many traditional scientific goals, such as provenance (sediment source) identification, the interpretation of sediment transport modes and directions, and post-depositional (diagenetic) changes, are applicable and complementary tools to increase predictability between sampled sites.
The carrier function of aquatic sediments is emphasized when contaminants are transported to the site of accumulation. Ground pollution in terrestrial settings, on the other hand, is often due to more local sources. Nevertheless, retention and ecological exposure depend on particle-solute interactions. The stratigraphic architecture of ground environments can also decisively influence the spread of contaminants, contrasting with the largely two-dimensional redistribution of eroded aquatic sediments. Diffuse pollution sources, including agricultural, urban, transportation, and industrial sources, contribute significantly to overall environmental stress. Quantitative modeling of contaminant fluxes is increasingly possible as databases become available, but relative risk ranking remains a necessary simplification in many decision-support evaluations because of the complexity of sediment and ground environments.
Mesoamerica is one of the world’s primary centers of domestication where agriculture arose independently. Paleoethnobotany (or archaeobotany), along with archaeology, epigraphy, and ethnohistorical and ethnobotanical data, provide increasingly important insights into the ancient agriculture of Lowland Mesoamerica (below 1000 m above sea level). Moreover, new advances in the analysis of microbotanical remains in the form of pollen, phytoliths, and starch-grain analysis and chemical analysis of organic residues have further contributed to our understanding of ancient plant use in this region. Prehistoric and traditional agriculture in the lowlands of Mesoamerica—notably the Maya lowlands, the Gulf Coast, and the Pacific Coast of southern Chiapas (Mexico) and Guatemala—from the Archaic (ca. 8000/7000–2000
Rene Van Acker, Motior Rahman, and S. Zahra H. Cici
The global area sown to genetically modified (GM) varieties of leading commercial crops (soybean, maize, canola, and cotton) has expanded over 100-fold over two decades. Thirty countries are producing GM crops, and just five countries (United States, Brazil, Argentina, Canada, and India) account for almost 90% of the GM production. Only four crops account for 99% of worldwide GM crop area. Almost 100% of GM crops on the market are genetically engineered with herbicide tolerance (HT) and insect resistance (IR) traits. Approximately 70% of cultivated GM crops are HT, and GM HT crops have been credited with facilitating no-tillage and conservation tillage practices that conserve soil moisture and control soil erosion, and that also support carbon sequestration and reduced greenhouse gas emissions. Crop production and productivity increased significantly during the era of the adoption of GM crops; some of this increase can be attributed to GM technology and the yield protection traits that it has made possible, even if the GM traits implemented to date are not yield traits per se. GM crops have also been credited with helping to improve farm incomes and reduce pesticide use. Practical concerns around GM crops include the rise of insect pests and weeds that are resistant to pesticides. Other concerns around GM crops include broad seed variety access for farmers and rising seed costs, as well as increased dependency on multinational seed companies. Citizens in many countries, and especially in European countries, are opposed to GM crops and have voiced concerns about possible impacts on human and environmental health. Nonetheless, proponents of GM crops argue that they are needed to enhance worldwide food production.
The novelty of the technology and its potential to bring almost any trait into crops mean that there needs to remain dedicated diligence on the part of regulators to ensure that no GM crops are deregulated that may in fact pose risks to human health or the environment. The same will be true for the next wave of new breeding technologies, which include gene editing technologies.
The Quaternary period of Earth history, which commenced ca. 2.6 Ma, is noted for a series of dramatic shifts in global climate between long, cool (“icehouse”) and short, temperate (“greenhouse”) stages. The period also coincides with the extinction of later australopithecine hominins and the evolution of modern Homo sapiens.
Wide recognition of a fourth, Quaternary, order of geologic time emerged in Europe between ca. 1760–1830 and became closely identified with the concept of an ice age. This most recent episode in Earth history is also the best preserved in stratigraphic and landscape records. Indeed, much of its character and processes continue in present time, which prompted early geologists’ recognition of the concept of uniformitarianism—the present is the key to the past.
Quaternary time was quickly divided into a dominant Pleistocene (“most recent”) epoch, characterized by cyclical growth and decay of major continental ice sheets and peripheral permafrost. Disappearance of most of these ice sheets, except in Antarctica and Greenland today, ushered in the Holocene (“wholly modern”) epoch, once thought to terminate the Ice Age but now seen as the current interglacial or temperate stage, commencing ca. 11.7 ka ago. Covering 30–50% of Earth’s land surface at their maxima, ice sheets and permafrost squeezed remaining biomes into a narrower circum-equatorial zone, where research indicated the former occurrence of pluvial and desiccation events. Early efforts to correlate them with mid-high latitude glacials and interglacials revealed the complex and often asynchronous Pleistocene record.
Nineteenth-century recognition of just four glaciations reflected a reliance on geomorphology and short terrestrial stratigraphic records, concentrated in northern hemisphere mid- and high latitudes, until the 1970s. Correlation of oxygen isotope (δ18O) signals from seafloor sediments (from ocean drilling programs after the 1960s) with polar ice core signals from the 1980s onward has revolutionized our understanding of the Quaternary, facilitating a sophisticated, time-constrained record of events and environmental reconstructions from regional to global scales. Records from oceans and ice sheets, some spanning 10⁵–10⁶ years, are augmented by similar long records from loess, lake sediments, and speleothems (cave deposits). Their collective value is enhanced by innovative analytical and dating tools.
Over 100 Marine Isotope Stages (MIS) are now recognized in the Quaternary, with dramatic climate shifts recorded at decadal and centennial timescales; the magnitude of the 22 MIS of the past 900,000 years is considered to reflect significant ice sheet accumulation and decay. Each cycle between temperate and cool conditions (odd- and even-numbered MIS, respectively) is time-asymmetric, with progressive cooling over 80,000 to 100,000 years, followed by an abrupt termination and a rapid return to temperate conditions for a few thousand years.
The search for causes of Quaternary climate and environmental change embraces all strands of Earth System Science. Strong correlation between orbital forcing and major climate changes (summarized as the Milankovitch mechanism) is displacing earlier emphasis on radiative (direct solar) forcing, but uncertainty remains over how the orbital signal is amplified or modulated. Tectonic forcing (ocean-continent distributions, tectonic uplift, and volcanic outgassing), atmosphere-biogeochemical and greenhouse gas exchange, ocean-land surface albedo and deep- and surface-ocean circulation are all contenders and important agents in their own right.
Modern understanding of Quaternary environments and processes feeds an exponential growth of multidisciplinary research, numerical modeling, and applications. Climate modeling exploits mutual benefits to science and society of “hindcasting,” using paleoclimate data to aid understanding of the past and increasing confidence in modeling forecasts. Pursuit of more detailed and sophisticated understanding of ocean-atmosphere-cryosphere-biosphere interaction proceeds apace.
The Quaternary is also the stage on which human evolution plays out. The essential distinction between natural climate variability and human forcing is now recognized as designating, in present time, a potential new Anthropocene epoch. The Quaternary past and present are major keys to its future.