Allergenic pollen is produced by the flowers of a number of trees, grasses, and weeds found throughout the world. Human exposure to such pollen grains can exacerbate pollen-related asthma and allergic conditions such as allergic rhinitis (hay fever). While allergenic pollen comes from these three main groups of plants, many people are sensitive to pollen from only one or a few taxa. Weather, climate, and environmental conditions have a significant impact on the levels and varieties of pollen grains present in the air. These allergic conditions significantly reduce the quality of life of affected individuals and have been shown to have a major economic impact.
Pollen production depends both on current meteorological conditions (including day length, temperature, irradiation, precipitation, and wind speed and direction) and on the water availability and other environmental and meteorological conditions experienced in the previous year. The climate affects the types of vegetation and taxa that can grow in a particular location through the availability of different habitats. Land use and land management are also crucial, and so this field of study has implications for vegetation management practices and policy.
Given the influential effects of weather and climate on pollen, and the significant health impacts globally, the total effect of any future environmental and climatic changes on aeroallergen production and spread will be significant. The overall impact of climate change on pollen production and spread remains highly uncertain, and further understanding of pollen-related health impacts is needed. Air quality interacts with the impact of pollen in a number of ways, and further understanding of the risks of co-exposure to both pollen and air pollutants is needed to better inform public health policy. Furthermore, thunderstorms have been linked to asthma epidemics, especially during grass pollen seasons, and allergenic pollen is thought to play a role in this “thunderstorm asthma.”
To reduce the exposure to, or impact from, pollen grains in the air, a number of adaptation and mitigation options may be adopted. Many of these would need to be done either through policy changes, or at a local or regional level, although some can be done by individuals to minimize their exposure to pollen they are sensitive to. Improved aeroallergen forecast models could be developed to provide detailed taxon-specific, localized information to the public. One challenge will be combining the many different sources of aeroallergen data that are likely to become available in future into numerical forecast systems. Examples of these potential inputs are automated observations of aeroallergens, real-time phenological observations and remote sensing of vegetation, social sensing, DNA analysis of specific aeroallergens, and data from symptom trackers or personal monitors. All of these have the potential to improve the forecasts and information available to the public.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article.
Pollution problems in aquatic sediments and on land can be quite varied—from the widespread contamination of a coastal bay receiving untreated urban or industrial discharge to local leakage from underground petroleum tanks or pipelines. Such problems are related to the range of sediment and soil environments in which they occur. Sediments and soil particles can be carriers, receptors, and sources for contaminants. The effectiveness of these roles is largely related to their adsorptive capacity and is governed mainly by particle size, mineralogy, and organic matter, as well as site-specific geochemical conditions. Sustainable use of land and marine areas requires a source-to-sink system perspective in order to prescribe remedial actions. Measures can focus on preventing release from the source, limiting spread along selective pathways, and stabilization and isolation to protect the receptor. Therefore, many traditional scientific goals, such as provenance (sediment source) identification, the interpretation of sediment transport modes and directions, and post-depositional (diagenetic) changes, are applicable and complementary tools to increase predictability between sampled sites.
The carrier function of aquatic sediments is emphasized when contaminants are transported to the site of accumulation. Ground pollution in terrestrial settings, on the other hand, is often due to more local sources. Nevertheless, retention and ecological exposure are dependent on particle-solute interactions. The stratigraphic architecture of ground environments can also decisively influence the spread of contaminants, contrasting with the largely two-dimensional redistribution of eroded aquatic sediments. Diffuse pollution sources, including agricultural, urban, transportation, and industrial sources, contribute significantly to overall environmental stress. Quantitative modeling of contaminant fluxes is increasingly possible as databases become available, but relative risk ranking is still a necessary simplification in many decision-support evaluations due to the complexity of sediment and ground environments.
Mesoamerica is one of the world’s primary centers of domestication where agriculture arose independently. Paleoethnobotany (or archaeobotany), along with archaeology, epigraphy, and ethnohistorical and ethnobotanical data, provides increasingly important insights into the ancient agriculture of Lowland Mesoamerica (below 1000 m above sea level). Moreover, new advances in the analysis of microbotanical remains in the form of pollen, phytoliths, and starch-grain analysis and chemical analysis of organic residues have further contributed to our understanding of ancient plant use in this region. Prehistoric and traditional agriculture in the lowlands of Mesoamerica—notably the Maya lowlands, the Gulf Coast, and the Pacific Coast of southern Chiapas (Mexico) and Guatemala—from the Archaic (ca. 8000/7000–2000
Rene Van Acker, Motior Rahman, and S. Zahra H. Cici
The global area sown to genetically modified (GM) varieties of leading commercial crops (soybean, maize, canola, and cotton) has expanded over 100-fold over two decades. Thirty countries are producing GM crops, and just five countries (United States, Brazil, Argentina, Canada, and India) account for almost 90% of GM production. Only four crops account for 99% of worldwide GM crop area. Almost 100% of GM crops on the market are genetically engineered with herbicide tolerance (HT) and insect resistance (IR) traits. Approximately 70% of cultivated GM crops are HT, and GM HT crops have been credited with facilitating no-tillage and conservation tillage practices that conserve soil moisture and control soil erosion, and that also support carbon sequestration and reduced greenhouse gas emissions. Crop production and productivity increased significantly during the era of the adoption of GM crops; some of this increase can be attributed to GM technology and the yield protection traits that it has made possible, even if the GM traits implemented to date are not yield traits per se. GM crops have also been credited with helping to improve farm incomes and reduce pesticide use. Practical concerns around GM crops include the rise of insect pests and weeds that are resistant to pesticides. Other concerns around GM crops include broad seed variety access for farmers and rising seed costs, as well as increased dependency on multinational seed companies. Citizens in many countries, especially European countries, are opposed to GM crops and have voiced concerns about possible impacts on human and environmental health. Nonetheless, proponents of GM crops argue that they are needed to enhance worldwide food production.
The novelty of the technology and its potential to bring almost any trait into crops mean that there needs to remain dedicated diligence on the part of regulators to ensure that no GM crops are deregulated that may in fact pose risks to human health or the environment. The same will be true for the next wave of new breeding technologies, which include gene editing technologies.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article.
The Quaternary period of Earth history, which commenced ca. 2.6 million years ago, is noted for a series of dramatic shifts in global climate between long, cool (“icehouse”) and short, temperate (“greenhouse”) stages. It also coincides with the extinction of later Australopithecine hominins and the evolution of modern Homo sapiens.
Wide recognition of a fourth, Quaternary, order of geologic time emerged in Europe between ca. 1760 and 1830 and became closely identified with the concept of an ice age. This most recent episode in Earth history is also the best preserved in stratigraphic and landscape records. Indeed, much of its character and processes continue in present time, which prompted early geologists’ recognition of the concept of uniformitarianism—the present is the key to the past.
Quaternary time was quickly divided into a dominant Pleistocene (“most recent”) epoch, characterized by cyclical growth and decay of major continental ice sheets and peripheral permafrost. Disappearance of most of these ice sheets, except in Antarctica and Greenland today, ushered in the Holocene (“wholly modern”) epoch, once thought to terminate the Ice Age but now seen as the current interglacial or temperate stage, commencing ca. 11.7 ka ago. Covering 30–50% of Earth’s land surface at their maxima, ice sheets and permafrost squeezed remaining biomes into a narrower circum-equatorial zone, where research indicated the former occurrence of pluvial and desiccation events. Early efforts to correlate them with mid-high latitude glacials and interglacials revealed the complex and often asynchronous Pleistocene record.
Nineteenth-century recognition of just four glaciations reflected a reliance on geomorphology and short terrestrial stratigraphic records, concentrated in northern hemisphere mid- and high-latitudes, until the 1970s. Correlation of oxygen isotope (δ¹⁸O) signals from seafloor sediments (from ocean drilling programs after the 1960s) with polar ice core signals from the 1980s onward has revolutionized our understanding of the Quaternary, facilitating a sophisticated, time-constrained record of events and environmental reconstructions from regional to global scales. Records from oceans and ice sheets, some spanning 10⁵–10⁶ years, are augmented by similar long records from loess, lake sediments, and speleothems (cave mineral deposits). Their collective value is enhanced by innovative analytical and dating tools.
Over 100 Marine Isotope Stages (MIS) are now recognized in the Quaternary, with dramatic climate shifts at decadal and centennial timescales; the 22 MIS of the past 900,000 years are considered to reflect significant ice-sheet accumulation and decay. Each cycle between temperate and cool conditions (odd- and even-numbered MIS, respectively) is time-asymmetric, with progressive cooling over 80,000 to 100,000 years, followed by an abrupt termination and then a rapid return to temperate conditions lasting a few thousand years.
The search for causes of Quaternary climate and environmental change embraces all strands of Earth System Science. Strong correlation between orbital forcing and major climate changes (summarized as the Milankovitch mechanism) is displacing earlier emphasis on radiative (direct solar) forcing, but uncertainty remains over how the orbital signal is amplified or modulated. Tectonic forcing (ocean-continent distributions, tectonic uplift, and volcanic outgassing), atmosphere-biogeochemical and greenhouse gas exchange, ocean-land surface albedo and deep- and surface-ocean circulation are all contenders and important agents in their own right.
Modern understanding of Quaternary environments and processes feeds an exponential growth of multidisciplinary research, numerical modeling, and applications. Climate modeling exploits mutual benefits to science and society of “hindcasting,” using paleoclimate data to aid understanding of the past and increasing confidence in modeling forecasts. Pursuit of more detailed and sophisticated understanding of ocean-atmosphere-cryosphere-biosphere interaction proceeds apace.
The Quaternary is also the stage on which human evolution plays. And the essential distinction between natural climate variability and human forcing is now recognized as designating, in present time, a potential new Anthropocene epoch. Quaternary past and present are major keys to its future.
Peter J. Schubert
Renewable energy was used exclusively by the first humans and is likely to be the predominant source for future humans. Between these times the use of extracted resources such as coal, oil, and natural gas has created an explosion of population and affluence, but also of pollution and dependency. This article explores the advent of energy sources in a broad social context including economics, finance, and policy. The means of producing renewable energy are described in an accessible way, highlighting the broad range of considerations in their development, deployment, and ability to scale to address the entirety of human enterprises.
Resilience thinking in relation to the environment has emerged as a lens of inquiry that serves as a platform for interdisciplinary dialogue and collaboration. Resilience is about cultivating the capacity to sustain development in the face of both expected and surprising change, recognizing diverse pathways of development and the potential thresholds between them. The evolution of resilience thinking is coupled to social-ecological systems and a truly intertwined human-environment planet. Resilience as persistence, adaptability, and transformability of complex adaptive social-ecological systems is the focus, clarifying the dynamic and forward-looking nature of the concept. Resilience thinking emphasizes that social-ecological systems, from the individual, to community, to society as a whole, are embedded in the biosphere. The biosphere connection is an essential observation if sustainability is to be taken seriously. In the continuous advancement of resilience thinking there are efforts aimed at capturing the resilience of social-ecological systems and finding ways for people and institutions to govern social-ecological dynamics for improved human well-being, from the local, across levels and scales, to the global. Consequently, in resilience thinking, development issues for human well-being, for people and planet, are framed in a context of understanding and governing complex social-ecological dynamics for sustainability as part of a dynamic biosphere.
Scott M. Moore
It has long been accepted that non-renewable natural resources like oil and gas are often the subject of conflict between both nation-states and social groups. But since the end of the Cold War, the idea that renewable resources like water and timber might also be a cause of conflict has steadily gained credence. This is particularly true in the case of water: in the early 1990s, a senior World Bank official famously predicted that “the wars of the next century will be fought over water,” while two years ago Indian strategist Brahma Chellaney made a splash in North America by claiming that water would be “Asia’s New Battleground.” But it has not quite turned out that way. The world has, so far, avoided inter-state conflict over water in the 21st century, but it has witnessed many localized conflicts, some involving considerable violence. As population growth, economic development, and climate change place growing strains on the world’s fresh water supplies, the relationship between resource scarcity, institutions, and conflict has become a topic of vocal debate among social and environmental scientists.
The idea that water scarcity leads to conflict is rooted in three common assertions. The first of these arguments is that, around the world, once-plentiful renewable resources like fresh water, timber, and even soils are under increasing pressure, and are therefore likely to stoke conflict among the increasing numbers of people who seek to utilize dwindling supplies. A second, and often corollary, argument holds that water’s unique value to human life and well-being—namely, that there are no substitutes for water, as there are for most other critical natural resources—makes it uniquely conducive to conflict. Finally, a third presumption behind the water wars hypothesis stems from the fact that many water bodies, and nearly all large river basins, are shared between multiple countries. When an upstream country can harm its downstream neighbor by diverting or controlling flows of water, the argument goes, conflict is likely to ensue.
But each of these assertions depends on making assumptions about how people react to water scarcity, the means they have at their disposal to adapt to it, and the circumstances under which they are apt to cooperate rather than to engage in conflict. Untangling these complex relationships promises a more refined understanding of whether and how water scarcity might lead to conflict in the 21st century—and how cooperation can be encouraged instead.
Rewilding aims at maintaining or even increasing biodiversity through the restoration of ecological and evolutionary processes using extant keystone species or ecological replacements of extinct keystone species that drive these processes. It is hailed by some as the most exciting and promising conservation strategy to slow down or stop what is considered to be the greatest mass extinction of species since the extinction of the dinosaurs 65 million years ago. Others have raised serious concerns about the many scientific and societal uncertainties and risks of rewilding. Moreover, despite its growing popularity, rewilding has made only limited inroads within the conservation mainstream and still has to prove itself in practice.
Rewilding differs from traditional restoration in at least two important respects. Whereas restoration has typically focused on the recovery of plant communities, rewilding has drawn attention to animals, particularly large carnivores and large herbivores. Whereas restoration aims to return an ecosystem to some historical condition, rewilding is forward-looking rather than backward-looking: it examines the past not so much to recreate it, but to learn from the past how to activate and maintain the natural processes that are crucial for biodiversity conservation.
Rewilding makes use of a variety of techniques to re-establish these natural processes. Besides the familiar method of reintroducing animals into areas where populations have decreased dramatically or even gone extinct, rewilders also employ some more controversial methods, including back breeding to restore wild traits in domesticated species, taxon substitution to replace extinct species with closely related species that play similar roles within an ecosystem, and de-extinction to bring extinct species back to life using advanced biotechnologies such as cloning and gene editing.
Rewilding has clearly gained the most traction in North America and Europe, which have several key features in common. Both regions have recently experienced a spontaneous return of wildlife. Rewilders on both sides of the Atlantic are aware, however, that this wildlife resurgence is not that impressive, given that we are in the midst of the sixth mass extinction, which is characterized by the loss of large-bodied animals known as megafauna. The common goal is to bring back such megafaunal species because of their importance for maintaining and enhancing biodiversity. Finally, both North American and European rewilders perceive the extinction crisis through the lens of island biogeography theory, which shows that the number of species in an area depends on its size and degree of isolation—hence their special attention to the spatial aspects of rewilding.
But rewilding projects on both sides of the Atlantic not only have much in common, they also differ in certain aspects. North American rewilders have adopted the late Pleistocene as a reference period and have emphasized the role of predation by large carnivores, while European rewilders have opted for the mid-Holocene and put more focus on naturalistic grazing by large herbivores.
Ortwin Renn and Andreas Klinke
Risk perception is an important component of risk governance, but it cannot and should not determine environmental policies. The reality is that people suffer and die as a result of false information or perception biases. It is particularly important to be aware of intuitive heuristics and common biases in making inferences from information in a situation where personal or institutional decisions have far-reaching consequences. The gap between risk assessment and risk perception is an important aspect of environmental policymaking. Communicators, risk managers, as well as representatives of the media, stakeholders, and the affected public should be well informed about the results of risk perception and risk response studies. They should be aware of typical patterns of information processing and reasoning when they engage in designing communication programs and risk management measures. At the same time, the potential recipients of information should be cognizant of the major psychological and social mechanisms of perception as a means to avoid painful errors.
To reach this goal of mutual enlightenment, it is crucial to understand the mechanisms and processes of how people perceive risks (with emphasis on environmental risks) and how they behave on the basis of their perceptions. Based on the insights from cognitive psychology, social psychology, micro-sociology, and behavioral studies, one can distill some basic lessons for risk governance that reflect universal characteristics of perception and that can be taken for granted in many different cultures and risk contexts.
This task of mutual enlightenment on the basis of evidence-based research and investigations is constrained by complexity, uncertainty, and ambiguity in describing, assessing, and analyzing risks, in particular environmental risks. The idea that the “truth” needs to be framed in a way that the targeted audience understands the message is far too simple. In a stochastic and nonlinear understanding of (environmental) risk there are always several (scientifically) legitimate ways of representing scientific insights and causal inferences. Much knowledge in risk and disaster assessment is based on incomplete models, simplified simulations, and expert judgments with a high degree of uncertainty and ambiguity. The juxtaposition of scientific truth, on one hand, and erroneous risk perception, on the other hand, does not reflect the real situation and lends itself to a vision of expertocracy that is neither functionally correct nor democratically justified. The main challenge is to initiate a dialogue that incorporates the limits and uncertainties of scientific knowledge and also starts a learning process by which obvious misperceptions are corrected and the legitimate corridor of interpretation is jointly defined.
In essence, expert opinion and lay perception need to be perceived as complementing, rather than competing with each other. The very essence of responsible action is to make viable and morally justified decisions in the face of uncertainty based on a range of scientifically legitimate expert assessments. These assessments have to be embedded into the context of criteria for acceptable risks, trade-offs between risks to humans and ecosystems, fair risk and benefit distribution, and precautionary measures. These criteria most precisely reflect the main points of lay perception. For a rational politics of risk, it is, therefore, imperative to collect both ethically justifiable evaluation criteria and standards and the best available systematic knowledge that inform us about the performance of each risk source or disaster-reduction option according to criteria that have been identified and approved in a legitimate due process. Ultimately, decisions on acceptable risks have to be based on a subjective mix of factual evidence, attitudes toward uncertainties, and moral standards.
Mehrad Bastani, Nurcin Celik, and Danielle Coogan
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article.
The volume of municipal solid waste produced in the United States has increased by 68% since 1980, up from 151 million to over 254 million tons per year. As the output of municipal waste has grown, more attention has been placed on the occupations associated with waste management. In 2014, the occupation of refuse and recyclable material collection was ranked as the 6th most dangerous job in the United States, with a rate of 27.1 deaths per 100,000 workers. As exposure statistics for solid waste workers in the United States have come to light, the identification and assessment of occupational health risks among these workers is receiving more consideration.
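The reported growth figure can be checked directly from the tonnage numbers given above; this is a minimal illustrative sketch (variable names are our own, figures are those quoted in the summary):

```python
# U.S. municipal solid waste figures as quoted in the summary above
baseline_1980 = 151.0  # million tons per year in 1980
recent_total = 254.0   # million tons per year (recent)

percent_increase = (recent_total - baseline_1980) / baseline_1980 * 100
print(f"Increase since 1980: {percent_increase:.0f}%")  # prints "Increase since 1980: 68%"
```

The 103-million-ton rise over the 151-million-ton baseline does indeed correspond to the 68% increase cited.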
From the generation of waste to its disposal, solid waste workers are exposed to substantial levels of physical, chemical, and biological toxins. Current waste management systems in the United States involve significant risk of contact with waste hazards, highlighting that prevention methods such as monitoring exposures, personal protection, engineering controls, job education and training, and other interventions are under-utilized. To recognize and address occupational hazards encountered by solid waste workers, it is necessary to discern potential safety concerns and their causes, as well as their direct and/or indirect impacts on the various types of workers. The major industries processing solid waste are recycling, incineration, landfilling, and composting, so the reported exposures and potential occupational health risks need to be identified for workers in each of these industries. Then, by acquiring data on reported exposure among solid waste workers, multiple county-level and state-level quantitative assessments for major occupational risks can be conducted using statistical assessment methods. To assess health risks among solid waste workers, the following questions must be answered: How can the methods of solid waste management be categorized? Which are the predominant occupational health risks among solid waste workers, and how can they be identified? Which practical and robust assessment methods are useful for evaluating occupational health risks among solid waste workers? What are possible solutions that can be implemented to reduce the occupational health hazard rates among solid waste workers?
Growing a cover crop between main crops imitates natural ecosystems where the soil is continuously covered with vegetation. This is an important management practice in preserving soil nutrient resources and reducing nitrogen (N) losses to waters. Cover crops also provide other functions that are important for the resilience and long-term stability of cropping systems, such as reduced erosion, increased soil fertility, carbon sequestration, increased soil phosphorus (P) availability, and suppression of weeds and pathogens.
Much is known about how to use cover crops to reduce N leaching in climates where there is a water surplus outside the growing season. Non-legume cover crops reduce N leaching by 20%–80%, and legumes reduce it by, on average, 23%. There are both synergies and possible conflicts between different environmental and production aspects that should be considered when developing efficient and multifunctional cover crop systems, but conflicts between different functions provided by cover crops can sometimes be overcome with site-specific adaptation of measures. One example is cover crop effects on P losses. Cover crops reduce losses of total P, but extract soil P into available forms and may increase losses of dissolved P. How to use this effect to increase soil P availability on subtropical soils needs further study. Knowledge and examples of how to maximize the positive effects of cover crops on cropping systems are improving, thereby increasing the sustainability of agriculture. One example is combining cover cropping with weed suppression in order to reduce dependence on herbicides or intensive mechanical treatment.
James B. London
Coastal zone management (CZM) has evolved since the enactment of the U.S. Coastal Zone Management Act of 1972, which was the first comprehensive program of its type. The newer iteration of Integrated Coastal Zone Management (ICZM), as applied to the European Union (2000, 2002), establishes priorities and a comprehensive strategy framework. While coastal management was established in large part to address issues of both development and resource protection in the coastal zone, conditions have changed. Accelerated rates of sea level rise (SLR) as well as continued rapid development along the coasts have increased vulnerability. The article examines changing conditions over time and the role of CZM and ICZM in addressing increased climate related vulnerabilities along the coast.
The article argues that effective adaptation strategies will require a sound information base and an institutional framework that appropriately addresses the risk of development in the coastal zone. The information base has improved through recent advances in technology and geospatial data quality. Critical for decision-makers will be sound information to identify vulnerabilities, formulate options, and assess the viability of a set of adaptation alternatives. The institutional framework must include the political will to act decisively and send the right signals to encourage responsible development patterns. At the same time, as communities are likely to bear higher costs for adaptation, it is important that they are given appropriate tools to effectively weigh alternatives, including the cost avoidance associated with corrective action. Adaptation strategies must be pro-active and anticipatory. Failure to act strategically will be fiscally irresponsible.
Food security is dependent on the work of plant scientists and breeders who develop new varieties of crops that are high yielding, nutritious, and tolerant of a range of biotic and abiotic stresses. These scientists and breeders need access to novel genetic material to evaluate and to use in their breeding programs; seed banks (genebanks) are the main source of novel genetic material. There are more than 1,750 genebanks around the world storing the orthodox (desiccation-tolerant) seeds of crops and their wild relatives. These seeds are stored at low moisture content and low temperature to extend their longevity and ensure that seeds with high viability can be distributed to end-users. Thus, seed genebanks serve two purposes: the long-term conservation of plant genetic resources, and the distribution of seed samples.
Globally, there are more than 7,400,000 accessions held in genebanks; an accession is a supposedly distinct, uniquely identifiable germplasm sample which represents a particular landrace, variety, breeding line, or population. Genebank staff manage their collections to ensure that suitable material is available and that the viability of the seeds remains high. Accessions are regenerated if viability declines or if stocks run low due to distribution. Many crops come under the auspices of the International Treaty on Plant Genetic Resources for Food and Agriculture and germplasm is shared using the Standard Material Transfer Agreement. The Treaty collates information on the sharing of germplasm with a view to ensuring that farmers ultimately benefit from making their agrobiodiversity available.
Ongoing research related to genebanks covers a range of disciplines, including botany, seed and plant physiology, genetics, geographic information science, and law.
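As quantitative context not drawn from the abstract itself, the trade-off between storage conditions and seed longevity that genebanks exploit is commonly described in seed physiology by the Ellis–Roberts viability equations; a minimal sketch, with symbols following standard seed-science usage:

```latex
% Probit viability v after p days in storage:
%   v = K_i - p / sigma
% where K_i is the initial (probit) viability of the seed lot and
% sigma is the time for viability to fall by one probit.
% sigma depends on seed moisture content m (% fresh weight) and
% storage temperature t (degrees C):
%   log10(sigma) = K_E - C_W log10(m) - C_H t - C_Q t^2
% K_E, C_W, C_H, C_Q are species-specific constants; lowering m and t
% increases sigma, which is why genebanks dry and refrigerate seeds.
\[
  v = K_i - \frac{p}{\sigma}, \qquad
  \log_{10}\sigma = K_E - C_W\log_{10}m - C_H\,t - C_Q\,t^{2}
\]
```

The equations make the abstract's point concrete: modest reductions in moisture content or temperature multiply the predicted storage life, so viability monitoring and regeneration intervals can be planned rather than guessed.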
Maria Cristina Fossi and Cristina Panti
A vigorous effort to identify and study sentinel species of marine ecosystems in the world’s oceans has developed over the past 50 years. The One Health concept recognizes that the health of humans is connected to the health of animals and the environment. Species ranging from invertebrates to large marine vertebrates have acted as “sentinels” of exposure to environmental stressors and of health impacts on the environment that may also affect human health. Sentinel species can provide warnings, at different levels, about potential impacts on a specific ecosystem. These warnings can help manage the abiotic and anthropogenic stressors (e.g., climate change, chemical and microbial pollutants, marine litter) affecting ecosystems, biota, and human health.
The effects of exposure to multiple stressors, including pollutants, in the marine environment may be seen at multiple trophic levels of the ecosystem. Attention has focused on the large marine vertebrates, for several reasons. In the past, the use of large marine vertebrates in monitoring and assessing the marine ecosystem has been criticized. The fact that these species are pelagic and highly mobile has led to the suggestion that they are not useful indicators or sentinel species. In recent years, however, an alternative view has emerged: when we have a sufficient understanding of differences in species distribution and behavior in space and time, these species can be extremely valuable sentinels of environmental quality.
Knowledge of the status of large vertebrate populations is crucial for understanding the health of the ecosystem and instigating mitigation measures for the conservation of large vertebrates. For example, it is well known that the various cetacean species exhibit different home ranges and occupy different habitats. This knowledge can be used in “hot spot” areas, such as the Mediterranean Basin, where different species can serve as sentinels of marine environmental quality. Organisms that have relatively long life spans (such as cetaceans) allow for the study of chronic diseases, including reproductive alterations, abnormalities in growth and development, and cancer. As apex predators, marine mammals feed at or near the top of the food chain. As a result of biomagnification, the levels of anthropogenic contaminants found in the tissues of top predators and long-living species are typically high. Finally, the application of consistent examination procedures and biochemical, immunological, and microbiological techniques, combined with pathological examination and behavioral analysis, has led to the development of health assessment methods at the individual and population levels in wild marine mammals. With these tools in hand, investigators have begun to explore and understand the relationships between exposure to environmental stressors and a range of disease end points in sentinel species (ranging from invertebrates to marine mammals) as indicators of ecosystem health and harbingers of human health and well-being.
Jean-François Bissonnette and Rodolphe De Koninck
Plantation farming emerged as a large-scale system of specialized agriculture in the tropics under European colonialism, in opposition to smallholding subsistence agriculture. Despite the spread of large-scale plantations in the tropics, smallholdings have consistently formed the backbone of rural economies, to the extent that they have become the main producers of some former plantation crops. In the early 21st century, oil palm became the third most important cash crop in the world in terms of area cultivated, largely owing to its expansion in Malaysia and Indonesia. Although oil palm in these countries is primarily cultivated in large plantations, smallholders cultivate a large share of the territory devoted to the crop. This is related to programs set up by the governments of Malaysia and Indonesia during the second half of the 20th century to provide smallholders with land plots in capital-intensive, large-scale oil palm schemes. Despite the relative success of these programs in both countries, policymakers have continued to insist on the development of private, centrally managed, large-scale plantations. Yet smallholding family farming has remained the most resilient economic activity in rural areas of the tropics. This system has proven adaptive to environmental change and, given proper access to markets and capital, particularly responsive to market signals. Today, many smallholdings are still characterized by a diversity of crops, low use of chemical inputs, reliance on family labor, and high levels of ecological knowledge. These are some of the main factors explaining why small family farms have proven more efficient than large plantations and, in the long term, more economically and ecologically resilient.
Yet, large-scale land acquisitions for monocrop production remain a current issue, highlighting the paradox of the latest stage of agrarian capitalism and of its persistent built-in disregard for environmental deterioration.
Frank W. Geels
Addressing persistent environmental problems such as climate change or biodiversity loss requires shifts to new kinds of energy, mobility, housing, and agro-food systems. These shifts are called socio-technical transitions because they involve not just changes in technology but also changes in consumer practices, policies, cultural meanings, infrastructures, and business models. Socio-technical transitions to sustainability are challenging for mainstream social sciences because they are multiactor, long-term, goal-oriented, disruptive, contested, and nonlinear processes. Sustainability transitions are being investigated by a new research community, which uses a socio-technical Multi-Level Perspective (MLP) as one of its orienting frameworks. Focusing on multidimensional struggles between “green” innovations and entrenched systems, the MLP suggests that transitions involve alignments of processes within and between three analytical levels: niche innovations, socio-technical regimes, and an exogenous socio-technical landscape. To understand more specific change mechanisms, the MLP mobilizes ideas from evolutionary economics, sociology of innovation, and institutional theory. Different phases, actors, and struggles are distinguished to understand the complexities of sustainability transitions, while still providing analytical traction and policy advice. The MLP draws attention to socio-technical systems as a new unit of analysis, which is more comprehensive than a micro-focus on individuals and more concrete than a macro-focus on a green economy. It also forms a new analytical framework that spans several stale dichotomies in environmental social science debates related to agency or structure and behavioral or technical change. The MLP accommodates stability and change and offers an integrative view on transitions, ranging from local projects to niche innovations to sector-level regimes and broader societal contexts. 
This new interdisciplinary research is attracting increasing attention from the European Environment Agency, International Panel on Climate Change (IPCC), and Organization for Economic Cooperation and Development (OECD).
Soils, the earth’s skin, lie at the intersection of the lithosphere, hydrosphere, atmosphere, and biosphere. The persistence of life on our planet depends on the maintenance of soils, as they constitute the biological engines of the earth. The human population has increased exponentially in recent decades, along with the demand for food, materials, and energy, driving a shift from low-yield, subsistence agriculture to more productive, high-cost, intensive agriculture. However, soils are very fragile ecosystems that require centuries to develop; on the human timescale they are therefore not renewable resources. Modern, intensive agriculture raises serious concern about the conservation of soil as a living organism, that is, about its capacity to perform the vast number of biochemical processes needed to complete the biogeochemical cycles of plant nutrients, such as nitrogen and phosphorus, that are crucial for crop primary production. Most practices associated with intensive agriculture cause a deterioration, even in the short to medium term, of the physical, chemical, and biological properties that together constitute soil quality, along with an overexploitation of soils as living systems. Recent trends are turning toward styles of agricultural management that are more sustainable, or conservative of soil quality.
The use of soils for agricultural purposes usually diverts them, to varying degrees, from “natural” soil development processes (pedogenesis), and this shift may be regarded as a divergence from the principles of soil sustainability. For decades, land misuse through intensive crop management has degraded soil health and quality. A vast array of microorganisms inhabits soils, acting as “the biological engine of the earth”; this microbiota serves the soil ecosystem by performing several fundamental functions. Management practices should therefore be planned with a view to safeguarding soil microbial diversity and resilience. In addition, any unexpected alteration in the countless soil biochemical processes regulated by microbial communities may represent an early and sensitive signal of weakening soil homeostasis and, consequently, a warning for soil conservation. Among the many soil biochemical processes and related features (bioindicators) potentially useful for measuring the sustainability of soil exploitation, those related to the mineralization or immobilization of the main nutrients (C and N), including enzyme activity (functioning) and the composition (diversity) of microbial communities, play a fundamental role because of their involvement in soil metabolism. Comparing the influence of various cropping factors (tillage, mulching and cover crops, rotations, mineral and organic fertilization) under both intensive and sustainable management on soil microbial diversity and functioning, through both chemical and biological soil quality indicators, makes it possible to identify the most hazardous departures from soil sustainability principles.
David A. Robinson, Fiona Seaton, Katrina Sharps, Amy Thomas, Francis Parry Roberts, Martine van der Ploeg, Laurence Jones, Jannes Stolte, Maria Puig de la Bellacasa, Paula Harrison, and Bridget Emmett
Soils provide important functions, which according to the European Commission include: biomass production (e.g., agriculture and forestry); storing, filtering, and transforming nutrients, substances, and water; harboring biodiversity (habitats, species, and genes); forming the physical and cultural environment for humans and their activities; providing raw materials; acting as a carbon pool; and forming an archive of geological and archaeological heritage, all of which support human society and planetary life. The basis of these functions is the soil natural capital, the stocks of soil material. Soil functions feed into a range of ecosystem services, which in turn contribute to the United Nations Sustainable Development Goals (SDGs). This overarching framework hides a range of complex, often nonlinear, biophysical interactions with feedbacks and perhaps yet-to-be-discovered tipping points. Moreover, interwoven with this biophysical complexity are the interactions with human society and the socioeconomic system, which often drives our attitudes toward, and the management and exploitation of, our environment.
Challenges abound, both social and environmental: how to feed an increasingly populous and material world while maintaining some semblance of thriving ecosystems to pass on to future generations. Given that soils underpin life, how do we best steward the resources we have, keep them from degradation, and restore them where necessary? How do we measure and quantify the soil resources we have, how are they changing in time and space, and what can we predict about their future use and function? What is the value of soil resources, and how should we express it? This article explores how soil properties and processes underpin ecosystem services, how to measure and model them, and how to identify the wider benefits they provide to society. It also considers value frameworks, including caring for our resources.
Salt accumulation in soils, which affects agricultural productivity, environmental health, and community economies, has been a global phenomenon since salinity contributed to the decline of ancient Mesopotamian civilization. The global extent of salt-affected soils is estimated at around 830 million hectares, spanning all the continents, including Africa, Asia, Australasia, and the Americas. The concentration and composition of salts depend on the various sources and processes of salt accumulation in soil layers. Major types of soil salinization include groundwater-associated salinity, non–groundwater-associated salinity, and irrigation-induced salinity. Several soil processes lead to salt build-up in the root zone, interfering with the growth and physiological functions of plants.
Salts, depending on the ionic composition and concentration, can also affect many soil processes, such as soil water dynamics, soil structural stability, solubility of essential nutrients, and pH and pE of soil water—all indirectly hindering plant growth. The direct effect of salinity includes the osmotic effect affecting water and nutrient uptake and the toxicity or deficiency due to high concentration of certain ions. The plan of action to resolve the problems associated with soil salinization should focus on prevention of salt accumulation, removal of accumulated salts, and adaptation to a saline environment. Successful utilization of salinized soils needs appropriate soil and irrigation management and improvement of plants by breeding and genetic engineering techniques to tolerate different levels of salinity and associated abiotic stress.
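As a back-of-envelope illustration of the osmotic effect described above (not part of the original abstract), the osmotic potential of a dilute soil solution can be approximated by the van ’t Hoff relation; the example concentration is hypothetical:

```latex
% Van 't Hoff approximation for osmotic potential Psi_s (MPa):
%   Psi_s ~ -i c R T
% i = ions per formula unit (2 for NaCl), c = molar concentration,
% R = 8.314e-3 L MPa mol^-1 K^-1, T = absolute temperature (K).
% Example: 100 mM NaCl at 25 C (298 K):
%   Psi_s ~ -(2)(0.1)(0.008314)(298) ~ -0.50 MPa,
% a substantial reduction in the water available to plant roots.
\[
  \Psi_s \approx -\,i\,c\,R\,T
\]
```

Even moderate root-zone salinity thus lowers the soil water potential by several tenths of a megapascal, which is why the osmotic effect hinders water and nutrient uptake well before ion toxicity becomes limiting.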