Jan Zalasiewicz and Colin Waters
The Anthropocene hypothesis—that humans have not only impacted “the environment” but also changed the Earth’s geology—has spread widely through the sciences and humanities. The hypothesis is currently being tested to determine whether the Anthropocene should become part of the Geological Time Scale. An Anthropocene Working Group has been established to assemble the evidence. The decision regarding formalization is likely to be taken in the next few years by the International Commission on Stratigraphy, the body that oversees the Geological Time Scale. Whichever way the decision goes, the reality of the phenomenon and the utility of the concept will remain.
The evidence, as outlined here, rests upon a broad range of signatures reflecting humanity’s significant and increasing modification of Earth systems. These may be visible as markers in physical deposits: the greatest expansion of novel minerals in the last 2.4 billion years of Earth history, and the development of ubiquitous materials, such as plastics, unique to the Anthropocene. The artefacts we produce to live as modern humans will form the technofossils of the future. Human-generated deposits now extend from our natural habitat on land into our oceans, transported at rates exceeding the sediment carried by rivers by an order of magnitude. That influence increasingly extends underground in our quest for minerals, fuel, and living space, and in the development of transport and communication networks. These human trace fossils may be preserved over geological durations, and the evolution of technology has created a new technosphere, yet to evolve into balance with other Earth systems.
The expression of the Anthropocene can be seen in sediments and glaciers in chemical markers. Carbon dioxide in the atmosphere has risen by ~45 percent above pre–Industrial Revolution levels, mainly through combustion, over a few decades, of a geological carbon-store that took many millions of years to accumulate. Although this may ultimately drive climate change, average global temperature increases and resultant sea-level rises remain comparatively small, as yet. But the shift to isotopically lighter carbon locked into limestones and calcareous fossils will form a permanent record. Nitrogen and phosphorus contents in surface soils have approximately doubled through increased use of fertilizers to increase agricultural yields as the human population has also doubled in the last 50 years. Industrial metals, radioactive fallout from atomic weapons testing, and complex organic compounds have been widely dispersed through the environment and become preserved in sediment and ice layers.
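The ~45 percent rise quoted above can be sanity-checked with a quick calculation. This is only an illustrative sketch: the ~280 ppm pre–Industrial Revolution baseline is the commonly cited value and is an assumption here, not stated in the text.

```python
# Sanity check on the "~45 percent above pre-industrial" CO2 figure.
# ASSUMPTION: the widely cited pre-Industrial Revolution baseline of ~280 ppm.
PREINDUSTRIAL_CO2_PPM = 280
RISE_FRACTION = 0.45

modern_ppm = PREINDUSTRIAL_CO2_PPM * (1 + RISE_FRACTION)
print(round(modern_ppm))  # 406 ppm, consistent with mid-2010s observed levels
```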
Despite radical changes to flora and fauna across the planet, the Earth still has most of its complement of biological species. However, current trends of habitat loss and predation may push the Earth into the sixth mass extinction event in the next few centuries. At present the dramatic changes relate to trans-global species invasions and population modification through agricultural development on land and contamination of coastal zones.
Considering the entire range of environmental signatures, it is clear that the global extent, magnitude, and rapidity of the changes associated with the mid-20th century make it the most obvious level at which to place the start of the Anthropocene Epoch.
Lora Fleming, Niccolò Tempini, Harriet Gordon-Brown, Gordon L. Nichols, Christophe Sarran, Paolo Vineis, Giovanni Leonardi, Brian Golding, Andy Haines, Anthony Kessel, Virginia Murray, Michael Depledge, and Sabina Leonelli
Big data refers to large, complex, potentially linkable data from diverse sources, ranging from the genome and social media, to individual health information and the contributions of citizen science monitoring, to large-scale long-term oceanographic and climate modeling and its processing in innovative and integrated “data mashups.” Over the past few decades, thanks to the rapid expansion of computer technology, there has been a growing appreciation for the potential of big data in environment and human health research.
The promise of big data mashups in environment and human health includes the ability to truly explore and understand the “wicked environment and health problems” of the 21st century, from tracking the global spread of the Zika and Ebola virus epidemics to modeling future climate change impacts and adaptation at the city or national level. Other opportunities include the possibility of identifying environment and health hot spots (i.e., locations where people and/or places are at particular risk), where innovative interventions can be designed and evaluated to prevent or adapt to climate and other environmental change over the long term with potential (co-) benefits for health; and of locating and filling gaps in existing knowledge of relevant linkages between environmental change and human health. There is the potential for the increasing control of personal data (both access to and generation of these data), benefits to health and the environment (e.g., from smart homes and cities), and opportunities to contribute via citizen science research and share information locally and globally.
At the same time, there are challenges inherent with big data and data mashups, particularly in the environment and human health arena. Environment and health represent very diverse scientific areas with different research cultures, ethos, languages, and expertise. Equally diverse are the types of data involved (including time and spatial scales, and different types of modeled data), often with no standardization of the data to allow easy linkage beyond time and space variables, as data types are mostly shaped by the needs of the communities where they originated and have been used. Furthermore, these “secondary data” (i.e., data re-used in research) are often not even originated for this purpose, a particularly relevant distinction in the context of routine health data re-use. And the ways in which the research communities in health and environmental sciences approach data analysis and synthesis, as well as statistical and mathematical modeling, are widely different.
There is a lack of trained personnel who can span these interdisciplinary divides or who have the necessary expertise in the techniques that make adequate bridging possible, such as software development, big data management and storage, and data analyses. Moreover, health data have unique challenges due to the need to maintain confidentiality and data privacy for the individuals or groups being studied, to evaluate the implications of shared information for the communities affected by research and big data, and to resolve the long-standing issues of intellectual property and data ownership occurring throughout the environment and health fields. As with other areas of big data, the new “digital data divide” is growing, where some researchers and research groups, or corporations and governments, have access to data and computing resources while others do not, even as citizen participation in research initiatives is increasing. Finally, with the exception of some business-related activities, funding, especially with the aim of encouraging the sustainability and accessibility of big data resources (from personnel to hardware), is currently inadequate; there is widespread disagreement over what business models can support long-term maintenance of data infrastructures, and those that exist now are often unable to deal with the complexity and resource-intensive nature of maintaining and updating these tools.
Nevertheless, researchers, policy makers, funders, governments, the media, and members of the general public are increasingly recognizing the innovation and creativity potential of big data in environment and health and many other areas. This can be seen in how the relatively new and powerful movement of Open Data is being crystallized into science policy and funding guidelines. Some of the challenges and opportunities, as well as some salient examples, of the potential of big data and big data mashup applications to environment and human health research are discussed.
Human activities in the Anthropocene are influencing the twin processes of biodiversity generation and loss in complex ways that threaten the maintenance of biodiversity levels that underpin human well-being. Yet many scientists and practitioners still present a simplistic view of biodiversity as a static stock rather than one determined by a dynamic interplay of feedback processes that are affected by anthropogenic drivers. Biodiversity describes the variety of life on Earth, from the genes within an organism to the ecosystem level. However, this article focuses on variation among living organisms, both within and between species. Within species, biodiversity is reflected in genetic, and consequent phenotypic, variations among individuals. Genetic diversity is generated by germ line mutations, genetic recombination during sexual reproduction, and immigration of new genotypes into populations. Across species, biodiversity is reflected in the number of different species present and also, by some metrics, in the evenness of their relative abundance. At this level, biodiversity is generated by processes of speciation and immigration of new species into an area. Anthropogenic drivers affect all these biodiversity generation processes, while the levels of genetic diversity can feed back and affect the level of species diversity, and vice versa. Therefore, biodiversity maintenance is a complex balance of processes and the biodiversity levels at any point in time may not be at equilibrium.
A major concern for humans is that our activities are driving rapid losses of biodiversity, which outweigh by orders of magnitude the processes of biodiversity generation. A wide range of species and genetic diversity could be necessary for the provision of ecosystem functions and services (e.g., in maintaining the nutrient cycling, plant productivity, pollination, and pest control that underpin crop production). The importance of biodiversity becomes particularly marked over longer time periods, and especially under varying environmental conditions.
In terms of biodiversity losses, there are natural processes that cause roughly continuous, low-level losses, but there is also strong evidence from fossil records for transient events in which exceptionally large loss of biodiversity has occurred. These major extinction episodes are thought to have been caused by various large-scale environmental perturbations, such as volcanic eruptions, sea-level falls, climatic changes, and asteroid impacts. From all these events, biodiversity has shown recovery over subsequent calmer periods, although the composition of higher-level evolutionary taxa can be significantly altered.
In the modern era, biodiversity appears to be undergoing another mass extinction event, driven by large-scale human impacts. The primary mechanisms of biodiversity loss caused by humans vary over time and by geographic region, but they include overexploitation, habitat loss, climate change, pollution (e.g., nitrogen deposition), and the introduction of non-native species. It is worth noting that human activities may also lead to increases in biodiversity in some areas through species introductions and climatic changes, although these overall increases in species richness may come at the cost of loss of native species, and with uncertain effects on ecosystem service delivery. Genetic diversity is also affected by human activities, with many examples of erosion of diversity through crop and livestock breeding or through the decline in abundance of wild species populations. Significant future challenges are to develop better ways to monitor the drivers of biodiversity loss and biodiversity levels themselves, making use of new technologies, and improving coverage across geographic regions and taxonomic scope. Rather than treating biodiversity as a simple stock at equilibrium, developing a deeper understanding of the complex interactions—both between environmental drivers and between genetic and species diversity—is essential to manage and maintain the benefits that biodiversity delivers to humans, as well as to safeguard the intrinsic value of the Earth’s biodiversity for future generations.
James M. MacDonald
Industrialized livestock production can be characterized by five key attributes: confinement feeding of animals, separation of feed and livestock production, specialization, large size, and close vertical linkages with buyers. Industrialized livestock operations—popularly known as CAFOs, for Concentrated Animal Feeding Operations—have spread rapidly in developed and developing countries; by the early 21st century, they accounted for three quarters of poultry production and over half of global pork production, and held a growing foothold in dairy production.
Industrialized systems have created significant improvements in agricultural productivity, leading to greater output of meat and dairy products for given commitments of land, feed, labor, housing, and equipment. They have also been effective at developing, applying, and disseminating research leading to persistent improvements in animal genetics, breeding, feed formulations, and biosecurity. The reduced prices associated with productivity improvements support increased meat and dairy product consumption in low and middle income countries, while reducing the resources used for such consumption in higher income countries.
The high stocking densities associated with confined feeding also exacerbate several social costs associated with livestock production. Animals in high-density environments may be exposed to diseases, subject to attacks from other animals, and unable to engage in natural behaviors, raising concerns about higher levels of fear, pain, stress, and boredom. Such animal welfare concerns have gained greater salience in recent years.
By consolidating large numbers of animals in a location, industrial systems also concentrate animal wastes, often in levels that exceed the capacity of local cropland to absorb the nutrients in manure. While the productivity improvements associated with industrial systems reduce the resource demands of agriculture, excessive localized concentrations of manure can lead to environmental damage through contamination of ground and surface water and through volatilization of nitrogen nutrients into airborne pollutants.
Finally, animals in industrialized systems are often provided with antibiotics in their feed or water, in order to treat and prevent disease, but also to realize improved feed absorption (“a production purpose”). Bacteria are developing resistance to many important antibiotic drugs; the extensive use of such drugs in human and animal medicine has contributed to the spread of antibiotic resistance, with consequent health risks to humans.
The social costs associated with industrialized production have led to a range of regulatory interventions, primarily in North America and Europe, as well as private sector attempts to alter the incentives that producers face through the development of labels and through associated adjustments within supply chains.
Paolo Inglese and Giuseppe Sortino
Every year since 1857, in May, in the great park of Sans-Souci in Potsdam just outside Berlin—a park begun in 1745 by King Frederick II of Hohenzollern and expanded a century later by Frederick William IV—the doors of the great Orangerie open and a Renaissance-style garden called the Sizilianischer Garten is set up. Large olive and citrus trees are brought outdoors on horse-drawn carriages and arranged in the garden.
For the young European who, in the second half of the 18th century and in the first decades of the following, traveled to Italy to see and study Renaissance culture and the remains of Greek civilization, the citrus species and fruits and groves of southern Italy became the ultimate symbol of beauty and a sort of status symbol of wealth, particularly that of landowners. Nothing is more expressive of the fascination of their fruit than Abu-l-Hasan Ali’s 12th-century writings: “Come on, enjoy your harvested orange: happiness is present when it is present. / Welcome the cheeks of the branches, and welcome the stars of the trees! / It seems that the sky has lavished gold and that the earth has formed some shiny spheres.”
Indeed, Citrus spp. are among the most important crops and consumed fruits worldwide. Their co-evolution with millennia of agricultural use resulted in a complexity of species and cultivated varieties derived by natural or induced mutations, and by crossing and breeding of the “original” species (Citrus medica, Citrus maxima, Citrus reticulata, Fortunella japonica) and their main progenies (C. aurantium, C. sinensis, Citrus limon, Citrus paradisi, Citrus clementina, etc.). Citrus spread from the original tropical and subtropical regions of southeast Asia toward the Mediterranean countries of Europe and North Africa and, after 1492, to the Americas, not to mention South Africa and Australia, where they still have a very important role. Citrus species, wherever they have been cultivated, quickly became the protagonists of the letters and the arts, as well as the markets and gastronomy, and can even be found in religious ceremonies, such as the Feast of Tabernacles (Sukkot). Studies on Citrus botany, cultivation, and utilization have been pursued since the early stages of the fruit’s domestication and grew following their introduction to Europe, the Americas, Africa, and Australia. Citrus research involves many different aspects: the study of citrus origin and botanical classification; citrus growing, propagation, and orchard management; citrus fruit quality, utilization, and industry; citrus gardening and ornamentals; citrus in arts and manufacturing.
Confidence in the projected impacts of climate change on agricultural systems has increased substantially since the first Intergovernmental Panel on Climate Change (IPCC) reports. In Africa, much work has gone into downscaling global climate models to understand regional impacts, but there remains a dearth of local-level understanding of impacts and communities’ capacity to adapt. It is well understood that Africa is vulnerable to climate change, not only because of its high exposure, but also because many African communities lack the capacity to respond or adapt to its impacts. Warming trends have already become evident across the continent, and it is likely that by 2100 the continent’s mean annual temperature will have risen by more than 2°C relative to its 2000 level. Added to this warming trend, changes in precipitation patterns are also of concern: Even if rainfall remains constant, increasing temperatures will amplify existing water stress, putting even more pressure on agricultural systems, especially in semiarid areas. In general, high temperatures and changes in rainfall patterns are likely to reduce cereal crop productivity, and new evidence is emerging that high-value perennial crops will also be negatively impacted by rising temperatures. Pressures from pests, weeds, and diseases are also expected to increase, with detrimental effects on crops and livestock.
Much of African agriculture’s vulnerability to climate change lies in the fact that its agricultural systems remain largely rain-fed and underdeveloped, as the majority of Africa’s farmers are small-scale farmers with few financial resources, limited access to infrastructure, and disparate access to information. At the same time, as these systems are highly reliant on their environment, and farmers are dependent on farming for their livelihoods, their diversity, context specificity, and the existence of generations of traditional knowledge offer elements of resilience in the face of climate change. Overall, however, the combination of climatic and nonclimatic drivers and stressors will exacerbate the vulnerability of Africa’s agricultural systems to climate change, but the impacts will not be universally felt. Climate change will impact farmers and their agricultural systems in different ways, and adapting to these impacts will need to be context-specific.
Adaptation efforts are increasing across the continent, but it is expected that in the long term these will be insufficient to enable communities to cope with longer-term climate change. African farmers are increasingly adopting a variety of conservation and agroecological practices such as agroforestry, contouring, terracing, mulching, and no-till. These practices have the twin benefits of lowering carbon emissions while adapting to climate change, as well as broadening the sources of livelihoods for poor farmers, but there are constraints to their widespread adoption. These challenges vary from insecure land tenure to difficulties with knowledge-sharing.
While African agriculture faces exposure to climate change as well as broader socioeconomic and political challenges, many of its diverse agricultural systems remain resilient. As the continent with the highest population growth rate, rapid urbanization trends, and rising GDP in many countries, Africa’s agricultural systems will need to become adaptive to more than just climate change as the uncertainties of the 21st century unfold.
Shu Ting Chang and Solomon P. Wasser
The word mushroom may mean different things to different people in different countries. Specialist studies on the value of mushrooms and their products should have a clear definition of the term mushroom. In a broad sense, “Mushroom is a distinctive fruiting body of a macrofungus, which produces spores, can be either epigeous or hypogeous, and is large enough to be seen with the naked eye and to be picked by hand.” Thus, mushrooms need not be members of the group Basidiomycetes, as commonly associated, nor aerial, nor fleshy, nor edible. This definition is not perfect, but it has been accepted as a workable term to estimate the number of mushrooms on Earth (approximately 16,000 species according to the rules of the International Code of Nomenclature). The most cultivated mushrooms are saprophytes and are heterotrophic for carbon compounds. Even though their cells have walls, they are devoid of chlorophyll and cannot perform photosynthesis. They are also devoid of vascular xylem and phloem. Furthermore, their cell walls contain chitin, which also occurs in the exoskeleton of insects and other arthropods. They absorb O2 and release CO2. In fact, they may be functionally more closely related to animal cells than to plants. However, they are sufficiently distinct from both plants and animals and belong to a separate group in the Fungi Kingdom. They rise up from lignocellulosic wastes; yet they become bountiful and nourishing. Mushrooms can greatly benefit environmental conditions. They biosynthesize their own food from agricultural crop residues, which, like solar energy, are readily available; otherwise, these byproducts and wastes would cause health hazards. The spent compost/substrate can be used to grow other species of mushrooms, as fodder for livestock, as a soil conditioner and fertilizer, and in environmental bioremediation.
The cultivation of mushrooms dates back many centuries; Auricularia auricula-judae, Lentinula edodes, and Agaricus bisporus have, for example, been cultivated since 600 CE.
Mushrooms can be used as food, tonics, medicines, cosmeceuticals, and as natural biocontrol agents in plant protection with insecticidal, fungicidal, bactericidal, herbicidal, nematocidal, and antiphytoviral activities. The multidimensional nature of the global mushroom cultivation industry, its role in addressing critical issues faced by humankind, and its positive contributions are presented. Furthermore, mushrooms can serve as agents for promoting equitable economic growth in society. Since lignocellulose wastes are available in every corner of the world, they can be put to use in the cultivation of mushrooms and could therefore pilot a so-called white agricultural revolution in less developed countries and in the world at large. Mushrooms have a demonstrable impact on agriculture and the environment, and great potential for generating socio-economic benefits for human welfare at local, national, and global levels.
Mainaak Mukhopadhyay and Tapan Kumar Mondal
Tea, the globally admired, non-alcoholic, caffeine-containing beverage, is manufactured from the tender leaves of the tea [Camellia sinensis (L.)] plant. It is basically a woody, perennial crop with a lifespan of more than 100 years. Cultivated tea plants are natural hybrids of three major taxa or species—China, Assam (Indian), and Cambod (southern)—distinguished by morphological characters, principally leaf size. Planting materials are either seedlings (10–18 months old) developed from hybrid, polyclonal, or biclonal seeds, or clonal plants developed from single-leaf nodal cuttings of elite genotypes. Plants are forced to remain in the vegetative stage as bushes by cultural practices such as centering, pruning, and plucking, and they are harvested generally from the second year onward at regular intervals of 7–10 days in the tropics and subtropics, with an economic lifespan of up to 60 years. The Chinese were the first to use tea as a medicinal beverage, around 2000 years ago, and today around half of the world’s population drinks tea. It is primarily consumed as black tea (fermented), although green tea (non-fermented) and oolong tea (semifermented) are also consumed in many countries. Tea leaves are also eaten as a vegetable, as in “leppet tea” in Burma and “meing tea” in Thailand.
Green tea has extraordinary antioxidant properties, and black tea plays a positive role in treating cardiovascular ailments. Tea in general has considerable therapeutic value and can cure many diseases. Global tea production (black, green, and instant) has increased significantly during the past few years. China, as the world’s largest tea producer, accounts for more than 38% of total global production of made tea (i.e., ready-to-drink tea) annually, while India is the second-largest producer. India recorded total production of 1233.14 million kg of made tea during 2015–2016, its highest production so far.
Since it is an intensive monoculture, tea cultivation has environmental impacts. Application of weedicides, pesticides, and inorganic fertilizers creates environmental hazards. Meanwhile, insecticides often eliminate the fauna of a vast tract of land. Soil degradation is an additional concern, because the incessant use of fertilizers and herbicides compounds soil erosion. Apart from those issues, chemical runoff into bodies of water can also create problems. Finally, during tea manufacturing, fossil fuel is used to dry the processed leaves, which also increases environmental pollution.
Deforestation in Brazilian Amazonia destroys environmental services that are important for the whole world, and especially for Brazil itself. These services include maintaining biodiversity, avoiding global warming, and recycling water that provides rainfall to Amazonia, to other parts of Brazil, such as São Paulo, and to neighboring countries, such as Argentina. The forest also maintains the human populations and cultures that depend on it. Deforestation rates have gone up and down over the years with major economic cycles. A peak of 27,772 km2/year was reached in 2004, followed by a major decline to 4571 km2/year in 2012, after which the rate trended upward, reaching 7989 km2/year in 2016 (equivalent to about 1.5 hectares per minute). Most (70%) of the decline occurred by 2007, and the slowing in this period is almost entirely explained by declining prices of export commodities such as soy and beef. Government repression measures explain the continued decline from 2008 to 2012, but an important part of the effect of the repression program hinges on a fragile base: a 2008 decision that makes the absence of pending fines a prerequisite for obtaining credit for agriculture and ranching. This could be reversed at the stroke of a pen, and this is a priority for the powerful “ruralist” voting bloc in the National Congress. Massive plans for highways, dams, and other infrastructure in Amazonia, if carried out, will add to forces in the direction of increased deforestation.
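The hectares-per-minute equivalence quoted above follows directly from unit conversion; a minimal check of the 7,989 km²/year figure from the text (a sketch, using a 365-day year):

```python
# Convert the 2016 deforestation rate of 7,989 km2/year to hectares per minute.
KM2_TO_HA = 100                     # 1 km2 = 100 ha
MINUTES_PER_YEAR = 365 * 24 * 60    # 525,600 minutes in a 365-day year

rate_km2_per_year = 7989
rate_ha_per_minute = rate_km2_per_year * KM2_TO_HA / MINUTES_PER_YEAR
print(round(rate_ha_per_minute, 2))  # 1.52, i.e., "about 1.5 hectares per minute"
```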
Deforestation occurs for a wide variety of reasons that vary in different historical periods, in different locations, and in different phases of the process at any given location. Economic cycles, such as recessions and the ups and downs of commodity markets, are one influence. The traditional economic logic, where people deforest to make a profit by producing products from agriculture and ranching, is important but only a part of the story. Ulterior motives also drive deforestation. Land speculation is critical in many circumstances, where the increase in land values (bid up, for example, as a safe haven to protect money from hyperinflation) can yield much higher returns than anything produced by the land. Even without the hyperinflation that was brought under control in 1994, highway projects can yield speculative fortunes to those who are lucky or shrewd enough to have holdings along the highway route. The practical way to secure land holdings is to deforest for cattle pasture. This is also critical to obtaining and defending legal title to the land. In the past, it has also been the key to large ranches gaining generous fiscal incentives from the government. Money laundering also makes deforestation attractive, allowing funds from drug trafficking, tax evasion, and corruption to be converted to “legal” money. Deforestation is further stimulated by logging, mining, and, especially, road construction. Soybeans and cattle ranching are the main replacements for forest, and recently expanded export markets are strengthening these drivers. Population growth and household dynamics are important for areas dominated by small farmers. Extreme degradation, where tree mortality from logging and successive droughts and forest fires replaces forest with open nonforest vegetation, is increasing as a kind of deforestation, and is likely to increase much more in the future.
Controlling deforestation requires addressing its multiple causes. Repression through fines and other command-and-control measures is essential to avoid a presumption of impunity, but these controls must be part of a broader program that addresses underlying causes. The many forms of government subsidies for deforestation must be removed or redirected, and the various ulterior motives must be combated. Industry agreements restricting commodity purchases from properties with illegal deforestation (or from areas cleared after a specified cutoff) have a place in efforts to contain forest loss, despite some problems. A “soy moratorium” has been in effect since 2006, and a “cattle agreement” since 2009. Creation and defense of protected areas is an important part of deforestation control, including both indigenous lands and a variety of kinds of “conservation units.” Containing infrastructure projects is essential if deforestation is to be held in check: once roads are built, much of what happens is outside the government’s control. The notion that the 2005–2012 deforestation slowdown means that the process is under control and that infrastructure projects can be built at will is extremely dangerous. One must also abandon myths that divert efforts to contain deforestation; these include “sustainable logging” and the use of “green” funds for expensive programs to reforest degraded lands rather than retain areas of remaining natural forests. Finally, one must provide alternatives to support the rural population of small farmers. Large investors, on the other hand, can fend for themselves. Tapping the value of the environmental services of the forest has been proposed as an alternative basis for sustaining both the rural population and the forest. Despite some progress, a variety of challenges remain. 
One thing is clear: most of Brazil’s Amazonian deforestation is not “development.” Trading the forest for a vast expanse of extensive cattle pasture does little to secure the well-being of the region’s rural population, is not sustainable, and sacrifices Amazonia’s most valuable resources.
Regimes of environmental stress are exceedingly complex. Particular stressors exist within continua of intensity of environmental factors. Those factors interact with each other, and their detrimental effects on organisms are manifest only at relatively high or low strengths of exposure—in fact, many of them are beneficial at intermediate levels of intensity. Although a diversity of environmental factors is manifest at any time and place, only one or a few of them tend to be dominant as stressors. It is useful to distinguish between stressors that occur as severe events (disturbances) and those that are chronic in their exposure, and to aggregate the kinds of stressors into categories (while noting some degree of overlap among them).
Climatic stressors are associated with extremes of temperature, solar radiation, wind, moisture, and combinations of these factors. They act as stressors if their condition is either insufficient or excessive, in comparison with the needs and comfort zones of organisms or ecosystem processes. Chemical stressors involve environments in which the availability of certain substances is too low to satisfy biological needs, or high enough to cause toxicity or another physiological detriment to organisms or to higher-level attributes of ecosystems. Wildfire is a disturbance that involves the combustion of much of the biomass of an ecosystem, affecting organisms by heat, physical damage, and toxic substances. Physical stress is a disturbance in which an exposure to kinetic energy is intense enough to damage organisms and ecosystems (such as a volcanic blast, seismic sea wave, ice scouring, or anthropogenic explosion or trampling).
Biological stressors are associated with interactions occurring among organisms. They may be directly caused by such trophic interactions as herbivory, predation, and parasitism. They may also indirectly affect the intensity of physical or chemical stressors, as when competition affects the availability of nutrients, moisture, or space.
Extreme environments are characterized by severe regimes of stressors, which result in relatively impoverished ecosystem development. This may be a consequence of either natural or anthropogenic stressors. If a regime of environmental stress intensifies, the resulting responses include a degradation of the structure and function of affected ecosystems and of ecological integrity more generally. In contrast, a relaxation of environmental stress allows some degree of ecosystem recovery.
Jean Louis Weber
Environmental accounting is an attempt to broaden the scope of the accounting frameworks used to assess economic performance, to take stock of elements that are not recorded in public or private accounting books. These gaps occur because the various costs of using nature are not captured, being considered, in many cases, externalities that can be passed on to others or postponed. Natural resources—positive externalities—are depleted with no recording in national accounts (whereas companies do record the depreciation of their own assets). Depletion of renewable resources results in degradation of the environment, which adds to the negative externalities resulting from pollution and from the fragmentation of cyclic and living systems. Degradation, and its financial counterpart, depreciation, are not recorded at all. Therefore, the indicators of production, income, consumption, saving, investment, and debt on which many economic decisions are based are flawed, or at least incomplete and sometimes misleading, when immediate benefits are in fact long-run losses because we are consuming the reproductive functions of our capital. Although national accounting has been an important driving force for change, environmental accounting encompasses all accounting frameworks, including national accounts, financial accounting standards, and accounts established to assess the costs and benefits of plans and projects.
There are several approaches to economic environmental accounting at the national level. One approach aims at measuring genuine economic welfare by taking into account losses from environmental damage caused by economic activity and gains from unrecorded services provided by Nature. Here, particular attention is given to calculating a “Green GDP,” an “Adjusted National Income,” and/or “Genuine Savings,” as well as the value and depletion of natural assets. A different view considers the damage caused to renewable natural capital and the resulting maintenance and restoration costs. Besides approaches based on benefits and costs, more descriptive accounts in physical units are produced for the purpose of assessing resource-use efficiency. With regard to natural assets, the focus can be on assets directly used by the economy or, more broadly, on ecosystem capacity to deliver services, ecosystem resilience, and its possible degradation. These different approaches are not necessarily contradictory, although controversies can be noted in the literature.
The discussion focuses on issues such as the legitimacy of combining values obtained with shadow prices (needed to value the elements that are not priced by the market) with the transaction values recorded in the national accounts, the relative importance of accounts in monetary vs. physical units, and ultimately, the goals for environmental accounting. These goals include assessing the sustainability of the economy in terms of conservation (or increase) of the net income flow and total economic wealth (the weak sustainability paradigm), in relation to the sustainability of the ecosystem, which supports livelihoods and well-being in the broader sense (strong sustainability).
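The “Genuine Savings” adjustment mentioned above can be illustrated with a minimal sketch. All figures below are hypothetical, and the terms follow the general logic described here (depreciation, natural-capital depletion, and pollution damage subtracted from gross saving); an actual compilation, such as the World Bank’s adjusted net saving, uses many more components.

```python
# Minimal sketch of a "Genuine Savings"-style adjustment.
# All figures are hypothetical, in a common currency unit (e.g., billions).

def genuine_savings(gross_saving, fixed_capital_depreciation,
                    resource_depletion, pollution_damage,
                    education_spending=0.0):
    """Adjust gross saving for depreciation, natural-capital depletion,
    pollution damage, and (as an addition) human-capital investment."""
    return (gross_saving
            - fixed_capital_depreciation   # produced-capital depreciation
            + education_spending           # human-capital investment
            - resource_depletion           # minerals, timber, etc., drawn down
            - pollution_damage)            # damages from emissions

# An economy can look thrifty while running down its natural capital:
gs = genuine_savings(gross_saving=200.0,
                     fixed_capital_depreciation=80.0,
                     resource_depletion=90.0,
                     pollution_damage=40.0,
                     education_spending=30.0)
print(gs)  # 20.0 -- far below the unadjusted saving of 200.0
```

The point of the exercise is visible in the gap between the two figures: an indicator that ignores depletion and degradation can report comfortable saving while wealth, broadly defined, is shrinking.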
In 2012, the UN Statistical Commission adopted an international statistical standard called the “System of Environmental-Economic Accounting Central Framework” (SEEA CF). The SEEA CF covers only those items for which enough experience exists for implementation to be proposed to national statistical offices. A second volume, on SEEA Experimental Ecosystem Accounting (SEEA-EEA), was added in 2013 to supplement the SEEA CF with a research agenda and the development of tests. Experiments with the SEEA-EEA are developing at the initiative of the World Bank (WAVES), the UN Environment Programme (VANTAGE, ProEcoServ), and the UN Convention on Biological Diversity (CBD) (SEEA-Ecosystem Natural Capital Accounts-Quick Start Package [ENCA-QSP]).
Besides the SEEA and in relation to it, other environmental accounting frameworks have been developed for specific purposes. These include material flow accounting (MFA), now a regular framework at the Organisation for Economic Co-operation and Development (OECD) for reporting on the Green Growth strategy, and the Intergovernmental Panel on Climate Change (IPCC) guidelines used to report greenhouse gas emissions and carbon sequestration under the UN Framework Convention on Climate Change (UNFCCC). The Ecological Footprint accounts, which aim at raising awareness that our resource use exceeds what the planet can deliver, can be considered as well, as can the Millennium Ecosystem Assessment of 2005, which presents tables and an overall assessment in an accounting style. Environmental accounting is also a subject of interest for business, both as a way to assess impacts—the costs and benefits of projects—and as a means to define new accounting standards for assessing long-term performance and risks.
George Morris and Patrick Saunders
Most people today readily accept that their health and disease are products of personal characteristics such as their age, gender, and genetic inheritance; the choices they make; and, of course, a complex array of factors operating at the level of society. Individuals frequently have little or no control over the cultural, economic, and social influences that shape their lives and their health and well-being. The environment that forms the physical context for their lives is one such influence and comprises the places where people live, learn, work, play, and socialize, the air they breathe, and the food and water they consume. Interest in the physical environment as a component of human health goes back many thousands of years; when, around two and a half millennia ago, humans started to write down ideas about health, disease, and their determinants, many of these ideas centered on the physical environment.
The modern public health movement came into existence in the 19th century as a response to the dreadful unsanitary conditions endured by the urban poor of the Industrial Revolution. These conditions nurtured disease, dramatically shortening life. Thus, a public health movement that would ultimately change the health and prosperity of millions of people across the world was launched on an “environmental conceptualization” of health. Yet the physical environment, especially in towns and cities, has changed dramatically in the 200 years since the Industrial Revolution, and so too have our understanding of the relationship between the environment and human health and the importance we attach to it.
The decades immediately following World War II were distinguished by the declining influence of public health as a discipline. Health and disease were increasingly “individualized”—a trend that served to further diminish interest in the environment, which was no longer seen as an important component in the health concerns of the day. Yet, as the 20th century wore on, a range of factors emerged to re-establish a belief in the environment as a key issue in the health of Western society. These included new toxic and infectious threats acting at the population level, but also the renaissance of a “socioecological model” of public health that demanded a much richer and often more subtle understanding of how local surroundings might act both to improve and to damage human health and well-being.
Yet, just as society has begun to shape a much more sophisticated response that reunites health with place and, with this, new policies to address complex contemporary challenges such as obesity, diminished mental health and well-being, and inequities, a new challenge has emerged. In its simplest terms, human activity now seriously threatens the planetary processes and systems on which humankind depends for health, well-being, and, ultimately, survival. Ecological public health—the need henceforth to build health and well-being on ecological principles—may be seen as society’s greatest 21st-century imperative. Success will involve nothing less than a fundamental rethink of the interplay between society, the economy, and the environment. Importantly, it will demand a conceptualization of public health no less radical than the environmental conceptualization that launched the modern movement in the 19th century, only now the challenge presents itself on a vastly extended temporal and spatial scale.
Juha Merilä and Ary A. Hoffmann
Changing climatic conditions have both direct and indirect influences on abiotic and biotic processes and represent a potent source of novel selection pressures for adaptive evolution. In addition, climate change can impact evolution by altering patterns of hybridization, changing population size, and altering patterns of gene flow in landscapes. Given that scientific evidence for rapid evolutionary adaptation to spatial variation in abiotic and biotic environmental conditions—analogous to that seen in changes brought by climate change—is ubiquitous, ongoing climate change is expected to have large and widespread evolutionary impacts on wild populations. However, phenotypic plasticity, migration, and various kinds of genetic and ecological constraints can preclude organisms from evolving much in response to climate change, and generalizations about the rate and magnitude of expected responses are difficult to make for a number of reasons.
First, the study of microevolutionary responses to climate change is a young field of investigation. While interest in evolutionary impacts of climate change goes back to early macroevolutionary (paleontological) studies focused on prehistoric climate changes, microevolutionary studies started only in the late 1980s. The discipline gained real momentum in the 2000s after the concept of climate change became of interest to the general public and funding organizations. As such, no general conclusions have yet emerged. Second, the complexity of biotic changes triggered by novel climatic conditions renders predictions about patterns and strength of natural selection difficult. Third, predictions are complicated also because the expression of genetic variability in traits of ecological importance varies with environmental conditions, affecting expected responses to climate-mediated selection.
There are now several examples where organisms have evolved in response to selection pressures associated with climate change, including changes in the timing of life history events and in the ability to tolerate abiotic and biotic stresses arising from climate change. However, there are also many examples where expected selection responses have not been detected. This may be partly explainable by methodological difficulties involved with detecting genetic changes, but also by various processes constraining evolution.
There are concerns that the rates of environmental changes are too fast to allow many, especially large and long-lived, organisms to maintain adaptedness. Theoretical studies suggest that maximal sustainable rates of evolutionary change are on the order of 0.1 haldanes (i.e., phenotypic standard deviations per generation) or less, whereas the rates expected under current climate change projections will often require faster adaptation. Hence, widespread maladaptation and extinctions are expected. These concerns are compounded by the expectation that the amount of genetic variation harbored by populations and available for selection will be reduced by habitat destruction and fragmentation caused by human activities, although in some cases this may be countered by hybridization. Rates of adaptation will also depend on patterns of gene flow and the steepness of climatic gradients. Theoretical studies also suggest that phenotypic plasticity (i.e., nongenetic phenotypic changes) can affect evolutionary genetic changes, but relevant empirical evidence is still scarce. While all of these factors point to a high level of uncertainty around evolutionary changes, it is nevertheless important to consider evolutionary resilience in enhancing the ability of organisms to adapt to climate change.
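The haldane rate cited above has a simple operational form: the change in trait mean, divided by the phenotypic standard deviation and the number of generations elapsed. The sketch below uses hypothetical numbers for illustration; rigorous estimates typically work with log-transformed trait values and a pooled standard deviation.

```python
# Evolutionary rate in haldanes: phenotypic standard deviations per
# generation. This is the simplified form; careful estimates use
# ln-transformed traits and a pooled SD. All numbers are hypothetical.

def haldanes(mean_start, mean_end, phenotypic_sd, generations):
    """Rate of phenotypic change, in SDs per generation."""
    return (mean_end - mean_start) / (phenotypic_sd * generations)

# A population whose mean breeding date advances 6 days over 20
# generations, with a phenotypic SD of 10 days:
r = haldanes(mean_start=120.0, mean_end=114.0,
             phenotypic_sd=10.0, generations=20)
print(abs(r))  # 0.03 -- below the ~0.1 haldane ceiling noted above
```

A rate computed this way can be compared directly against the theoretical sustainability threshold of roughly 0.1 haldanes mentioned in the text.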
Mark V. Barrow
The prospect of extinction, the complete loss of a species or other group of organisms, has long provoked strong responses. Until the turn of the 18th century, deeply held and widely shared beliefs about the order of nature led to a firm rejection of the possibility that species could entirely vanish. During the 19th century, however, resistance to the idea of extinction gave way to widespread acceptance following the discovery of the fossil remains of numerous previously unknown forms and direct experience with the contemporary human-driven decline and destruction of several species. In an effort to stem continued loss, around the turn of the 20th century, naturalists, conservationists, and sportsmen developed arguments for preventing extinction, created wildlife conservation organizations, lobbied for early protective laws and treaties, pushed for the first government-sponsored parks and refuges, and experimented with captive breeding. In the first half of the 20th century, scientists began systematically gathering more data about the problem through global inventories of endangered species and the first life-history and ecological studies of those species.
The second half of the 20th and the beginning of the 21st centuries have been characterized both by accelerating threats to the world’s biota and greater attention to the problem of extinction. Powerful new laws, like the U.S. Endangered Species Act of 1973, have been enacted and numerous international agreements negotiated in an attempt to address the issue. Despite considerable effort, scientists remain fearful that the current rate of species loss is similar to that experienced during the five great mass extinction events identified in the fossil record, leading to declarations that the world is facing a biodiversity crisis. Responding to this crisis, often referred to as the sixth extinction, scientists have launched a new interdisciplinary, mission-oriented discipline, conservation biology, that seeks not just to understand but also to reverse biota loss. Scientists and conservationists have also developed controversial new approaches to the growing problem of extinction: rewilding, which involves establishing expansive core reserves that are connected with migratory corridors and that include populations of apex predators, and de-extinction, which uses genetic engineering techniques in a bid to resurrect lost species. Even with the development of new knowledge and new tools that seek to reverse large-scale species decline, a new and particularly imposing danger, climate change, looms on the horizon, threatening to undermine those efforts.
Hans Keune and Timo Assmuth
Framing and dealing with complexity are crucially important in environment and human health science, policy, and practice. Complexity is a key feature of most environment and human health issues, which by definition include aspects of the environment and human health, both of which constitute complex phenomena. The number and range of factors that may play a role in an environment and human health issue are enormous, and the issues have a multitude of characteristics and consequences. Framing this complexity is crucial because it will involve key decisions about what to take into account when addressing environment and human health issues and how to deal with them. This is not merely a technical process of scientific framing, but also a methodological decision-making process with both scientific and societal implications. In general, the benefits and risks related to such issues cannot be generalized or objectified, and will be distributed unevenly, resulting in health and environmental inequalities. Even more generally, framing is crucial because it reflects cultural factors and historical contingencies, perceptions and mindsets, political processes, and associated values and worldviews. Framing is at the core of how we as humans relate to, and deal with, environment and human health, as scientists, policymakers, and practitioners, with models, policies, or actions.
Agriculture is at the very center of the human enterprise; its trappings are in evidence all around, yet the agricultural past is an exceptionally distant place from modern America. While the majority of Americans once raised a significant portion of their own food, that ceased to be the case at the beginning of the 20th century. Only a very small portion of the American population today has a personal connection to agriculture. People still must eat, but the process by which food arrives on their plates is less evident than ever. The evolution of that process, with all of its many participants, is the stuff of agricultural history. The task of the agricultural historian is to make that past evident, and usable, for an audience that is divorced from the production of food. People need to know where their food comes from, past and present, and what has gone into the creation of the modern food system.
The term ecological design was coined in a 1996 book by Sim van der Ryn and Stuart Cowan, in which the authors argued for a seamless integration of human activities with natural processes to minimize destructive environmental impact. Following their cautionary statements, William McDonough and Michael Braungart published their 2002 manifesto Cradle to Cradle, which proposed a circular political economy to replace the linear logic of “cradle to grave.” These books have been foundational in architecture and design discussions on sustainability, establishing the technical dimension, as well as the logic, of efficiency, optimization, and evolutionary competition in environmental debates. Cradle to Cradle evolved into a production model implemented by a number of companies, organizations, and governments around the world, and it has also become a registered trademark and a product certification.
Popularized recently, these developments imply a very short history for the growing field of ecological design. However, its lineage harks back at least to Ernst Haeckel’s 1866 definition of the field of ecology as an integral link between living organisms and their surroundings (Generelle Morphologie der Organismen, 1866) and to Henry David Thoreau’s famous 1854 manual for self-reliance and living in proximity with natural surroundings, in the cabin he built at Walden Pond, Massachusetts (Walden; or, Life in the Woods, 1854).
Since World War II, contrary to the position of ecological design as a call to fit harmoniously within the natural world, there has been a growing interest in a form of synthetic naturalism (Closed Worlds: The Rise and Fall of Dirty Physiology, 2015), where the laws of nature and metabolism are displaced from the domain of wilderness to the domain of cities, buildings, and objects. With rising awareness of what John McHale called disturbances in the planetary reservoir (The Future of the Future, 1969), the field of ecological design has come to signify not only the integration of the designed object or space in the natural world, but also the reproduction of the natural world in design principles and tools through technological mediation. This idea of architecture and design producing nature paralleled what Buckminster Fuller, John McHale, and Ian McHarg, among others, referred to as world planning; that is, understanding ecological design as the design of the planet itself as much as the design of an object, building, or territory. Unlike van der Ryn and Cowan’s argument, which centered on a deep appreciation for nature’s equilibrium, ecological design might thus commence with the synthetic replication of natural systems.
These conflicting positions reflect only a small fraction of the many terms used to describe the field of ecological design, including green, sustainable, alternative, resilient, self-sufficient, organic, and biotechnical. This article argues that ecological design starts with the reconceptualization of the world as a complex system of flows rather than a discrete compilation of objects, which the visual artist and theorist György Kepes described as one of the fundamental reorientations of the 20th century (Art and Ecological Consciousness, 1972).
The economic tool of individual transferable quotas (ITQs) gives owners exclusive and transferable rights to catch a given portion of the total allowable catch (TAC) of a given fish stock. Authorities establish TACs and then divide them among individual fishers or firms in the form of individual catch quotas, usually expressed as a percentage of the TAC. ITQs are transferable through buying and selling on an open market. The main argument of ITQ proponents is that the quotas eliminate the need to “race for the fish” and thus increase economic returns while eliminating overcapacity and overfishing. In general, fisheries-management objectives comprise ecological (sustainable use of fish stocks), economic (no economic waste), and social (mainly the equitable distribution of fisheries benefits) concerns. There is evidence that ITQs do indeed reduce economic waste and increase profits for those remaining in fisheries. However, they do not perform well against ecological or social objectives. This article presents a proposal that integrates ITQs into a comprehensive and effective ecosystem-based fisheries management system, an arrangement likely to perform much better than ITQs alone with respect to ecological, economic, and social objectives.
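The TAC-sharing arithmetic described above can be sketched in a few lines. The vessel names, shares, and TAC figure below are hypothetical, chosen only to show how percentage shares translate into tonnage and how transferability works.

```python
# Hypothetical sketch: dividing a total allowable catch (TAC) among
# quota holders, each owning a fractional share, and transferring quota.

def allocate(tac_tonnes, shares):
    """shares maps owner -> fraction of the TAC; fractions must sum to 1."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return {owner: tac_tonnes * frac for owner, frac in shares.items()}

shares = {"vessel_A": 0.5, "vessel_B": 0.3, "vessel_C": 0.2}
print(allocate(10_000, shares))  # vessel_A gets 5000.0 t, etc.

# ITQs are transferable: vessel_C sells half of its share to vessel_A.
shares["vessel_A"] += 0.1
shares["vessel_C"] -= 0.1
print(allocate(10_000, shares)["vessel_A"])  # now 6000.0 t
```

Because shares are fractions of the TAC rather than fixed tonnages, the authority can tighten or relax the TAC each season without reallocating quota.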
Ann E. Ferris, Richard Garbaccio, Alex Marten, and Ann Wolverton
Concern regarding the economic impacts of environmental regulations has been part of the public dialogue since the beginning of the U.S. EPA. Even as large improvements in environmental quality occurred, government and academia began to examine the potential consequences of regulation for economic growth and productivity. In general, early studies found measurable but not severe effects on the overall national economy: although price increases due to regulatory requirements slightly outweighed the stimulative effect of investments in pollution abatement, the two nearly offset one another. However, these studies also highlighted potentially substantial effects on local labor markets due to the regional and industry concentration of plant closures.
More recently, a substantial body of work examined industry-specific effects of environmental regulation on the productivity of pollution-intensive firms most likely to face pollution control costs, as well as on plant location and employment decisions within firms. Most econometric-based studies found relatively small or no effect on sector-specific productivity and employment, though firms were less likely to open plants in locations subject to more stringent regulation compared to other U.S. locations. In contrast, studies that used economy-wide models to explicitly account for sectoral linkages and intertemporal effects found substantial sector-specific effects due to environmental regulation, including in sectors that were not directly regulated.
It is also possible to think about the overall impacts of environmental regulation on the economy through the lens of benefit-cost analysis. While this type of approach does not speak to how the costs of regulation are distributed across sectors, it has the advantage of explicitly weighing the benefits of environmental improvements against their costs. If benefits are greater than costs, then overall social welfare is improved. When conducting such exercises, it is important to anticipate the ways in which improvements in environmental quality may either directly improve the productivity of economic factors—such as through the increased productivity of outdoor workers—or change the composition of the economy as firms and households change their behavior. If individuals are healthier, for example, they may choose to reallocate their time between work and leisure. Although introducing a role for pollution in production and household behavior can be challenging, studies that have partially accounted for this interconnection have found substantial impacts of improvements in environmental quality on the overall economy.
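The benefit-cost logic described above reduces to comparing discounted streams of benefits and costs: a rule passes the test when the present value of benefits exceeds that of costs. The streams and the 3% discount rate in the sketch below are purely illustrative, not drawn from any actual regulatory analysis.

```python
# Illustrative benefit-cost comparison for a hypothetical regulation.
# Annual streams, year 0 first; figures and discount rate are made up.

def present_value(stream, rate):
    """Discount a list of annual amounts (year 0 first) to present value."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

costs    = [100.0, 20.0, 20.0, 20.0, 20.0]  # up-front abatement, then O&M
benefits = [0.0, 60.0, 60.0, 60.0, 60.0]    # health gains phase in later

net_benefits = present_value(benefits, 0.03) - present_value(costs, 0.03)
print(net_benefits > 0)  # True: benefits exceed costs in present value
```

Note what this framing leaves out, as the text observes: it says nothing about how costs are distributed across sectors, and it assumes the benefit stream can be monetized in the first place.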
Margarete Kalin, William N. Wheeler, Michael P. Sudbury, and Bryn Harris
The first treatise on mining and extractive metallurgy, published by Georgius Agricola in 1556, was also the first to highlight the destructive environmental side effects of mining and metals extraction, namely dead fish and poisoned water. These effects, unfortunately, are still with us. Since 1556, mining methods and knowledge of metal extraction and of the chemical and microbial processes leading to environmental deterioration have grown tremendously. Man’s insatiable appetite for metals and energy has resulted in mines vastly larger than those envisioned in 1556, compounding the deterioration. The annual amount of mined ore and waste rock is estimated at 20 billion tons, covering 1,000 km². The industry also consumes 80 km³ of freshwater annually, which becomes contaminated.
Since metals are essential in modern society, cost-effective, sustainable remediation measures need to be developed. Engineered covers and dams enclose wastes and slow the weathering process, but, with time, become permeable. Neutralization of acid mine drainage produces metal-laden sludges that, in time, release the metals again. These measures are stopgaps at best, and are not sustainable. Focus should be on inhibiting or reducing the weathering rate, recycling, and curtailing water usage. The extraction of only the principal economic mineral or metal generally drives the economics, with scant attention being paid to other potential commodities contained in the deposit. Technology exists for recovering more valuable products and enhancing the project economics, resulting in a reduction of wastes and water consumption of up to 80% compared to “conventional processing.”
Implementation of such improvements requires a drastic change, a paradigm shift, in the way that the industry approaches metals extraction. Combining new extraction approaches, more efficient water usage, and ecological engineering methods to deal with wastes will increase the sustainability of the industry and reduce the pressure on water and land resources.
From an ecological perspective, waste rock and tailings need to be thought of as primitive ecosystems. These habitats are populated by heat-, acid-, and saline-loving microbes (extremophiles). Ecological engineering utilizes geomicrobiological, physical, and chemical processes to change the mineral surface and encourage biofilm growth (the microbial growth form) within wastes by enhancing the growth of oxygen-consuming microbes. This reduces the oxygen available for oxidation, leading to improved drainage quality. At the water–sediment interface, microbes assist in the neutralization of acid water (Acid Reduction Using Microbiology). To remove metals from the waste-water column, indigenous biota are promoted (Biological Polishing), with inorganic particulate matter serving as flocculation agents. This ecological approach generates organic matter, which upon death settles with the adsorbed metals to the sediment. Once the metals reach the deeper, reducing zones of the sediments, microbial biomineralization processes convert them to relatively stable secondary minerals, forming biogenic ores for future generations.
The mining industry has developed and thrived in an age when resources, space, and water appeared limitless. With the widely accepted rise of the Anthropocene and with global land and water shortages, the mining industry must become more sustainable. Not only is a paradigm shift in thinking needed; the will to implement such a shift is also required for the future of the industry.