In 2018, barley accounted for only 5% of cereal production worldwide, but regionally for up to 40% of cereal production. The cereal is among the oldest crop species and one of the crop plants best adapted to a broad diversity of climates and environments.
Originating from the wild progenitor species Hordeum vulgare ssp. spontaneum, biogeographically located in the Fertile Crescent of the Near East, the domesticated form developed as a founder crop in aceramic Neolithic societies 11,000 years ago, was cultivated in monocultures in Bronze Age Mesopotamia, and entered the New World after 1492.
Its stress tolerance in response to increased aridity and salinity on the one hand, and its adaptability to cool climates on the other, partially explain its broad range of applications for subsistence and economy across different cultures, such as for baking, cooking, beer brewing, and as animal feed.
Although the use of fermented starch for producing alcoholic beverages and foods is globally documented in archaeological contexts dating from at least the beginning of the Holocene era, it becomes concrete only in societies with a written culture, such as Bronze Age Mesopotamia and Egypt, where beer played a considerable role in everyday diet and its production represented an important sector of productivity.
In 2004, approximately 85% of barley production was destined for feeding animals. However, as a component of the human diet, studies on the health benefits of the micronutrients in barley have found that it has a positive effect on blood cholesterol and glucose levels, and in turn impacts cardiovascular health and diabetes control. A growing number of barley-breeding programs worldwide focus on improving the processing characteristics, nutritional value, and stress tolerance of barley within the context of global climate change.
Kevin J. Boyle and Christopher F. Parmeter
Benefit transfer is the projection of benefits from one place and time to another time at the same place or to a new place. Thus, benefit transfer includes the adaptation of an original study to a new policy application at the same location or the adaptation to a different location. The appeal of a benefit transfer is that it can be cost effective, both monetarily and in time. Using previous studies, analysts can select existing results to construct a transferred value for the desired amenity influenced by the policy change. Benefit transfer practices are not unique to ecosystem service valuation and are generally applicable to a variety of changes in ecosystem services. An ideal benefit transfer will scale value estimates to both the ecosystem services and the preferences of those who hold values. The article outlines the steps in a benefit transfer, types of transfers, accuracy of transferred values, and challenges when conducting ecosystem service transfers, and ends with recommendations for the implementation of benefit transfers to support decision-making.
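The simplest form of the transfer described above is a unit value transfer, in which a per-household value from the study site is adjusted for differences between sites before being applied at the policy site. The sketch below illustrates one common adjustment, scaling by relative income with an assumed income elasticity; all names and numbers are hypothetical, not taken from the article.

```python
# Hedged sketch of a unit value transfer with an income adjustment.
# The function, its parameters, and the figures below are illustrative
# assumptions, not values from the article.

def transfer_value(study_value, study_income, policy_income, income_elasticity=1.0):
    """Scale a per-household value estimate from a study site to a policy site
    by relative income raised to an assumed income elasticity."""
    return study_value * (policy_income / study_income) ** income_elasticity

# Example: a $50/household wetland value from a site with mean income $40,000,
# transferred to a policy site with mean income $50,000, elasticity 0.7.
v = transfer_value(50.0, 40_000, 50_000, income_elasticity=0.7)
print(round(v, 2))  # -> 58.45
```

With an elasticity of 1.0 the adjustment is simply proportional to income; choosing the elasticity is itself a judgment call that affects transfer accuracy.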
Lora Fleming, Niccolò Tempini, Harriet Gordon-Brown, Gordon L. Nichols, Christophe Sarran, Paolo Vineis, Giovanni Leonardi, Brian Golding, Andy Haines, Anthony Kessel, Virginia Murray, Michael Depledge, and Sabina Leonelli
Big data refers to large, complex, potentially linkable data from diverse sources, ranging from the genome and social media, to individual health information and the contributions of citizen science monitoring, to large-scale long-term oceanographic and climate modeling and its processing in innovative and integrated “data mashups.” Over the past few decades, thanks to the rapid expansion of computer technology, there has been a growing appreciation for the potential of big data in environment and human health research.
The promise of big data mashups in environment and human health includes the ability to truly explore and understand the “wicked environment and health problems” of the 21st century, from tracking the global spread of the Zika and Ebola virus epidemics to modeling future climate change impacts and adaptation at the city or national level. Other opportunities include the possibility of identifying environment and health hot spots (i.e., locations where people and/or places are at particular risk), where innovative interventions can be designed and evaluated to prevent or adapt to climate and other environmental change over the long term with potential (co-) benefits for health; and of locating and filling gaps in existing knowledge of relevant linkages between environmental change and human health. There is the potential for the increasing control of personal data (both access to and generation of these data), benefits to health and the environment (e.g., from smart homes and cities), and opportunities to contribute via citizen science research and share information locally and globally.
At the same time, there are challenges inherent with big data and data mashups, particularly in the environment and human health arena. Environment and health represent very diverse scientific areas with different research cultures, ethos, languages, and expertise. Equally diverse are the types of data involved (including time and spatial scales, and different types of modeled data), often with no standardization of the data to allow easy linkage beyond time and space variables, as data types are mostly shaped by the needs of the communities where they originated and have been used. Furthermore, these “secondary data” (i.e., data re-used in research) are often not even originated for this purpose, a particularly relevant distinction in the context of routine health data re-use. And the ways in which the research communities in health and environmental sciences approach data analysis and synthesis, as well as statistical and mathematical modeling, are widely different.
There is a lack of trained personnel who can span these interdisciplinary divides or who have the necessary expertise in the techniques that make adequate bridging possible, such as software development, big data management and storage, and data analyses. Moreover, health data have unique challenges due to the need to maintain confidentiality and data privacy for the individuals or groups being studied, to evaluate the implications of shared information for the communities affected by research and big data, and to resolve the long-standing issues of intellectual property and data ownership occurring throughout the environment and health fields. As with other areas of big data, the new “digital data divide” is growing, where some researchers and research groups, or corporations and governments, have the access to data and computing resources while others do not, even as citizen participation in research initiatives is increasing. Finally, with the exception of some business-related activities, funding, especially with the aim of encouraging the sustainability and accessibility of big data resources (from personnel to hardware), is currently inadequate; there is widespread disagreement over what business models can support long-term maintenance of data infrastructures, and those that exist now are often unable to deal with the complexity and resource-intensive nature of maintaining and updating these tools.
Nevertheless, researchers, policy makers, funders, governments, the media, and members of the general public are increasingly recognizing the innovation and creativity potential of big data in environment and health and many other areas. This can be seen in how the relatively new and powerful movement of Open Data is being crystalized into science policy and funding guidelines. Some of the challenges and opportunities, as well as some salient examples, of the potential of big data and big data mashup applications to environment and human health research are discussed.
Human activities in the Anthropocene are influencing the twin processes of biodiversity generation and loss in complex ways that threaten the maintenance of biodiversity levels that underpin human well-being. Yet many scientists and practitioners still present a simplistic view of biodiversity as a static stock rather than one determined by a dynamic interplay of feedback processes that are affected by anthropogenic drivers. Biodiversity describes the variety of life on Earth, from the genes within an organism to the ecosystem level. However, this article focuses on variation among living organisms, both within and between species. Within species, biodiversity is reflected in genetic, and consequent phenotypic, variations among individuals. Genetic diversity is generated by germ line mutations, genetic recombination during sexual reproduction, and immigration of new genotypes into populations. Across species, biodiversity is reflected in the number of different species present and also, by some metrics, in the evenness of their relative abundance. At this level, biodiversity is generated by processes of speciation and immigration of new species into an area. Anthropogenic drivers affect all these biodiversity generation processes, while the levels of genetic diversity can feed back and affect the level of species diversity, and vice versa. Therefore, biodiversity maintenance is a complex balance of processes and the biodiversity levels at any point in time may not be at equilibrium.
A major concern for humans is that our activities are driving rapid losses of biodiversity, which outweigh by orders of magnitude the processes of biodiversity generation. A wide range of species and genetic diversity could be necessary for the provision of ecosystem functions and services (e.g., in maintaining the nutrient cycling, plant productivity, pollination, and pest control that underpin crop production). The importance of biodiversity becomes particularly marked over longer time periods, and especially under varying environmental conditions.
In terms of biodiversity losses, there are natural processes that cause roughly continuous, low-level losses, but there is also strong evidence from fossil records for transient events in which exceptionally large loss of biodiversity has occurred. These major extinction episodes are thought to have been caused by various large-scale environmental perturbations, such as volcanic eruptions, sea-level falls, climatic changes, and asteroid impacts. From all these events, biodiversity has shown recovery over subsequent calmer periods, although the composition of higher-level evolutionary taxa can be significantly altered.
In the modern era, biodiversity appears to be undergoing another mass extinction event, driven by large-scale human impacts. The primary mechanisms of biodiversity loss caused by humans vary over time and by geographic region, but they include overexploitation, habitat loss, climate change, pollution (e.g., nitrogen deposition), and the introduction of non-native species. It is worth noting that human activities may also lead to increases in biodiversity in some areas through species introductions and climatic changes, although these overall increases in species richness may come at the cost of loss of native species, and with uncertain effects on ecosystem service delivery. Genetic diversity is also affected by human activities, with many examples of erosion of diversity through crop and livestock breeding or through the decline in abundance of wild species populations. Significant future challenges are to develop better ways to monitor the drivers of biodiversity loss and biodiversity levels themselves, making use of new technologies, and improving coverage across geographic regions and taxonomic scope. Rather than treating biodiversity as a simple stock at equilibrium, developing a deeper understanding of the complex interactions—both between environmental drivers and between genetic and species diversity—is essential to manage and maintain the benefits that biodiversity delivers to humans, as well as to safeguard the intrinsic value of the Earth’s biodiversity for future generations.
Peter Kareiva and Isaac Kareiva
The concept of biodiversity hotspots arose as a science-based framework with which to identify high-priority areas for habitat protection and conservation—often in the form of nature reserves. The basic idea is that with limited funds and competition from humans for land, we should use range maps and distributional data to protect areas that harbor the greatest biodiversity and that have experienced the greatest habitat loss. In its early application, much analysis and scientific debate went into asking the following questions: Should all species be treated equally? Do endemic species matter more? Should the magnitude of threat matter? Does evolutionary uniqueness matter? And if one has good data on one broad group of organisms (e.g., plants or birds), does it suffice to focus on hotspots for a few taxonomic groups and then expect to capture all biodiversity broadly? Early applications also recognized that hotspots could be identified at a variety of spatial scales—from global to continental, to national to regional, to even local. Hence, within each scale, it is possible to identify biodiversity hotspots as targets for conservation.
In the last 10 years, the concept of hotspots has been enriched to address some key critiques, including the problem of ignoring important areas that might have low biodiversity but that certainly were highly valued because of charismatic wild species or critical ecosystem services. Analyses revealed that although the spatial correlation between high-diversity areas and high-ecosystem-service areas is low, it is possible to use quantitative algorithms that achieve both high protection for biodiversity and high protection for ecosystem services without increasing the required area as much as might be expected.
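The quantitative algorithms mentioned above typically exploit complementarity: each site added to a reserve network should contribute species (or services) not already protected. A minimal greedy sketch of this idea follows; the sites, species, and budget are entirely hypothetical, and real planning tools add costs, spatial constraints, and richer objectives.

```python
# Minimal greedy complementarity heuristic for reserve selection.
# Sites and species are hypothetical; production tools (e.g., Marxan)
# optimize far richer objectives with costs and spatial constraints.

sites = {
    "A": {"sp1", "sp2", "sp3"},
    "B": {"sp3", "sp4"},
    "C": {"sp5"},
    "D": {"sp1", "sp4", "sp5"},
}

def greedy_cover(sites, budget):
    """Repeatedly pick the site that adds the most not-yet-covered species."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(sites, key=lambda s: len(sites[s] - covered))
        if not sites[best] - covered:
            break  # nothing new to gain; stop early
        chosen.append(best)
        covered |= sites[best]
    return chosen, covered

chosen, covered = greedy_cover(sites, budget=2)
print(chosen, sorted(covered))
```

Here two sites suffice to cover all five species, even though no single site is "richest" for every species; the same greedy logic extends to joint biodiversity-and-ecosystem-service targets by scoring sites on both.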
Currently, a great deal of research asks what the impact of climate change on biodiversity hotspots will be, as well as to what extent conservation can maintain high biodiversity in the face of climate change. Two important approaches are detailed models and statistical assessments that relate species distributions to climate, or alternatively “conserving the stage” for high biodiversity, where the stage comprises regions with topographies or habitat heterogeneity of the sort expected to generate high species richness.
Finally, conservation planning has most recently embraced what is in some sense the inverse of biodiversity hotspots: what we might call conservation wastelands. This approach recognizes that in the Anthropocene epoch, human development and infrastructure are so vast that, in addition to using data to identify biodiversity hotspots, we should use data to identify highly degraded habitats and ecosystems. These degraded lands can then become priority development areas for wind farms, solar energy facilities, oil palm plantations, and so forth. Conservation plans now commonly pair maps of biodiversity hotspots with maps of degraded lands that highlight areas for development. Putting the two maps together should make conservation much more effective, because it provides both habitat for species and land for economic development, something that can win broader political support than highlighting biodiversity hotspots alone.
Although the concept of biodiversity emerged 30 years ago, patterns and processes influencing ecological diversity have been studied for more than a century. Historically, ecological processes tended to be considered as occurring in local habitats that were spatially homogeneous and temporally at equilibrium. Initially considered as a constraint to be avoided in ecological studies, spatial heterogeneity was progressively recognized as critical for biodiversity. This resulted, in the 1970s, in the emergence of a new discipline, landscape ecology, whose major goal is to understand how spatial and temporal heterogeneity influence biodiversity. To achieve this goal, researchers came to realize that a fundamental issue revolves around how they choose to conceptualize and measure heterogeneity. Indeed, observed landscape patterns and their apparent relationship with biodiversity often depend on the scale of observation and the model used to describe the landscape. Due to the strong influence of island biogeography, landscape ecology has focused primarily on spatial heterogeneity. Several landscape models were conceptualized, allowing for the prediction and testing of distinct but complementary effects of landscape heterogeneity on species diversity. We now have ample empirical evidence that patch structure, patch context, and mosaic heterogeneity all influence biodiversity. More recently, the increasing recognition of the role of temporal scale has led to the development of new conceptual frameworks acknowledging that landscapes are not only heterogeneous but also dynamic. The current challenge remains to truly integrate both spatial and temporal heterogeneity in studies on biodiversity. This integration is even more challenging when considering that biodiversity often responds to environmental changes with considerable time lags, and multiple drivers of global changes are interacting, resulting in non-additive and sometimes antagonistic effects. 
Recent technological advances in remote sensing, the availability of massive amounts of data, and long-term studies represent, however, very promising avenues to improve our understanding of how spatial and temporal heterogeneity influence biodiversity.
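How heterogeneity is measured is, as the abstract notes, a fundamental choice. One widely used compositional metric is the Shannon diversity index applied to land-cover proportions; the sketch below uses hypothetical proportions purely for illustration.

```python
# Shannon diversity index over land-cover class proportions, a common
# measure of landscape compositional heterogeneity. The proportions
# below are hypothetical.
import math

def shannon_diversity(proportions):
    """H' = -sum(p_i * ln(p_i)) over land-cover class proportions."""
    return -sum(p * math.log(p) for p in proportions if p > 0)

# A landscape of 50% forest, 30% cropland, 20% wetland:
h = shannon_diversity([0.5, 0.3, 0.2])
print(round(h, 3))  # -> 1.03
```

H' rises with both the number of cover classes and the evenness of their proportions; a single-class landscape scores 0. Compositional indices like this are only one side of the story, since they ignore the spatial configuration of patches.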
James M. MacDonald
Industrialized livestock production can be characterized by five key attributes: confinement feeding of animals, separation of feed and livestock production, specialization, large size, and close vertical linkages with buyers. Industrialized livestock operations—popularly known as CAFOs, for Concentrated Animal Feeding Operations—have spread rapidly in developed and developing countries; by the early 21st century, they accounted for three quarters of poultry production and over half of global pork production, and held a growing foothold in dairy production.
Industrialized systems have created significant improvements in agricultural productivity, leading to greater output of meat and dairy products for given commitments of land, feed, labor, housing, and equipment. They have also been effective at developing, applying, and disseminating research, leading to persistent improvements in animal genetics, breeding, feed formulations, and biosecurity. The reduced prices associated with productivity improvements support increased meat and dairy product consumption in low- and middle-income countries, while reducing the resources used for such consumption in higher-income countries.
The high stocking densities associated with confined feeding also exacerbate several social costs associated with livestock production. Animals in high-density environments may be exposed to diseases, subject to attacks from other animals, and unable to engage in natural behaviors, raising concerns about higher levels of fear, pain, stress, and boredom. Such animal welfare concerns have gained greater salience in recent years.
By consolidating large numbers of animals in a location, industrial systems also concentrate animal wastes, often in levels that exceed the capacity of local cropland to absorb the nutrients in manure. While the productivity improvements associated with industrial systems reduce the resource demands of agriculture, excessive localized concentrations of manure can lead to environmental damage through contamination of ground and surface water and through volatilization of nitrogen nutrients into airborne pollutants.
Finally, animals in industrialized systems are often provided with antibiotics in their feed or water, in order to treat and prevent disease, but also to realize improved feed absorption (“a production purpose”). Bacteria are developing resistance to many important antibiotic drugs; the extensive use of such drugs in human and animal medicine has contributed to the spread of antibiotic resistance, with consequent health risks to humans.
The social costs associated with industrialized production have led to a range of regulatory interventions, primarily in North America and Europe, as well as private sector attempts to alter the incentives that producers face through the development of labels and through associated adjustments within supply chains.
Elisabeth N. Bui
Driving forces for natural soil salinity and alkalinity are climate, rock weathering, ion exchange, and mineral equilibria reactions that ultimately control the chemical composition of soil and water. The major weathering reactions that produce soluble ions are tabulated. Where evapotranspiration is greater than precipitation, downward water movement is insufficient to leach solutes out of the soil profile and salts can precipitate. Microbes involved in organic matter mineralization, and thus in the carbon, nitrogen, and sulfur biogeochemical cycles, are also implicated. Seasonal contrast and evaporative concentration during dry periods accelerate short-term oxidation-reduction reactions and local and regional accumulation of carbonate and sulfur minerals. The presence of salts and alkaline conditions, together with the occurrence of drought and seasonal waterlogging, creates some of the most extreme soil environments, where only specially adapted organisms are able to survive. Sodic soils are alkaline, rich in sodium carbonates, with an exchange complex dominated by sodium ions. Such sodic soils, when low in other salts, exhibit dispersive behavior, and they are difficult to manage for cropping. Maintaining the productivity of sodic soils requires control of the flocculation-dispersion behavior of the soil. Poor land management can also lead to anthropogenically induced secondary salinity. New developments in physical chemistry are providing insights into ion exchange and how it controls flocculation-dispersion in soil. New water and solute transport models are enabling better options for remediation of saline and/or sodic soils.
Mental and behavioral disorders account for approximately 7.4% of the global burden of disease, with depression now the world’s leading cause of disability. One in four people in the world will suffer from a mental health problem at some point in their life. City planning and design holds much promise for reducing this burden of disease, and for offering solutions that are affordable, accessible, and equitable. Increasingly, urban green space is recognized as an important social determinant of health, with the potential to protect mental health, for example by buffering against life stressors, as well as to relieve the symptom severity of specific psychiatric disorders. Pathways linking urban green space with mental wellbeing include the ability of natural stimuli (trees, water, light patterns) to promote “involuntary attention,” allowing the brain to disengage and recover from cognitive fatigue. This article brings together evidence of the positive effects of urban green space on common mental health problems (i.e., stress, anxiety, depression) together with evidence of its role in the symptom relief of specific psychiatric disorders, including schizophrenia and psychosis, post-traumatic stress disorder (PTSD), dementia, attention deficit/hyperactivity disorder (ADHD), and autism. Urban green space is a potential force for building mental health: city planners, urban designers, policy makers, and public health professionals need to maximize the opportunities in applying green space strategies both for preventing mental ill health and for supporting its treatment.
Paolo Inglese and Giuseppe Sortino
In May, every year since 1857, in the great park of Sans-Souci in Potsdam just outside Berlin—a park begun in 1745 by Frederick II of Hohenzollern and expanded a century later by Frederick William IV—the doors of the great Orangerie open, and a Renaissance-style garden called the Sizilianischer Garten is set up. On horse-drawn carriages, large olive and citrus trees are brought outdoors and arranged in the garden.
For the young European who, in the second half of the 18th century and in the first decades of the following, traveled to Italy to see and study Renaissance culture and the remains of Greek civilization, the citrus species and fruits and groves of southern Italy became the ultimate symbol of beauty and a sort of status symbol of wealth, particularly that of landowners. Nothing is more expressive of the fascination of their fruit than Abu-l-Hasan Ali’s 12th-century writings: “Come on, enjoy your harvested orange: happiness is present when it is present. / Welcome the cheeks of the branches, and welcome the stars of the trees! / It seems that the sky has lavished gold and that the earth has formed some shiny spheres.”
Indeed, Citrus spp. are among the most important and most widely consumed fruit crops worldwide. Their coevolution with millennia of agricultural use has resulted in a complexity of species and cultivated varieties derived from natural or induced mutations and from crossing and breeding the “original” species (Citrus medica, Citrus maxima, Citrus reticulata, Fortunella japonica) and their main progenies (C. aurantium, C. sinensis, Citrus limon, Citrus paradisi, Citrus clementina, etc.). Citrus spread from the original tropical and subtropical regions of southeast Asia toward the Mediterranean countries of Europe and North Africa and, after 1492, to the Americas, not to mention South Africa and Australia, where they still have a very important role. Citrus species, wherever they have been cultivated, quickly became protagonists of the letters and the arts, as well as of markets and gastronomy, and can even be found in religious ceremonies, such as the Feast of Tabernacles (Sukkot). Studies on Citrus botany, cultivation, and utilization have been pursued since the early stages of the fruit’s domestication and grew following its introduction in Europe, the Americas, Africa, and Australia. Citrus research involves many different aspects: the study of citrus origin and botanical classification; citrus growing, propagation, and orchard management; citrus fruit quality, utilization, and industry; citrus gardening and ornamentals; and citrus in arts and manufacturing.
Soil salinity has been causing problems for agriculturists for millennia, primarily in irrigated lands. The importance of salinity issues is increasing, since large areas are affected by irrigation-induced salt accumulation. A wide knowledge base has been collected to better understand the major processes of salt accumulation and choose the right method of mitigation. There are two major types of soil salinity that are distinguished because of different properties and mitigation requirements. The first is caused mostly by a large salt concentration and is called saline soil, typically corresponding to Solonchak soils. The second is caused mainly by the dominance of sodium in the soil solution or on the soil exchange complex. This latter type is called “sodic” soil, corresponding to Solonetz soils. Saline soils have homogeneous soil profiles with relatively good soil structure, and their appropriate mitigation measure is leaching. Naturally sodic soils have markedly different horizons and unfavorable physical properties, such as low permeability, swelling, plasticity when wet, and hardness when dry, and their limitation for agriculture is mitigated typically by applying gypsum. Salinity and sodicity need to be chemically quantified before deciding on the proper management strategy. The most complex management and mitigation of salinized irrigated lands involves modern engineering, including calculations of irrigation water rates and reclamation materials, provisions for drainage, and drainage disposal. Mapping-oriented soil classification was developed for naturally saline and sodic soils and inherited categories first introduced more than a century ago, such as Solonchak and Solonetz, which appear in most of the 24 soil classification systems currently in use. USDA Soil Taxonomy is one exception; it uses names composed of formative elements.
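The chemical quantification mentioned above conventionally rests on two measurements: the electrical conductivity of the saturation extract (ECe, in dS/m) for salinity, and the sodium adsorption ratio (SAR) for sodicity. The sketch below applies the standard SAR formula and the widely used USDA thresholds (ECe > 4 dS/m, SAR > 13); the input figures are hypothetical.

```python
# Hedged sketch: quantifying salinity (ECe) and sodicity (SAR) to pick a
# management category. Thresholds follow the common USDA convention
# (ECe > 4 dS/m saline; SAR > 13 sodic); input values are hypothetical.
import math

def sar(na, ca, mg):
    """Sodium adsorption ratio; ion concentrations in mmol_c/L (meq/L)."""
    return na / math.sqrt((ca + mg) / 2)

def classify(ece, sar_value):
    saline = ece > 4.0
    sodic = sar_value > 13.0
    if saline and sodic:
        return "saline-sodic"
    if saline:
        return "saline"
    if sodic:
        return "sodic"
    return "non-saline"

s = sar(na=40.0, ca=4.0, mg=4.0)  # -> 20.0
print(classify(ece=2.0, sar_value=s))  # prints "sodic"
```

The distinction matters for the mitigation choice the abstract describes: a saline (high-ECe) soil is leached, whereas a sodic (high-SAR, low-salt) soil first needs a calcium amendment such as gypsum, since leaching alone would worsen dispersion.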
Confidence in the projected impacts of climate change on agricultural systems has increased substantially since the first Intergovernmental Panel on Climate Change (IPCC) reports. In Africa, much work has gone into downscaling global climate models to understand regional impacts, but there remains a dearth of local-level understanding of impacts and of communities’ capacity to adapt. It is well understood that Africa is vulnerable to climate change, not only because of its high exposure to climate change, but also because many African communities lack the capacity to respond or adapt to its impacts. Warming trends have already become evident across the continent, and it is likely that by 2100 the continent’s mean annual temperature will have risen more than 2°C above its 2000 level. Added to this warming trend, changes in precipitation patterns are also of concern: even if rainfall remains constant, increasing temperatures will amplify existing water stress, putting even more pressure on agricultural systems, especially in semiarid areas. In general, high temperatures and changes in rainfall patterns are likely to reduce cereal crop productivity, and new evidence is emerging that high-value perennial crops will also be negatively impacted by rising temperatures. Pressures from pests, weeds, and diseases are also expected to increase, with detrimental effects on crops and livestock.
Much of African agriculture’s vulnerability to climate change lies in the fact that its agricultural systems remain largely rain-fed and underdeveloped, as the majority of Africa’s farmers are small-scale farmers with few financial resources, limited access to infrastructure, and disparate access to information. At the same time, as these systems are highly reliant on their environment, and farmers are dependent on farming for their livelihoods, their diversity, context specificity, and the existence of generations of traditional knowledge offer elements of resilience in the face of climate change. Overall, however, the combination of climatic and nonclimatic drivers and stressors will exacerbate the vulnerability of Africa’s agricultural systems to climate change, but the impacts will not be universally felt. Climate change will impact farmers and their agricultural systems in different ways, and adapting to these impacts will need to be context-specific.
Adaptation efforts are increasing across the continent, but in the long term these are expected to be insufficient to enable communities to cope with longer-term climate change. African farmers are increasingly adopting a variety of conservation and agroecological practices such as agroforestry, contouring, terracing, mulching, and no-till. These practices have the twin benefits of lowering carbon emissions while adapting to climate change, as well as broadening the sources of livelihoods for poor farmers, but there are constraints to their widespread adoption. These challenges vary from insecure land tenure to difficulties with knowledge-sharing.
While African agriculture faces exposure to climate change as well as broader socioeconomic and political challenges, many of its diverse agricultural systems remain resilient. As the continent with the highest population growth rate, rapid urbanization trends, and rising GDP in many countries, Africa’s agricultural systems will need to become adaptive to more than just climate change as the uncertainties of the 21st century unfold.
In 1945 the Amazon biome was almost intact. Marks of ancient cultural developments in the Andean and lowland Amazon had healed over, and the impacts of rubber and more recent resource exploitation were reversible. Very few roads existed, and only on the Amazon’s periphery. However, from the 1950s, and especially in the 1960s, Brazil and some Andean countries launched ambitious road-building and colonization processes. Amazon occupation intensified heavily in the 1970s, when forest losses began to raise worldwide concern. More roads continued to be built at a geometrically growing pace in every following decade, multiplying correlated deforestation and forest degradation. A point of no return was reached when interoceanic roads crossed the Brazilian-Andean border in the 2000s, exposing the remaining safe havens for indigenous people and nature. It is commonly estimated that today no less than 18% of the forest has been replaced by agriculture and that over 60% of what remains has been significantly degraded.
Theories regarding the importance of biogeochemical cycles have been developed since the 1970s. The confirmation of the role of the Amazon as a carbon sink added some international pressure for its protection. But, in general, the many scientific discoveries regarding the Amazon have not helped to improve its conservation. Instead, a combination of new agricultural technologies, anthropocentric philosophies, and economic changes strongly promoted forest clearing.
Since the 1980s, Amazon conservation efforts have been increasingly diversified, covering five theoretically complementary strategies: (a) more, larger, and better-managed protected areas; (b) more and larger indigenous territories; (c) a series of “sustainable-use” options such as “community-based conservation,” sustainable forestry, and agroforestry; (d) financing of conservation through debt swaps and climate-change-related financial mechanisms; and (e) better legislation and monitoring. Only five small protected areas existed in the Amazon in the early 1960s, but, in response to the road-building boom of the 1970s, several larger areas aiming to conserve viable samples of biological diversity were set aside, principally in Brazil and Peru. Today around 22% of the Amazon is protected, but almost half of that area falls under categories that allow human presence and resource exploitation, and there is no effective management. Another 28% or more pertains to indigenous peoples, who may or may not conserve the forest. Both types of areas together cover over 45% of the Amazon. None of the strategies, either alone or in conjunction, has fully achieved its objectives, while development pressures and threats multiply as roads and deforestation continue relentlessly, fueled by increasing funding from multilateral and national banks and by the influence of transnational enterprises.
The future is likely to see unprecedented agriculture expansion and corresponding intensification of deforestation and forest degradation even in protected areas and indigenous land. Additionally, the upper portion of the Amazon basin will be impacted by new, larger hydraulic works. Mining, formal as well as illegal, will increase and spread. Policymakers of Amazon countries still view the region as an area in which to expand conventional development while the South American population continues to be mostly indifferent to Amazon conservation.
Charles A. Francis
Adaptation of cropping systems to weather uncertainty and climate change is essential for resilient food production and long-term food security. Changes in climate result in substantial temporal modifications of cropping conditions, and rainfall and temperature patterns vary greatly with location. These challenges come at a time when global human population and demand for food are both increasing, and it appears to be difficult to find ways to satisfy growing needs with conventional systems of production. Agriculture in the future will need to feature greater biodiversity of crop species and appropriate design and management of cropping and integrated crop/animal systems. More diverse and longer-cycle crop rotations will need to combine sequences of annual row crops such as maize and soybean with close-drilled cereals, shallow-rooted with deep-rooted crops, summer crops with winter crops, and annuals with perennials in the same fields. Resilience to unpredictable weather will also depend on intercropping, with the creative arrangement of multiple interacting crop species to diversify the field and the landscape. Other multiple-cropping systems and strategies to integrate animals and crops will make more efficient use of natural resources and applied inputs; these include systems such as permaculture, agroforestry, and alley cropping. Future systems will be spatially diverse and adapted to specific fields, soil conditions, and unique agroecozones. Production resilience will be achieved by planting diverse combinations of species together in the same field, and economic resilience through producing a range of products that can be marketed through different channels. The creation of local food webs will be more appropriate in the future, as contrasted with the dominance of global food chains today. 
Materials considered “waste” from the food system, including human urine and feces, will become valuable resources to be cycled back into the natural environment and into food production. Due to the increasing scarcity of fertile land, the negative contributions of chemicals to environmental pollution, the costs of fossil fuels, and the potential for the economic and political disruption of supply chains, future systems will increasingly need to be local in character while still achieving adaptation to the most favorable conditions for each system and location. It is essential that biologically and economically resilient systems become productive and profitable, as well as environmentally sound and socially equitable, in order to contribute to stability of food production, security of the food supply, and food sovereignty, to the extent that this is possible. The food system cannot continue along the lines of “business as usual,” and its path will need to radically diverge from the recognized trends toward specialization and globalization of the early 21st century. The goal needs to shift from exploitation and short-term profits to conservation of resources, greater equity in distribution of benefits, and resilience in food supply, even with global climate change.
Muhammad Farooq, Ahmad Nawaz, and Faisal Nadeem
Planned crop rotation offers a pragmatic option to improve soil fertility, manage insect pests and diseases, and offset the emission of greenhouse gases. The inclusion of legume crops in crop rotations helps to reduce the use of external nitrogen inputs for legumes and other crops because legumes may fix atmospheric nitrogen. This also helps to reduce the environmental pollution caused by volatilization and leaching of applied nitrogen. The inclusion of allelopathic crops in rotation may be useful to suppress noxious weeds through the release of allelochemicals in the rhizosphere. The rotation of tap-rooted crops with shallow-rooted crops may result in efficient and productive use of nutrient resources and conservation of soil moisture. Continuous monoculture systems may cause the loss of biodiversity. Land fallowing is an efficient agricultural management technique mostly practiced in arid regions to capture rainwater and store it in the soil profile for later use in crop production. During fallowing, tillage operations are practiced to enhance moisture conservation in the soil. Keeping soil fallow for a season or more restores soil fertility through nutrient deposits; increases organic matter, microbial carbon, and soil microbial diversity; and improves the soil’s physical properties, including aggregate stability and reduced soil compaction due to decreased traffic. In addition, fallowing of land provides a biological means of pest (weed and insect) control by disrupting the life cycle of pests and decreasing reliance on pesticides. Land fallowing can help offset the emission of greenhouse gases from agricultural fields by reducing traffic and increasing carbon sequestration within the soil. Summer fallowing may help to preserve moisture in diverse soil types in the rainfed regions of the world, although it may reduce the carbon sequestration potential of soils over the long term.
Energy resources are decreasing, and the inclusion of energy crops in crop rotation may be highly beneficial. Many of the processes, factors, and mechanisms involved in crop rotation and land fallowing are poorly understood and require further investigation.
Shu Ting Chang and Solomon P. Wasser
The word mushroom may mean different things to different people in different countries. Specialist studies on the value of mushrooms and their products should have a clear definition of the term mushroom. In a broad sense, “a mushroom is a distinctive fruiting body of a macrofungus, which produces spores, can be either epigeous or hypogeous, and is large enough to be seen with the naked eye and to be picked by hand.” Thus, mushrooms need not be members of the group Basidiomycetes, as commonly associated, nor aerial, nor fleshy, nor edible. This definition is not perfect, but it has been accepted as a workable term to estimate the number of mushrooms on Earth (approximately 16,000 species according to the rules of the International Code of Nomenclature). Most cultivated mushrooms are saprophytes and are heterotrophic for carbon compounds. Even though their cells have walls, they are devoid of chlorophyll and cannot perform photosynthesis. They are also devoid of vascular xylem and phloem. Furthermore, their cell walls contain chitin, which also occurs in the exoskeleton of insects and other arthropods. They absorb O2 and release CO2. In fact, they may be functionally more closely related to animal cells than to plant cells. However, they are sufficiently distinct from both plants and animals and belong to a separate group, the Fungi Kingdom. They arise from lignocellulosic wastes, yet they become bountiful and nourishing. Mushrooms can greatly benefit environmental conditions. They biosynthesize their own food from agricultural crop residues, which, like solar energy, are readily available; otherwise, these byproducts and wastes would cause health hazards. The spent compost/substrate can be used to grow other species of mushrooms, as fodder for livestock, as a soil conditioner and fertilizer, and in environmental bioremediation.
The cultivation of mushrooms dates back many centuries; Auricularia auricula-judae, Lentinula edodes, and Agaricus bisporus have, for example, been cultivated since 600 CE.
Mushrooms can be used as food, tonics, medicines, cosmeceuticals, and as natural biocontrol agents in plant protection with insecticidal, fungicidal, bactericidal, herbicidal, nematocidal, and antiphytoviral activities. The multidimensional nature of the global mushroom cultivation industry, its role in addressing critical issues faced by humankind, and its positive contributions are presented. Furthermore, mushrooms can serve as agents for promoting equitable economic growth in society. Since lignocellulosic wastes are available in every corner of the world, they can be put to use in the cultivation of mushrooms, which could therefore pilot a so-called white agricultural revolution in less developed countries and in the world at large. Mushrooms have a marked impact on agriculture and the environment, and they hold great potential for generating socio-economic benefits for human welfare at local, national, and global levels.
Wheat is the most widely grown food crop in the world and the dominant staple crop in temperate countries where it contributes between about 20% and 50% of the total energy intake. About 95% of the wheat grown is hexaploid bread wheat, with tetraploid durum wheat being grown in the hot dry Mediterranean climate and very small volumes of ancient species. About 80% of the dry weight of the mature grain is starchy endosperm. This is the major grain storage tissue, which is separated by milling to give white flour, the outer layers and germ together forming the bran. However, white flour and bran differ significantly in their compositions, with white flour being rich in starch (about 80% dry wt) and protein (about 10% dry wt) and the bran rich in fiber, minerals, vitamins, and phytochemicals.
Most of the wheat consumed by humankind is in the form of bread, noodles, pasta, and other processed foods, and the quality for processing is determined by two major characteristics: the grain texture (hardness) and the viscoelastic properties conferred to dough by the gluten proteins.
In addition to being a source of energy, wheat also contributes protein and a range of other essential and beneficial components, particularly dietary fiber. However, because most of these components are concentrated in the bran, it is important to increase the consumption of whole grain products or to improve the composition of white flour. Although there is concern among consumers about possible adverse effects of consuming wheat products on health, these are unlikely to affect more than a small proportion of the population, and wheat should form part of a healthy balanced diet for the vast majority.
Mainaak Mukhopadhyay and Tapan Kumar Mondal
Tea, the globally admired, non-alcoholic, caffeine-containing beverage, is manufactured from the tender leaves of the tea plant [Camellia sinensis (L.)]. It is a woody, perennial crop with a lifespan of more than 100 years. Cultivated tea plants are natural hybrids of three major taxa, classified as China, Assam (Indian), or Cambod (southern) types on the basis of morphological characters (principally leaf size). Planting materials are either seedlings (10–18 months old) developed from hybrid, polyclonal, or biclonal seeds, or clonal plants developed from single-leaf nodal cuttings of elite genotypes. Plants are forced to remain in the vegetative stage as bushes through cultural practices such as centering, pruning, and plucking, and they are generally harvested from the second year onward at regular intervals of 7–10 days in the tropics and subtropics, with an economic lifespan of up to 60 years. The Chinese were the first to use tea as a medicinal beverage, around 2,000 years ago, and today around half of the world’s population drinks tea. It is primarily consumed as black tea (fermented), although green tea (non-fermented) and oolong tea (semi-fermented) are also consumed in many countries. Tea leaves are also eaten as a vegetable, as in “leppet tea” in Burma and “meing tea” in Thailand.
Green tea has extraordinary antioxidant properties, and black tea plays a positive role in treating cardiovascular ailments. Tea in general has considerable therapeutic value and is reported to help in the management of many diseases. Global tea production (black, green, and instant) has increased significantly during the past few years. China, the world’s largest tea producer, accounts for more than 38% of the total global production of made tea (i.e., ready-to-drink tea) annually, while India is the second-largest producer. India recorded a total production of 1,233.14 million kg of made tea during 2015–2016, its highest production so far.
Since it is an intensive monoculture, tea cultivation has environmental impacts. Application of weedicides, pesticides, and inorganic fertilizers creates environmental hazards, and insecticides often eliminate the fauna of vast tracts of land. Soil degradation is an additional concern, because the incessant use of fertilizers and herbicides compounds soil erosion. Beyond these issues, chemical runoff into bodies of water can also create problems. Finally, during tea manufacturing, fossil fuel is used to dry the processed leaves, which further increases environmental pollution.
Dairy has been intertwined with human society since the beginning of civilization, evolving from an art in ancient societies to a science in the modern world. Its roles in nutrition and health are underscored by the continuous increase in global consumption: milk production has increased by almost 50% in the past quarter century alone. Population growth, rising incomes, nutritional awareness, and advances in science and technology have all contributed to a continuing trend of increased milk production and consumption globally. With a fourfold increase in milk production per cow since the 1940s, the contemporary dairy industry produces more milk with fewer cows and consumes less feed and water per liter of milk produced. The dairy sector has diversified, with people across a wider geographical distribution consuming milk not only from cattle but also from species such as buffalo, goat, sheep, and camel. The dairy industry continues to experience structural changes that affect society, the economy, and the environment. Organic dairy emerged in the 1990s as consumers increasingly came to view it as an appropriate way of both farming and rural living. Animal welfare, environmental preservation, product safety, and health benefits are important considerations in consuming and producing organic dairy products. Large dairy operations have encountered many environmental issues related to elevated greenhouse gas emissions. Dairy cattle are second only to beef cattle as the largest livestock contributors to methane emissions. Disparities in greenhouse gas emissions per dairy animal among geographical regions can be attributed to differences in production efficiency. Although a number of scientific advancements have implications for the inhibition of methanogenesis, improvements in production efficiency through feeding, nutrition, genetic selection, and management remain promising for the mitigation of greenhouse gas emissions from dairy animals.
This article describes the trends in milk production and consumption, the debates over the role of milk in human nutrition, the global outlook of organic dairy, the abatement of greenhouse gas emissions from dairy animals, as well as scientific and technological developments in nutrition, genetics, reproduction, and management in the dairy sector.
Christiane Runyan and Jeff Stehm
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science.
Eight thousand years ago, forests covered an estimated 47% of Earth’s land surface. In 2015, forests covered roughly 30% of the Earth’s land surface, a cumulative loss over the last 8,000 years of approximately 2.2 billion hectares (ha). Between 1990 and 2015, forest losses occurred at a rate of about 0.13% annually, but this rate appears to be slowing. These losses mostly occur in tropical forests (58%), followed by boreal (27%) and temperate forests (8%). Deforestation is driven by a number of direct and indirect factors and processes that vary across regions and interact in complex ways. The primary driver of deforestation is agricultural expansion (both commercial and subsistence), followed by mining, infrastructure extension, and urban expansion. Indirectly, population and economic growth increase the demand for agricultural and timber products. Global food demand is projected to increase by 70% by 2050, requiring a net increase of 70 million ha of arable land under cultivation, with approximately 80% of this expansion occurring in the tropics. Deforestation is also affected by other indirect factors such as land tenure uncertainties, poor governance, low capacity of public forestry agencies, and inadequate planning and monitoring. Forest loss has a number of environmental, economic, and social implications. Environmentally, forests provide an expansive range of benefits across local, regional, and global scales, including hydrological benefits (e.g., regulating water supply and river discharge), climate benefits (e.g., precipitation recycling, regulating local and global temperature, and, indirectly, taking up atmospheric CO2 during photosynthesis), biogeochemical benefits (e.g., enhancing nutrient availability and reducing nutrient losses), and support for greater biodiversity as well as ecosystem stability and resiliency, to name a few.
The loss of forest vegetation may negatively affect important ecosystem processes and services, and may induce bistable ecosystem dynamics. The existence of bistable dynamics in some forest ecosystems suggests that these forests are prone to abrupt and irreversible shifts to a stable and often degraded state with no trees. In addition to environmental impacts, the long-term loss of forest resources negatively affects societies. About 8% (450 million) of the world’s population live in forest ecosystems, with an estimated 350 million people entirely dependent on forest ecosystems for their livelihoods. In 2011, the forest sector contributed an estimated USD 600 billion to global GDP, or about 0.9% of global GDP. Understanding how to best manage forest resources to preserve their unique qualities is a challenge that will require an integrated and concentrated effort from scientists and policymakers alike. In particular, improving forest management will require being able to more accurately measure and monitor forest resources, identifying the trade-offs between competing objectives, valuing forest goods and services, and equitably balancing costs and benefits. On the policy front, approaches are needed to strengthen land tenure and property rights, reduce corruption, improve the capacity of public agencies, and develop more inclusive governance arrangements. Underlying these management and policy goals is the need to better understand the environmental processes occurring in forests—to improve their management, minimize adverse impacts, and strengthen our models of these systems, because the long-term loss of forest resources affects not only the functioning of ecosystems but also the societies whose health and livelihoods depend upon them.