
Article

David A. Robinson, Fiona Seaton, Katrina Sharps, Amy Thomas, Francis Parry Roberts, Martine van der Ploeg, Laurence Jones, Jannes Stolte, Maria Puig de la Bellacasa, Paula Harrison, and Bridget Emmett

Soils provide important functions, which according to the European Commission include: biomass production (e.g., agriculture and forestry); storing, filtering, and transforming nutrients, substances, and water; harboring biodiversity (habitats, species, and genes); forming the physical and cultural environment for humans and their activities; providing raw materials; acting as a carbon pool; and forming an archive of geological and archaeological heritage, all of which support human society and planetary life. The basis of these functions is the soil natural capital, the stocks of soil material. Soil functions feed into a range of ecosystem services which in turn contribute to the United Nations sustainable development goals (SDGs). This overarching framework hides a range of complex, often nonlinear, biophysical interactions with feedbacks and perhaps yet to be discovered tipping points. Moreover, interwoven with this biophysical complexity are the interactions with human society and the socioeconomic system which often drives our attitudes toward, and the management and exploitation of, our environment. Challenges abound, both social and environmental, in terms of how to feed an increasingly populous and material world, while maintaining some semblance of thriving ecosystems to pass on to future generations. How do we best steward the resources we have, keep them from degradation, and restore them where necessary as soils underpin life? How do we measure and quantify the soil resources we have, how are they changing in time and space, what can we predict about their future use and function? What is the value of soil resources, and how should we express it? This article explores how soil properties and processes underpin ecosystem services, how to measure and model them, and how to identify the wider benefits they provide to society. Furthermore, it considers value frameworks, including caring for our resources.

Article

Pichu Rengasamy

Salt accumulation in soils, affecting agricultural productivity, environmental health, and the economy of the community, has been a global phenomenon since salinity contributed to the decline of ancient Mesopotamian civilization. The global distribution of salt-affected soils is estimated to be around 830 million hectares, extending over all the continents, including Africa, Asia, Australasia, and the Americas. The concentration and composition of salts depend on the various sources and processes of salt accumulation in soil layers. Major types of soil salinization include groundwater-associated salinity, non–groundwater-associated salinity, and irrigation-induced salinity. There are several soil processes which lead to salt build-up in the root zone, interfering with the growth and physiological functions of plants. Salts, depending on their ionic composition and concentration, can also affect many soil processes, such as soil water dynamics, soil structural stability, solubility of essential nutrients, and the pH and pE of soil water—all indirectly hindering plant growth. The direct effects of salinity include the osmotic effect on water and nutrient uptake and the toxicity or deficiency due to high concentrations of certain ions. The plan of action to resolve the problems associated with soil salinization should focus on prevention of salt accumulation, removal of accumulated salts, and adaptation to a saline environment. Successful utilization of salinized soils needs appropriate soil and irrigation management and the improvement of plants by breeding and genetic engineering techniques to tolerate different levels of salinity and associated abiotic stress.
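The osmotic effect mentioned above is commonly approximated from the electrical conductivity of the saturation extract (EC_e, in dS/m). A minimal sketch, using the widely cited rule of thumb ψ_o ≈ −36 kPa per dS/m and an FAO-style salinity classification; both the coefficient and the class thresholds are conventions from the salinity literature, not values given in this abstract:

```python
def osmotic_potential_kpa(ec_dS_m):
    """Approximate osmotic potential (kPa) of the soil saturation
    extract from its electrical conductivity EC_e (dS/m), using the
    US Salinity Laboratory rule of thumb: psi_o ~ -36 kPa per dS/m."""
    return -36.0 * ec_dS_m

def salinity_class(ec_dS_m):
    """FAO-style salinity classes for the saturation extract; the
    thresholds are conventional, and crop tolerance varies widely."""
    if ec_dS_m < 2:
        return "non-saline"
    if ec_dS_m < 4:
        return "slightly saline"
    if ec_dS_m < 8:
        return "moderately saline"
    if ec_dS_m < 16:
        return "strongly saline"
    return "extremely saline"
```

For example, a moderately saline soil with EC_e = 8 dS/m imposes an osmotic potential of roughly −290 kPa, a substantial additional barrier to water uptake by roots.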

Article

Beyond damage to rainfed agricultural and forestry ecosystems, soil erosion due to water affects surrounding environments. Large amounts of eroded soil are deposited in streams, lakes, and other ecosystems. The most costly off-site damages occur when eroded particles, transported along the hillslopes of a basin, arrive at the river network or are deposited in lakes. The negative effects of soil erosion include water pollution and siltation, organic matter loss, nutrient loss, and reduction in water storage capacity. Sediment deposition raises the bottom of waterways, making them more prone to overflowing and flooding. Sediments contaminate water ecosystems with soil particles and the fertilizer and pesticide chemicals they contain. Siltation of reservoirs and dams reduces water storage, increases the maintenance cost of dams, and shortens the lifetime of reservoirs. Sediment yield is the quantity of sediment transported, in a given time interval, from eroding sources through the hillslopes and river network to a basin outlet; chemicals can be transported together with the eroded sediments. The prediction of sediment yield can be carried out by coupling an erosion model with a mathematical operator which expresses the sediment transport efficiency of the hillslopes and the channel network. The lag between sediment yield and erosion can be represented simply by the sediment delivery ratio, which can be calculated at the outlet of the considered basin or by using a distributed approach. The former procedure couples the evaluation of basin soil loss with an estimate of the sediment delivery ratio SDRW for the whole watershed. The latter procedure requires that the watershed be discretized into morphological units (areas having a constant steepness and a clearly defined length) for which the corresponding sediment delivery ratio is calculated.
When rainfall reaches the surface horizon of the soil, some pollutants are desorbed and go into solution while others remain adsorbed and move with soil particles. The spatial distribution of the loading of nitrogen, phosphorus, and total organic carbon can be deduced using the spatial distribution of sediment yield and the pollutant content measured on soil samples. The enrichment concept is applied to clay, organic matter, and all pollutants adsorbed by soil particles, such as nitrogen and phosphorus. Knowledge of both the rate and pattern of sediment deposition in a reservoir is required to establish the remedial strategies which may be practicable. Repeated reservoir capacity surveys are used to determine the total volume occupied by sediment, the sedimentation pattern, and the shift in the stage-area and stage-storage curves. By converting the sedimentation volume to sediment mass, on the basis of estimated or measured bulk density, and correcting for trap efficiency, the sediment yield from the basin can be computed.
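The final step described, converting a surveyed sediment volume to a basin sediment yield via bulk density and trap efficiency, can be sketched as follows (function and parameter names are illustrative):

```python
def basin_sediment_yield(sed_volume_m3, bulk_density_t_m3,
                         trap_efficiency, years):
    """Mean annual sediment yield (t/yr) from repeated reservoir surveys.

    sed_volume_m3    : sediment volume accumulated between two surveys
    bulk_density_t_m3: measured or estimated dry bulk density (t/m^3)
    trap_efficiency  : fraction (0-1] of inflowing sediment retained
    years            : interval between the two surveys
    """
    if not 0 < trap_efficiency <= 1:
        raise ValueError("trap efficiency must be in (0, 1]")
    sediment_mass_t = sed_volume_m3 * bulk_density_t_m3
    # Dividing by trap efficiency restores the sediment fraction that
    # passed through the dam and was never deposited in the reservoir.
    return sediment_mass_t / trap_efficiency / years
```

For instance, 500,000 m³ of deposits accumulated over 10 years, at a bulk density of 1.2 t/m³ and a trap efficiency of 0.9, imply a basin sediment yield of roughly 67,000 t/yr.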

Article

Soils are the complex, dynamic, spatially diverse, living, and environmentally sensitive foundations of terrestrial ecosystems as well as human civilizations. The modern, environmental study of soil is a truly young scientific discipline that emerged only in the late 19th century from foundations in agricultural chemistry, land resource mapping, and geology. Today, little more than a century later, soil science is a rigorously interdisciplinary field with a wide range of exciting applications in agronomy, ecology, environmental policy, geology, public health, and many other environmentally relevant disciplines. Soils form slowly, in response to five interrelated factors: climate, organisms, topography, parent material, and time. Consequently, many soils are chemically, biologically, and/or geologically unique. The profound importance of soil, combined with the threats of erosion, urban development, pollution, climate change, and other factors, is now prompting soil scientists to consider the application of endangered species concepts to rare or threatened soils around the world.

Article

Lars J. Munkholm, Mansonia Pulido-Moncada, and Peter Bilson Obour

Soil tilth is a dynamic and multifaceted concept that refers to the suitability of a soil for planting and growing crops. A soil with good tilth is “usually loose, friable and well granulated”; a condition that can also be described as the soil’s having a good “self-mulching” ability. On the other hand, soils with poor tilth are usually dense (compacted), with hard, blocky, or massive structural characteristics. Poor soil tilth is generally associated with compaction, induced by wheel traffic, animal trampling, and/or natural soil consolidation (i.e., so-called hard-setting behavior). The soil-tilth concept dates back to the early days of arable farming and has been addressed in soil-science literature since the 1920s. Soil tilth is generally associated with soil’s physical properties and processes rather than the more holistic concepts of soil quality and soil health. Improved soil tilth has been associated with deep and intensive tillage, as those practices were traditionally considered the primary method for creating a suitable soil condition for plant growth. Therefore, for millennia there has been a strong focus both in practice and in research on developing tillage tools that create suitable growing conditions for different crops, soil types, and climatic conditions. Deep and intensive tillage may be appropriate for producing a good, short-term tilth, but may also lead to severe long-term degradation of the soil structure. The failure of methods relying on physical manipulation as means of sustaining good tilth has increased the recognition given to the important role that soil biota have in soil-structure formation and stabilization. Soil biology has only received substantial attention in soil science during the last few decades.
As a result, this knowledge is now being used to optimize soil management through strategies such as more diverse rotations, cover crops, and crop-residue management, applied either as single management components or, preferably, as part of an integrated system (i.e., either conservation agriculture or organic farming). Traditionally, farmers have evaluated soil tilth qualitatively in the field. However, a number of quantitative or semi-quantitative procedures for assessing soil tilth have been developed over the last 80 years. These procedures vary from simply determining soil cloddiness to more detailed evaluations whereby soil’s physical properties (e.g., porosity, strength, and aggregate characteristics) are combined with its consistency and organic-matter measurements in soil-tilth indices. Semi-quantitative visual soil-evaluation methods have also been developed for field evaluation of soil tilth and are now used in many countries worldwide.
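A soil-tilth index of the kind described, combining several normalized physical properties into one score, might be sketched as below; the scoring rule, property names, and weights are purely illustrative and not taken from any published index:

```python
def tilth_index(measurements, optima, weights):
    """Illustrative weighted soil-tilth index in [0, 1].

    measurements: {property: measured value}
    optima      : {property: (low, high) optimal range}
    weights     : {property: relative weight}

    Each property scores 1.0 inside its optimal range and decays
    linearly to 0 at one range-width beyond the nearer bound.
    """
    total = 0.0
    for prop, value in measurements.items():
        lo, hi = optima[prop]
        if lo <= value <= hi:
            score = 1.0
        else:
            dist = lo - value if value < lo else value - hi
            score = max(0.0, 1.0 - dist / (hi - lo))
        total += weights[prop] * score
    return total / sum(weights.values())
```

A soil whose porosity, penetration resistance, and aggregate stability all fall inside their optimal ranges would score 1.0; values outside any range pull the index toward 0 in proportion to the assigned weight.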

Article

Tim Haab, Lynne Lewis, and John Whitehead

The contingent valuation method (CVM) is a stated preference approach to the valuation of non-market goods. It has a 50+-year history beginning with a clever suggestion to simply ask people for their consumer surplus. The first study was conducted in the 1960s and over 10,000 studies have been conducted to date. The CVM is used to estimate the use and non-use values of changes in the environment. It is one of the more flexible valuation methods, having been applied in a large number of contexts and policies. The CVM requires construction of a hypothetical scenario that makes clear what will be received in exchange for payment. The scenario must be realistic and consequential. Economists prefer revealed preference methods for environmental valuation due to their reliance on actual behavior data. In unguarded moments, economists are quick to condemn stated preference methods due to their reliance on hypothetical behavior data. Stated preference methods should be seen as approaches to providing estimates of the value of certain changes in the allocation of environmental and natural resources for which no other method can be used. The CVM has a tortured history, having suffered slings and arrows from industry-funded critics following the Exxon Valdez and British Petroleum (BP)–Deepwater Horizon oil spills. The critics have harped on studies that fail certain tests of hypothetical bias and scope, among others. Nonetheless, CVM proponents have found that it produces similar value estimates to those estimated from revealed preference methods such as the travel cost and hedonic methods. The CVM has produced willingness to pay (WTP) estimates that exhibit internal validity. CVM research teams must have a range of capabilities. A CVM study involves survey design so that the elicited WTP estimates have face validity. Questionnaire development and data collection are skills that must be mastered. 
Welfare economic theory is used to guide empirical tests of theory such as the scope test. Limited dependent variable econometric methods are often used with panel data to test value models and develop estimates of WTP. The popularity of the CVM is on the wane; indeed, another name for this article could be “the rise and fall of the CVM.” This is not because the CVM is any less useful than other valuation methods, but because best practice in the CVM is merging with discrete choice experiments, and researchers seem to prefer to call their approach discrete choice experiments. Nevertheless, the problems that plague discrete choice experiments are the same as those that plague contingent valuation. Discrete choice experiment–contingent valuation–stated preference researchers should continue down the same familiar path of methods development.
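The WTP estimation discussed above can be illustrated with the nonparametric Turnbull lower-bound estimator, a standard tool for dichotomous-choice CVM data; the function name and the sample data here are hypothetical:

```python
def turnbull_lower_bound_wtp(bids, yes, n):
    """Nonparametric lower-bound mean WTP (Turnbull) from
    dichotomous-choice CVM responses.

    bids: ascending bid amounts offered to respondents
    yes : count of 'would pay' answers at each bid
    n   : number of respondents offered each bid
    """
    groups = [[b, y, m] for b, y, m in zip(bids, yes, n)]
    # Pool adjacent violators so that the acceptance rate is
    # non-increasing in the bid, as economic theory requires.
    i = 0
    while i < len(groups) - 1:
        if groups[i + 1][1] / groups[i + 1][2] > groups[i][1] / groups[i][2]:
            groups[i][1] += groups[i + 1][1]   # pool the responses
            groups[i][2] += groups[i + 1][2]
            del groups[i + 1]
            i = max(i - 1, 0)                  # recheck the earlier pair
        else:
            i += 1
    # Assign each probability mass the lowest bid in its interval:
    # a conservative (lower-bound) estimate of mean WTP.
    wtp, prev_bid, prev_acc = 0.0, 0.0, 1.0
    for bid, y, m in groups:
        acc = y / m
        wtp += prev_bid * (prev_acc - acc)
        prev_bid, prev_acc = bid, acc
    return wtp + prev_bid * prev_acc  # mass at or above the top bid
```

With bids of 5, 10, and 20 accepted by 80%, 50%, and 20% of respondents, the lower-bound mean WTP is 8.5 currency units; parametric (e.g., logit) models fitted to the same data would typically yield higher point estimates.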

Article

Henry Darcy was an engineer who built the drinking water supply system of the French city of Dijon in the mid-19th century. In doing so, he developed an interest in the flow of water through sands, and, together with Charles Ritter, he experimented (in a hospital, for unclear reasons) with water flow in a vertical cylinder filled with different sands to determine the laws of flow of water through sand. The results were published in an appendix to Darcy’s report on his work on Dijon’s water supply. Darcy and Ritter installed mercury manometers at the bottom and near the top of the cylinder, and they observed that the water flux density through the sand was proportional to the difference between the mercury levels. After the mercury levels are converted to equivalent water levels and the relationship is recast in differential form, it is known as Darcy’s Law, and to this day it is the cornerstone of the theory of water flow in porous media. The development of groundwater hydrology and soil water hydrology that originated with Darcy’s Law is tracked through seminal contributions over the past 160 years. Darcy’s Law was quickly adopted for calculating groundwater flow, which blossomed after the introduction of a few very useful simplifying assumptions that permitted a host of analytical solutions to groundwater problems, including flows toward pumped drinking water wells and toward drain tubes. Computers have made possible ever more advanced numerical solutions based on Darcy’s Law, which have allowed tailor-made computations for specific areas. In soil hydrology, Darcy’s Law itself required modification to facilitate its application for different soil water contents. The understanding of the relationship between the potential energy of soil water and the soil water content emerged early in the 20th century.
The mathematical formalization of the consequences for the flow rate and storage change of soil water was established in the 1930s, but only after the 1970s did computers become powerful enough to tackle unsaturated flows head-on. In combination with crop growth models, this allowed Darcy-based models to aid in the setup of irrigation practices and to optimize drainage designs. In the past decades, spatial variation of the hydraulic properties of aquifers and soils has been shown to affect the transfer of solutes from soils to groundwater and from groundwater to surface water. More recently, regional and continental-scale hydrology have been required to quantify the role of the terrestrial hydrological cycle in relation to climate change. Both developments may pose new areas of application, or show the limits of applicability, of a law derived from a few experiments on a cylinder filled with sand in the 1850s.
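Darcy and Ritter's observation, flux density proportional to the head difference, is today written as q = −K ΔH/L, with K the saturated hydraulic conductivity. A minimal sketch under that standard formulation (names are illustrative):

```python
def darcy_flux(K, head_top, head_bottom, length):
    """Darcy flux density q (m/s) through a saturated sand column.

    K           : saturated hydraulic conductivity (m/s)
    head_top    : hydraulic head (m) at the top of the column
    head_bottom : hydraulic head (m) at the bottom of the column
    length      : flow-path length (m)

    q = -K * dH/dL; positive q means flow from top to bottom,
    i.e., toward decreasing head.
    """
    gradient = (head_bottom - head_top) / length
    return -K * gradient

def column_discharge(K, head_top, head_bottom, length, area):
    """Volumetric discharge Q (m^3/s) = flux density * cross-section."""
    return darcy_flux(K, head_top, head_bottom, length) * area
```

For a 2 m column of medium sand (K ≈ 10⁻⁴ m/s) with a 1 m head drop, the flux density is 5 × 10⁻⁵ m/s, which a 0.5 m² cross-section turns into a discharge of 2.5 × 10⁻⁵ m³/s.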

Article

Gary Sands, Srinivasulu Ale, Laura Christianson, and Nathan Utt

Agricultural (tile) drainage enables agricultural production on millions of hectares of arable land worldwide. Lands where drainage or irrigation (and sometimes both) are implemented generate a disproportionately large share of global agricultural production compared to dryland or rain-fed agricultural lands, and thus these water management tools are vital for meeting the food demands of today and the future. Future food demands will likely require irrigation and drainage to be practiced on an even greater share of the world’s agricultural lands. The practice of agricultural drainage finds its roots in ancient societies and has evolved greatly to incorporate modern technologies and materials, including the modern drainage plow, plastic drainage pipe and tubing, laser- and GPS-guided installation equipment, and computer-aided design tools. Although drainage brings important agricultural production and environmental benefits to poorly drained and salt-affected arable lands, it can also give rise to the transport of nutrients and other constituents to downstream waters. Other unwanted ecological and hydrologic effects may also be associated with the practice. The goal of this article is to familiarize the reader with the practice of subsurface agricultural drainage, the history and extent of its application, and the benefits commonly associated with it. In addition, environmental effects associated with subsurface drainage, including hydrologic and water quality effects, are presented, and conservation practices for mitigating these unwanted effects are described. These conservation practices are categorized by whether they are implemented in-field (such as controlled drainage) or edge-of-field (such as bioreactors). The literature cited and reviewed herein is not meant to be exhaustive, but seminal and key literary works are identified where possible.

Article

Luis S. Pereira and José M. Gonçalves

Surface irrigation is the oldest and most widely used irrigation method, accounting for more than 83% of the world’s irrigated area. It comprises traditional systems, developed over millennia, and modern systems with mechanized and often automated water application and precise land-leveling. It adapts well to non-sloping conditions, low to medium soil infiltration characteristics, most crops, and crop mechanization, as well as environmental conditions. Modern methods provide for water and energy saving, control of environmental impacts, labor saving, and cropping economic success, and thus for competing with pressurized irrigation methods. Surface irrigation refers to a variety of forms of gravity application of irrigation water, which infiltrates into the soil while flowing over the field surface. The ways and timings of how water flows over the field and infiltrates the soil determine the irrigation phases—advance, maintenance or ponding, depletion, and recession—which vary with the irrigation method, namely paddy basin, leveled basin, border, and furrow irrigation, generally used for field crops, and wild flooding and water spreading from contour ditches, used for pasture lands. System performance is commonly assessed using the distribution uniformity indicator, while management performance is assessed with the application efficiency or the beneficial water use fraction. The factors influencing system performance are multiple and interacting—inflow rate, field length and shape, soil hydraulic roughness, field slope, soil infiltration rate, and cutoff time—while management performance, in addition to these factors, depends upon the soil water deficit at the time of irrigation, and thus on the way farmers are able to manage irrigation. The process of surface irrigation is complex to describe because it combines surface flow with infiltration into the soil profile.
Numerous mathematical computer models have therefore been developed for its simulation, aimed both at design adopting a target performance and at field evaluation of actual performance. The use of models in design allows the factors referred to above to be taken into consideration and, when any type of decision support system or multicriteria analysis is adopted, economic and environmental constraints and issues as well. There are various aspects favoring and limiting the adoption of surface irrigation. Favorable aspects include the simplicity of its adoption at farm level in flat lands with low infiltration rates, namely when water conveyance and distribution are performed with canal and/or low-pressure pipe systems, low capital investment, and low energy consumption. The most significant limitations include high soil infiltration and high variability of infiltration throughout the field, land-leveling requirements, the need to control a constant inflow rate, difficulties in matching irrigation duration with the soil water deficit at the time of irrigation, and difficult access to equipment for mechanized and automated water application and distribution. The modernization of surface irrigation systems and design models, as well as models and tools usable to support surface irrigation management, has significantly impacted water use and productivity, and thus the competitiveness of surface irrigation.
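The two performance indicators named above, distribution uniformity and application efficiency, have standard definitions in irrigation engineering that can be sketched as follows (the low-quarter form of DU is shown; function names are illustrative):

```python
def distribution_uniformity(depths):
    """Low-quarter distribution uniformity: the mean infiltrated
    depth in the lowest quarter of observations divided by the
    mean depth over the whole field. 1.0 is perfectly uniform."""
    d = sorted(depths)
    k = max(1, len(d) // 4)          # size of the low quarter
    low_quarter_mean = sum(d[:k]) / k
    return low_quarter_mean / (sum(d) / len(d))

def application_efficiency(stored_in_root_zone, applied):
    """AE: fraction of the applied water stored in the root zone
    (both quantities in the same units, e.g., mm or m^3)."""
    return stored_in_root_zone / applied
```

For infiltrated depths of 10, 20, 30, and 40 mm along a furrow, DU is 0.4, signalling very uneven wetting; if 60 mm of an 80 mm application ends up in the root zone, AE is 0.75.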

Article

Sarada Krishnan

Coffee is an extremely important agricultural commodity, produced in about 80 tropical countries, with an estimated 125 million people in Latin America, Africa, and Asia depending on it for their livelihoods, and with an annual production of about nine million tons of green beans. Consisting of at least 125 species, the genus Coffea L. (Rubiaceae, Ixoroideae, Coffeeae) is distributed in Africa, Madagascar, the Comoros Islands, the Mascarene Islands (La Réunion and Mauritius), tropical Asia, and Australia. Two species are economically important for the production of the beverage coffee: C. arabica L. (Arabica coffee) and C. canephora A. Froehner (robusta coffee). Higher beverage quality is associated with C. arabica. Coffea arabica is a self-fertile tetraploid, which has resulted in very low genetic diversity in this significant crop. Coffee genetic resources are being lost at a rapid pace due to varied threats, such as human population pressures, leading to conversion of land to agriculture, deforestation, and land degradation; low coffee prices, leading to abandonment of coffee trees in forests and gardens and a shift of cultivation to other, more remunerative crops; and climate change, leading to increased incidence of pests and diseases, higher incidence of drought, and unpredictable rainfall patterns. All these factors threaten livelihoods in many coffee-growing countries. The economics of coffee production has changed in recent years, with prices on the international market declining and the cost of inputs increasing. At the same time, the demand for specialty coffee is at an all-time high. In order to make coffee production sustainable, attention should be paid to improving the quality of coffee by engaging in sustainable, environmentally friendly cultivation practices, which ultimately can command higher net returns.

Article

Stephen Foster and John Chilton

This chapter first provides a concise account of the basic principles and concepts underlying scientific groundwater management. It then summarises the policy approach to developing an adaptive scheme of management and protection for groundwater resources, appropriately integrated across the relevant sectors, and assesses the governance needs, roles, and planning requirements for implementing the selected policy approach.

Article

The world’s forest cover is approximately 4 billion hectares (10 billion acres). Of this total, approximately one-half is temperate forests, which range from the subtropics to roughly 65 degrees in latitude. Closer to the equator, forests would generally be considered tropical or subtropical, while forests above the 65th parallel might be considered boreal. Only a relatively small fraction of temperate forests are managed in any significant manner. The major types of management can vary from strict forest protection to selective harvesting with considerations for regeneration. Intensive forestry exists in the form of plantation forestry and is similar to agricultural cropping: seedlings are planted, the trees are managed in various ways while growing (e.g., fertilization, herbicides, thinning) and then harvested at a mature age, and the cycle of planting and management then begins anew. Approximately 200 million hectares of forests are managed beyond simply minimal protection and natural regeneration. Recent estimates suggest that over 100 million hectares globally are intensively managed planted forests. The largest representatives of these forests are found in the Northern Hemisphere (e.g., the United States, China, and various countries of Europe, especially the Nordic countries), but Brazil, Chile, New Zealand, and Australia are important producers in the Southern Hemisphere. A high percentage of managed forests are designed to produce industrial wood for construction and for pulp and paper production. Finally, in some countries, such as China, planted forests are intended to replace forests destroyed decades and even centuries ago. Many of these planted forests are intended to provide environmental services, including water capture and control, erosion control and soil protection, flood control, and habitat for wildlife. Recently, forests have also been considered as a vehicle to help control global warming.
In addition, afforestation and/or reforestation may help repair damage after a disturbance such as fire. In China, the “green wall” has been established to prevent shoreline erosion in major coastal areas.

Article

Paolo Socci, Alessandro Errico, Giulio Castelli, Daniele Penna, and Federico Preti

Agricultural terraces are widespread throughout the world and are among the most evident landscape signatures of the human fingerprint, in many cases dating back several centuries. Agricultural terraces create complex anthropogenic landscapes traditionally built to obtain land for cultivation on steep terrain, typically prone to runoff production and soil erosion and thus hardly suitable for rain-fed farming practices. In addition to providing new land for cultivation, terracing can supply a wide array of ecosystem services, including runoff reduction, water conservation, erosion control, soil conservation and improvement of soil quality, carbon sequestration, enhancement of biodiversity, enhancement of soil fertility and land productivity, increased crop yield and food security, and the development of aesthetic landscapes and recreational options. Moreover, some terraced areas of the world can be considered a cultural and historical heritage that increases the value of the local landscape. Terraced slopes may be prone to failure and degradation issues, such as localized erosion, wall or riser collapse, piping, and landsliding, mainly related to runoff concentration processes. Degradation phenomena, which are exacerbated by progressive land abandonment, reduce the efficiency of the benefits provided by terraces. Therefore, understanding the physical processes occurring in terraced slopes is essential to finding the most effective maintenance criteria necessary to adequately preserve agricultural terraces worldwide.

Article

From earliest times, at least in arid and semi-arid regions, law has been used to allocate water to particular users, at particular locations, and for particular uses, as well as to regulate the uses of water. In the early 21st century, such laws are found everywhere in the world. While the details of such systems of water law are specific to each culture, these systems, in general terms, conform to one of three basic patterns, or to some combination thereof. The three patterns can be understood as a system of common property, a system of private property, or a system of public property. In a common property system, each person is free to use water as he or she chooses so long as the person has lawful access to the water source and does not unreasonably interfere with other lawful users. Such systems were common in humid regions, where generally there was enough water available for all uses, but they break down when demand frequently begins to outstrip supply. A private property system, more common in arid and semi-arid regions where water is generally not available to meet all demands on the water sources, allocates specific amounts of water from an identified water source, for a particular water use at a particular location, and with a definite priority relative to other uses. The problem with such private property systems is their rigidity, with transfers of existing water allocations to new uses or new locations proving difficult in practice. In Australia, the specified claim on a water source is defined not as a quantity but as a percentage of the available flow. Despite the praise heaped upon this system, it has proven difficult to implement without heavy government intervention, benefiting only large irrigators without adequately addressing the public values that water sources must serve. In part, the problems arise because cheating is easier in the absence of clear volumetric entitlements.
Public property systems, which have roots dating back centuries but are largely an artifact of the 20th century, treat water as subject to active public management, whether through collaborative decision-making by stakeholders (a situation that is also sometimes called “common property” but is actually very different from the concept of common property used here) or through governmental institutions. Public property systems seek to avoid the deficiencies of the other two systems (particularly the incessant conflicts characteristic of common property systems as demand approaches supply and the rigidity characteristic of actual private property systems), but at the cost of introducing bureaucratized decision making. In the late 20th century, many stakeholders, governments, and international institutions turned to market systems—usually linked to a revived or new private property system—as the supposed optimum means to allocate and re-allocate water to particular uses, users, and locations. Before the late 20th century, markets were rare and small, but institutions like the World Bank set about making them the primary mechanism for water allocation. Markets, however, proved difficult to implement, at least without transferring wealth from relatively poor users to more prosperous users, and therefore produced a backlash in the form of support for a human right to water that would trump the private property claims central to water markets. The protection of public values, such as ecological or navigational flows, also proved difficult to maintain in the face of the demands of the marketplace. Each of these systems has proven useful in particular settings, but none of them can be universally applied.

Article

Venetia Alexa Hargreaves-Allen

Marine protected areas (MPAs) remain one of the principal strategies for marine conservation globally. MPAs are highly heterogeneous in terms of physical features such as size and shape, the habitats included, the bodies undertaking management, goals, level of funding, and extent of enforcement. Economic research related to MPAs initially measured the financial, gross, and net values generated by the habitats, most commonly fisheries, tourism, coastal protection, and non-use values. Bioeconomic modeling has also generated important insights into the complexities of fisheries-related outcomes at MPAs. MPAs require a significant investment of public funds for design, designation, and ongoing management, which have associated opportunity costs. Therefore, cost-benefit analysis has increasingly been required to justify this investment and demonstrate their benefits over time. The true economic value of MPAs is the value of protection, not of the resource being protected. There is substantial evidence that MPAs should increase recreational values due to improvements in biodiversity and habitat quality, but assumptions that MPAs will generate such improvements may not be justified. Indeed, there remains no unequivocal demonstration of spillover in fisheries adjacent to MPAs, due in part to the variability inherent in ecological and socio-economic processes, and only limited evidence of tourism benefits that are biologically or socio-culturally sustainable. There is a need for carefully designed valuation studies that compare values for areas within MPAs with values for the same areas without management (the counterfactual scenario). The ecosystem service framework has become widely adopted as a way of characterizing goods and services that contribute directly or indirectly to human welfare.
Quantitative analyses of the marginal changes to ecosystem services due to MPAs remain rare, owing to the large amounts of fine-grained data required, the relatively undeveloped biophysical models for the majority of services, and the complexities of incorporating ecological non-linearities and threshold effects. In addition, while some services are synergistic (so that double counting is difficult to avoid), others are traded off. Such marginal ecosystem service values are highly context specific, which limits the accuracy of benefits transfer. A number of studies published since 2000 have made advances in this area, and it is a rapidly developing field of research. While MPAs have been promoted as a sustainable development tool, there is evidence of significant distributive impacts of MPAs over different time scales and between different stakeholders, including unintended costs to local stakeholders. Research suggests that support and compliance are predicated on the costs and benefits generated locally, which are a major determinant of MPA performance. Better understanding of socio-economic impacts will help to align incentives with MPA objectives. Further research is needed to value supporting and regulating services and to elucidate how ecosystem service provision is affected by MPAs in different conditions and contexts, over time, and compared to unmanaged areas, in order to guide adaptive management.

Article

Ahmad Abbasnejad and Behnam Abbasnejad

A qanat is a kind of subterranean horizontal tunnel, usually excavated in soft sediments, that conducts groundwater to the surface at its emerging point. In addition to the tunnel, each qanat contains anywhere from several to hundreds of vertical wells for removing excavated material and ventilating the tunnel. These wells grow increasingly deep toward the deepest and last one, which is known as the mother well. According to the literature, the qanat was first developed around 800 to 1000 bc in the northwest of Iran and was afterward utilized in many other countries in Asia, Africa, and southern Europe, and even (through independent invention) in the Americas. The areas utilizing the qanat have three characteristics in common: a shortage of surficial water (streams), indicating an arid or semiarid climate; suitable topographical slopes that help conduct groundwater to the surface over a distance by a gently sloping tunnel; and the presence of unconsolidated sediments (usually alluvial) that act both as subsurface reservoirs and as material that can be easily excavated using primitive tools. In other words, dry areas with mountain-plain topography, alluvial fans, and stream beds (wadis) are suitable for digging qanats. Major parts of Iran and some parts of the Maghreb have such conditions, which is why these two regions have been somewhat dependent on qanats for their water supply. Although the invention of the qanat aided human settlement and welfare in drier countries, it had some negative impacts. The human presence made possible by qanats directly affected the wildlife and vegetation cover of those areas, and in some cases changes in the groundwater regime caused wilting and drying because of the limited water resources left for plants and wildlife. The history of qanat development may be viewed as undergoing three major stages in the dry zones of Iran and the Maghreb, as well as in many other countries where they are present.
During the first stage, lasting from 1,000 to 2,000 years after their introduction (depending upon the region), qanats rapidly proliferated as the technology spread to new areas. During the second stage, new qanat construction halted, as qanats had been developed in almost all suitable areas. In the third stage, beginning in some places in the early 20th century, factors such as increasing demand for groundwater, technical developments in water well drilling, problems with qanat maintenance, and urban sprawl caused many qanats to dry out, and the number in operation has dropped. This decline will continue at varying rates in different countries. Unfortunately, the rate of decline in Iran, the home country of the qanat, is higher than in many other places, mainly due to mismanagement.

Article

Street-level bureaucrats (SLBs) interact directly with users and play a key role in providing services. In the Global South, and specifically in India, the work practices of frontline public workers (technical staff, field engineers, desk officers, and social workers) reflect their understanding of urban water reforms. The introduction of technology-driven solutions and new public management instruments, such as benchmarking, e-governance, and evaluation procedures, has transformed the nature of frontline staff's responsibilities but has not resolved the structural constraints they face. In implementing solutions to improve access in poor neighborhoods, SLBs continue to play a key role in shaping both formal and informal provision. Their daily practices are ambivalent: they can be both predatory and benevolent, which explains the contingent impacts on service improvement and the difficulty of generalizing reform experiments. Nevertheless, the discretionary power of SLBs can be a source of flexibility and adaptation to complex social settings.

Article

Raheel Anwar, Tahira Fatima, and Autar Mattoo

The modern-day cultivated and widely consumed tomato has come a long way from its ancestor(s), which grew in the wild and were not palatable. Breeding strategies made the difference in making desirable food, including the tomato, available for human consumption. However, like other horticultural produce, the tomato has a short shelf life, which results in losses that can reach almost 50% of the produce, more so in developing countries than in countries with advanced technologies and better infrastructure. Food security concerns are real, especially considering that the population explosion anticipated by 2050 will require the production of more food, and more nutritious food, which applies as much to the tomato crop as to other crops. Today's consumers have become aware of, and are looking for, nutritious foods for a healthful and long life. Until recently, little was done to generate nutritionally enhanced produce, including fruits and vegetables. Moreover, extreme environments add to plant stress and affect the yield and nutritional quality of produce. Recent advances in the understanding of plant and fruit genetics, together with progress in developing genetic engineering technologies, including the use of CRISPR-Cas9, raise hopes that a better tomato with a high dose of nutrition and longer-lasting quality will become a reality.

Article

African domesticated animals, with the exception of the donkey, all came from the Near East. Some 8,000 years ago cattle, sheep, and goats came south into the Sahara, which was much wetter than it is today. Pastoralism was an offshoot of grain agriculture in the Near East, and the immigrating herders brought with them techniques for harvesting wild grains. With increasing aridity as the Saharan environment dried up around 5,000 years ago, the herders began to control and manipulate their stands of wild grains, resulting in the domestication of millet and sorghum in the Sahel zone, south of the Sahara. Pearl millet expanded to the south and was taken up by Bantu-speaking Iron Age farmers in the savanna areas of West Africa, then spread around the tropical forest into East Africa by 3,000 b.p. As the Sahara dried up and the tsetse belts retreated, sheep and cattle also moved south. They expanded into East Africa via the tsetse-free environment of the Ethiopian highlands, arriving around 4,000 b.p. It took around 1,000 years for the pastoralists to adapt to the other epizootic diseases rife in this part of the continent before they could expand throughout the grasslands of Kenya and Tanzania. Thus, East Africa was a socially complex place 3,000 years ago, with indigenous hunters, herders, and farmers. This put pressure on pastoral use of the environment, so, using another tsetse-free corridor from Tanzania through Zambia to the northern Kalahari and on to the Western Cape, herders moved to southern Africa, arriving by 2,000 b.p. They were followed into the eastern part of South Africa by Bantu-speaking agro-pastoralists 1,600 years ago, who were able to use the summer rainfall area for their sorghum and millet crops. Control and manipulation of African indigenous plants of the forest regions probably has a long history of use by hunter-gatherers, but information on this is constrained by the archaeological record, which is poor in tropical environments due to poor preservation.
Evidence for early oil palm domestication has been found in Ghana, dated to around 2,550 b.p. Several African indigenous plants, such as yams, are still widely used, but the plant that has spread most widely throughout the world is coffee, originally from Ethiopia. Alien plants, such as maize, potatoes, and Asian rice, have displaced indigenous plants over much of Africa.

Article

Wayne C. Zipperer, Robert Northrop, and Michael Andreu

At the beginning of the 21st century, more than 50% of the world's population lived in cities. By 2050, this percentage will exceed 60%, with the majority of growth occurring in Asia and Africa. As of 2020 there were 31 megacities, cities whose populations exceed 10 million, and 987 smaller cities whose populations are greater than 500,000 but less than 5 million. By 2030 there will be more than 41 megacities and 1,290 smaller cities. However, not all cities are growing. In fact, shrinking cities, those whose populations are declining, occur throughout the world. Factors contributing to population decline include changes in the economy, low fertility rates, and catastrophic events. Population growth places extraordinary demand on natural resources and exceptional stress on natural systems. For example, over 13 million hectares of forest land are converted to agriculture, urban land use, and industrial forestry annually. This deforestation significantly affects both hydrologic systems and terrestrial habitats. Hydrologically, urbanization creates a condition called urban stream syndrome. The increase in storm runoff, caused by the impervious surfaces added through urbanization, alters stream flow, morphology, temperature, and water quantity and quality. In addition, leaky sewer lines and septic systems, as well as the lack of sanitation systems, contribute significant amounts of nutrients and organic contaminants such as pharmaceuticals, caffeine, and detergents. Ecologically, these stressors and contaminants significantly affect aquatic flora and fauna. Habitat loss is the greatest threat to biodiversity. Urbanization not only destroys and fragments habitats but also alters the environment itself. For example, deforestation and fragmentation of forest lands lead to the degradation and loss of forest interior habitat while creating forest edge habitat.
These changes shift species composition and abundance from urban avoiders to urban dwellers. In addition, roads and other urban features isolate populations, causing local extinctions, limiting dispersal among populations, increasing mortality rates, and aiding the movement of invasive species. Cities often have higher ambient temperatures than rural areas, a phenomenon called the urban heat island effect, which alters precipitation patterns, increases ozone production (especially during the summer), modifies biogeochemical processes, and stresses humans and native species. The negative effects of expansion and of urbanization itself can be minimized through proper planning and design. Planning with nature is not new, but only recently has it been recognized that human survival is predicated on coexisting with biodiversity and native communities. How, and whether, cities apply recommendations for sustainability depends entirely on the people themselves.