Caroline A. Ochieng, Cathryn Tonne, Sotiris Vardoulakis, and Jan Semenza
Household air pollution from use of solid fuels (biomass fuels and coal) is a major problem in low and middle income countries, where 90% of the population relies on these fuels as the primary source of domestic energy. Use of solid fuels has multiple impacts, on individuals and households, and on the local and global environment. For individuals, the impact on health can be considerable, as household air pollution from solid fuel use has been associated with acute lower respiratory infections, chronic obstructive pulmonary disease, lung cancer, and other illnesses. Household-level impacts include the work, time, and high opportunity costs involved in biomass fuel collection and processing. Harvesting and burning biomass fuels affects local environments by contributing to deforestation and outdoor air pollution. At a global level, inefficient burning of solid fuels contributes to climate change.
Improved biomass cookstoves have for a long time been considered the most feasible immediate intervention in resource-poor settings. Their ability to reduce exposure to household air pollution to levels that meet health standards is however questionable. In addition, adoption of improved cookstoves has been low, and there is limited evidence on how the barriers to adoption and use can be overcome. However, the issue of household air pollution in low and middle income countries has gained considerable attention in recent years, with a range of international initiatives in place to address it. These initiatives could enable a transition from biomass to cleaner fuels, but such a transition also requires an enabling policy environment, especially at the national level, and new modes of financing technology delivery. More research is also needed to guide policy and interventions, especially on exposure-response relationships with various health outcomes and on how to overcome poverty and other barriers to wide-scale transition from biomass fuels to cleaner forms of energy.
Edward B. Barbier
Globally, around 1.5 billion people in developing countries, or approximately 35% of the rural population, can be found on less-favored agricultural land (LFAL), which is susceptible to low productivity and degradation because its agricultural potential is constrained biophysically by terrain, poor soil quality, or limited rainfall. Around 323 million people in such areas also live in locations that are highly remote and thus have limited access to infrastructure and markets. The households in such locations often face a vicious cycle of declining livelihoods, increased ecological degradation and loss of resource commons, and declining ecosystem services on which they depend. In short, these poor households are prone to a poverty-environment trap. Policies to eradicate poverty therefore need to be targeted at improving the economic livelihood, productivity, and income of the households located on remote LFAL. The specific elements of such a strategy include involving the poor in payments for ecosystem services schemes and other measures that enhance the environments on which the poor depend; targeting investments directly to improving the livelihoods of the rural poor, thus reducing their dependence on exploiting environmental resources; and tackling the lack of access by the rural poor in less-favored areas to well-functioning and affordable markets for credit, insurance, and land, as well as the high transportation and transaction costs that prevent the poorest households in remote areas from engaging in off-farm employment and limit smallholder participation in national and global markets.
The economic tool of individual transferable quotas (ITQs) gives their owners exclusive and transferable rights to catch a given portion of the total allowable catch (TAC) of a given fish stock. Authorities establish TACs and then divide them among individual fishers or firms in the form of individual catch quotas, usually a percentage of the TAC. ITQs are transferable through selling and buying in an open market. The main argument of ITQ proponents is that ITQs eliminate the need to “race for the fish” and thus increase economic returns while eliminating overcapacity and overfishing. In general, fisheries’ management objectives consist of ecological (sustainable use of fish stocks), economic (no economic waste), and social (mainly the equitable distribution of fisheries benefits) issues. There is evidence that ITQs do indeed reduce economic waste and increase profits for those remaining in fisheries. However, they perform poorly against ecological and social objectives. This article presents a proposal that integrates ITQs into a comprehensive and effective ecosystem-based fisheries management system, an approach likely to perform much better than ITQs alone with respect to ecological, economic, and social objectives.
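As a rough illustration of the mechanism just described, a quota holder's annual catch entitlement is simply its percentage share of that year's TAC, and shares can change hands on the open market. All names and numbers in this sketch are hypothetical:

```python
# Hypothetical ITQ system: each holder owns a transferable percentage
# share of the TAC; the catch quota (in tonnes) is share * TAC for
# whatever TAC the authorities set in a given year.

tac_tonnes = 10_000.0
shares = {"fisher_a": 0.40, "fisher_b": 0.35, "fisher_c": 0.25}

def catch_quotas(tac, shares):
    """Translate percentage shares of the TAC into catch quotas (tonnes)."""
    return {owner: s * tac for owner, s in shares.items()}

def transfer(shares, seller, buyer, amount):
    """Transfer part of a share on the open market (ITQs are tradable)."""
    assert shares[seller] >= amount, "cannot sell more share than held"
    shares = dict(shares)
    shares[seller] -= amount
    shares[buyer] += amount
    return shares

quotas = catch_quotas(tac_tonnes, shares)   # fisher_a gets 40% of 10,000 t
shares2 = transfer(shares, "fisher_a", "fisher_b", 0.10)
quotas2 = catch_quotas(tac_tonnes, shares2) # fisher_b's quota grows
```

Because shares are percentages rather than fixed tonnages, a change in the TAC automatically rescales every holder's quota without reallocating shares.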
Simon Holdaway and Rebecca Phillipps
Northeast Africa forms an interesting case study for investigating the relationship between changes in environment and agriculture. Major climatic changes in the early Holocene led to dramatic changes in the environment of the eastern Sahara and to the habitation of previously uninhabitable regions. Research programs in the eastern Sahara have uncovered a wealth of archaeological evidence for sustained occupation during the African Humid Period, from about 11,000 years ago. Initial studies of faunal remains seemed to indicate early shifts in economic practice toward cattle pastoralism. Although this interpretation was much debated when it was first proposed, the possibility of early pastoralism stimulated discussion concerning the relationships between people and animals in particular environmental contexts, and ultimately led to questions concerning the role of agriculture imported from elsewhere in contrast to local developments. Did agriculture, or indeed cultivation and domestication more generally (sensu Fuller & Hildebrand, 2013), develop in North Africa, or were the concepts and species imported from Southwest Asia? And if agriculture did spread from elsewhere, were just the plants and animals involved, or was the shift part of a full socioeconomic suite that included new subsistence strategies, settlement patterns, technologies, and an agricultural “culture”? And finally, was this shift, wherever and however it originated, related to changes in the environment during the early to mid-Holocene?
These questions refer to the “big ideas” that archaeologists explore, but before answers can be formed it is important to consider the nature of the material evidence on which they are based. Archaeologists must consider not only what they discover but also what might be missing. Materials from the past are preserved only in certain places, and of course some materials can be preserved better than others. In addition, people left behind the material remains of their activities, but in doing so they did not intend these remains to be an accurate historical record of their actions. Archaeologists need to consider how the remains found in one place may inform us about a range of activities that occurred elsewhere for which the evidence may be less abundant or missing. This is particularly true for Northeast Africa where environmental shifts and consequent changes in resource abundance often resulted in considerable mobility. This article considers the origins of agriculture in the region covering modern-day Egypt and Sudan, paying particular attention to the nature of the evidence from which inferences about past socioeconomies may be drawn.
Christopher Morgan, Shannon Tushingham, Raven Garvey, Loukas Barton, and Robert Bettinger
At the global scale, conceptions of hunter-gatherer economies have changed considerably over time, and these changes were strongly affected by larger trends in Western history, philosophy, science, and culture. Seen as either “savage” or “noble” at the dawn of the Enlightenment, hunter-gatherers have been regarded as everything from holdovers from a basal level of human development, to affluent, ecologically informed foragers, and ultimately to what they are seen as today: an extremely diverse economic orientation entailing the fullest scope of human behavioral diversity. The only thing linking studies of hunter-gatherers over time is consequently the definition of the term: people whose economic mode of production centers on wild resources. When hunter-gatherers are considered outside the general realm of their shared subsistence economies, it is clear that their behavioral diversity rivals or exceeds that of other economic orientations. Hunter-gatherer behaviors range in a multivariate continuum from a focus on mainly large fauna to broad, wild plant-based diets similar to those of agriculturalists; from extremely mobile to sedentary; from relying on simple, generalized technologies to very specialized ones; from egalitarian sharing economies to privatized competitive ones; and from nuclear family or band-level to centralized and hierarchical decision-making. It is clear, however, that hunting and gathering modes of production had to have preceded and thus given rise to agricultural ones. What research into the development of human economies shows is that transitions from one type of hunting and gathering to another, or alternatively to agricultural modes of production, can take many different evolutionary pathways.
The important thing to recognize is that behaviors which were essential to the development of agriculture—landscape modification, intensive labor practices, the division of labor and the production, storage, and redistribution of surplus—were present in a range of hunter-gatherer societies beginning at least as early as the Late Pleistocene in Africa, Europe, Asia, and the Americas. Whether these behaviors eventually led to the development of agriculture depended in part on the development of a less variable and CO2-rich climatic regime and atmosphere during the Holocene, but also a change in the social relations of production to allow for hoarding privatized resources. In the 20th and 21st centuries, ethnographic and archaeological research shows that modern and ancient peoples adopt or even revert to hunting and gathering after having engaged in agricultural or industrial pursuits when conditions allow and that macroeconomic perspectives often mask considerable intragroup diversity in economic decision making: the pursuits and goals of women versus men and young versus old within groups are often quite different or even at odds with one another, but often articulate to form cohesive and adaptive economic wholes. The future of hunter-gatherer research will be tested by the continued decline in traditional hunting and gathering but will also benefit from observation of people who revert to or supplement their income with wild resources. It will also draw heavily from archaeology, which holds considerable potential to document and explain the full range of human behavioral diversity, hunter-gatherer or otherwise, over the longest of timeframes and the broadest geographic scope.
Ann E. Ferris, Richard Garbaccio, Alex Marten, and Ann Wolverton
Concern regarding the economic impacts of environmental regulations has been part of the public dialogue since the beginning of the U.S. EPA. Even as large improvements in environmental quality occurred, government and academia began to examine the potential consequences of regulation for economic growth and productivity. In general, early studies found measurable but not severe effects on the overall national economy. Price increases due to regulatory requirements somewhat outweighed the stimulative effect of investments in pollution abatement, but the two nearly offset one another. However, these studies also highlighted potentially substantial effects on local labor markets due to the regional and industry concentration of plant closures.
More recently, a substantial body of work examined industry-specific effects of environmental regulation on the productivity of pollution-intensive firms most likely to face pollution control costs, as well as on plant location and employment decisions within firms. Most econometric-based studies found relatively small or no effect on sector-specific productivity and employment, though firms were less likely to open plants in locations subject to more stringent regulation compared to other U.S. locations. In contrast, studies that used economy-wide models to explicitly account for sectoral linkages and intertemporal effects found substantial sector-specific effects due to environmental regulation, including in sectors that were not directly regulated.
It is also possible to think about the overall impacts of environmental regulation on the economy through the lens of benefit-cost analysis. While this type of approach does not speak to how the costs of regulation are distributed across sectors, it has the advantage of explicitly weighing the benefits of environmental improvements against their costs. If benefits are greater than costs, then overall social welfare is improved. When conducting such exercises, it is important to anticipate the ways in which improvements in environmental quality may either directly improve the productivity of economic factors—such as through the increased productivity of outdoor workers—or change the composition of the economy as firms and households change their behavior. If individuals are healthier, for example, they may choose to reallocate their time between work and leisure. Although introducing a role for pollution in production and household behavior can be challenging, studies that have partially accounted for this interconnection have found substantial impacts of improvements in environmental quality on the overall economy.
Maria C. Bruno
World food systems in the 21st century comprise domesticated plant and animal species that originated from nearly every continent on the globe, spread through exchange and trade, and have been taken up by farmers and cooks worldwide. The indigenous inhabitants of the Americas domesticated several of the world’s most important food crops, including maize, potatoes, chili peppers, and quinoa. They also domesticated several animal species, two of which, llamas and alpacas, have become important as alternative herd animals outside of their native Andes. While maize, potatoes, and chili peppers became important globally in the 16th and 17th centuries as part of the Columbian Exchange, llamas/alpacas and quinoa have only gained worldwide prominence in the 20th and 21st centuries.
Unraveling the history of how, where, when, and why these species were domesticated requires the expertise of researchers in the fields of biology, genetics, and archaeology. Domestication is the process by which humans transform wild plant or animal populations into forms that can only be maintained with human intervention. Humans build upon the natural variation in these species but select traits that, while desirable for humans, would not be beneficial to survival without human intervention. Using a range of evidence, from the remains of ancient plants and animals recovered from archaeological sites to the study of the genetic relationships of living and ancient plant and animal populations, these researchers are revealing how ancient American populations created some of the world’s most important food sources.
Noa Kekuewa Lincoln and Peter Vitousek
Agriculture in Hawaiʻi was developed in response to the high spatial heterogeneity of climate and landscape of the archipelago, resulting in a broad range of agricultural strategies. Over time, highly intensive irrigated and rainfed systems emerged, supplemented by extensive use of more marginal lands that supported considerable populations. Due to the late colonization of the islands, the pathways of development are fairly well reconstructed in Hawaiʻi. The earliest agricultural developments took advantage of highly fertile areas with abundant freshwater, utilizing relatively simple techniques such as gardening and shifting cultivation. Over time, investments into land-based infrastructure led to the emergence of irrigated pondfield agriculture found elsewhere in Polynesia. This agricultural form was confined by climatic and geomorphological parameters, and typically occurred in wetter, older landscapes that had developed deep river valleys and alluvial plains. Once initiated, these wetland systems saw regular, continuous development and redevelopment. As populations expanded into areas unable to support irrigated agriculture, highly diverse rainfed agricultural systems emerged that were adapted to local environmental and climatic variables. Development of simple infrastructure over vast areas created intensive rainfed agricultural systems that were unique in Polynesia. Intensification of rainfed agriculture was confined to areas of naturally occurring soil fertility that typically occurred in drier and younger landscapes in the southern end of the archipelago. Both irrigated and rainfed agricultural areas applied supplementary agricultural strategies in surrounding areas such as agroforestry, home gardens, and built soils. Differences in yield, labor, surplus, and resilience of agricultural forms helped shape differentiated political economies, hierarchies, and motivations that played a key role in the development of sociopolitical complexity in the islands.
Richard Sharpe, Nicholas Osborne, Sotiris Vardoulakis, and Sani Dimitroulopoulou
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Environmental Science. Please check back later for the full article.
The built environment involves the interaction between the home (the social, cultural, and economic structure of the household), the dwelling (i.e., the physical structure), the community, and the immediate environment. Evidence linking the built environment to health and well-being dates back many decades, with a range of landmark publications (e.g., the 1980 Black report in the UK) calling for improvements in the housing stock and for the alleviation of poverty. Indoor air pollution is of particular interest in developed countries because of the trend in Western societies to spend more time indoors, especially in homes, where indoor exposures have been associated with poorer health. However, the timing and extent of exposure are likely to influence the resulting health outcomes, which has led to some inconsistent findings for a range of complex heterogeneous diseases (e.g., allergy and asthma). Furthermore, the indoor environment is modified by outdoor pollutants (e.g., PM, NO2, VOCs, and ozone) and biological agents (pollen and fungi) that can infiltrate indoors, though this depends on behavioral and ventilation patterns.
Poor housing (e.g., cold and damp homes) and poverty, combined with other lifestyle characteristics (e.g., smoking, the presence of pets, and the combustion of fuels for heating and cooking), all influence the quality of indoor air. Housing improvements such as sealing homes to prevent heat loss (i.e., increased household energy efficiency) can lead to the buildup of a range of physical, chemical, and biological agents when combined with inadequate heating and ventilation. Increased exposure to these indoor air pollutants is thought to play an important role in the development and clinical course of allergic diseases (including asthma), as well as other respiratory, cancerous, and cardiovascular health problems. Asthma and other allergic diseases are of significant public health interest because they are very common today and represent a heavy economic and societal burden. Furthermore, the dramatic rise in the prevalence of these diseases over the last two to three decades cannot be fully explained by genetic variance alone. This has led to an increased focus on environmental exposures, including air pollution from indoor and outdoor environments.
The health impacts examined in “Indoor Air Pollution in Developed Countries” have yet to be fully explored in the context of the interactions between the indoor home environment and outdoor agents, particularly with respect to the interplay between a range of modifiable housing conditions (e.g., poor housing, fuel poverty, and energy efficiency) and risk of allergic diseases.
Gregory L. Willoughby
Agriculture has been said to be the key to the development of civilization. The sustained productivity of the soils that fed growing populations influenced, and indeed often caused, the rise and frequently the collapse of ancient cultures. Furthermore, the fertilization of those soils, whether by new sediment or by other means, enabled some civilizations to survive longer than others. Only with the development of more consistent fertilization and newer, higher-analysis materials did crop production enter an era in which it could reliably feed not just the family unit but the city, and then the whole country. This modern industrial fertilization required fewer people to be devoted to food production, so their efforts could be directed to secondary and tertiary careers. The growth in fertilizer use of over 200% in 40 years has led to increased scrutiny of its environmental aspects in the early 21st century, and this in turn has led to a reevaluation of application procedures and to increased research and development of new forms of fertilizer and of ways to change modern fertilizers’ environmental footprints, to better steward food production and remedy systems that are environmentally off target. These technologies are sometimes very basic, such as combining elements that help stabilize each other (e.g., sulfur and nitrogen, or phosphorus and sulfur). Other technologies include polymer coatings (e.g., slow-release coatings) and impregnated inhibitors (e.g., nitrapyrin, NBPT). In other cases, new materials have been developed (e.g., methylated urea), and in yet others progress has come from mixing other compounds with the fertilizer (e.g., gypsum with phosphorus fertilizer, or humic acids with nitrogen formulations). Lastly, micronutrients (e.g., zinc, manganese, and boron) have risen in importance as yield levels have increased.
Nations rapidly industrialized after World War II, sharply increasing the extraction of resources from the natural world. Colonial empires broke up on land after the war, but they were re-created in the oceans. The United States, Japan, and the Soviet Union, as well as the British, Germans, and Spanish, industrialized their fisheries, replacing fleets of small-scale, independent artisanal fishermen with fewer but much larger government-subsidized ships. Nations like South Korea and China, as well as the Eastern Bloc countries of Poland and Bulgaria, also began fishing on an almost unimaginable scale. Countries raced to find new stocks of fish to exploit. As the Cold War deepened, nations sought to negotiate fishery agreements with Third World nations. The conflict over territorial claims led to the development of the Law of the Sea process, starting in 1958, and to the adoption of 200-mile exclusive economic zones (EEZ) in the 1970s.
Fishing expanded with the understanding that fish stocks were robust and could withstand high harvest rates. The adoption of maximum sustained yield (MSY) after 1954 as the goal of postwar fishery negotiations assumed that fish stocks produced a harvestable surplus and that scientists could determine how many fish could safely be caught. As fish stocks faltered under the onslaught of industrial fisheries, scientists reassessed their assumptions about how many fish could be caught, but MSY, although modified, remains at the heart of modern fisheries management.
Input-Output (I-O) models were originally conceived by the Nobel Prize winner Wassily Leontief in the 1930s as a tool to help economists and economic policymakers in their decision processes. I-O models provide a “picture” of how the economy works: what is needed to produce goods and services; how this production generates income, profits, and taxes; and how this income is spent. In a simplified way, I-O models can be seen as a model implementation of the economy’s circular flow diagram usually shown in introductory economics courses.
Taking, for example, the production of computer screens:
• On the production side, the I-O models have information on the following: (a) how much is spent on the inputs, goods and services, needed to produce the screens; (b) whether these inputs originate in the domestic market or are imported; (c) how much tax was paid to the government; (d) the total amount paid in wages and salaries; (e) the profits of the producing firms; (f) how many computer screens are sold in the domestic market or in the international market (exported); and (g) whether they are sold directly to the final consumer or used as a production input, incorporated into other goods, for example a refrigerator with a computer screen.
• On the demand side, the I-O models, taking into consideration the total income received by the different players in the economy (households, firms, and government), have information on the following: (a) how the income of these players is spent on goods and services, and whether these are used for consumption or investment; (b) whether these goods and services were produced domestically or abroad (imported); and (c) how much consumer tax was paid.
From the above structure of the I-O models, and using mathematical economic models, it is possible to measure the direct and indirect inputs needed to produce goods and services in the economy. For example, to produce a car one sees no need for agricultural goods as a direct input, but the fabric used in the car seats or carpets could have come from cotton, which is an agricultural good; cotton is thus an indirect input to car production.
The I-O models, through their capability to show a complete picture of the economic system and to trace the origin of the direct and indirect inputs used in the production process, can be used in environmental studies by linking economic and environmental variables on both the production and consumption sides. On the production side, it is possible to measure, by considering the direct and indirect inputs used, how many natural resources were used and how much pollution was generated in the production of goods and services. On the demand side, it is possible to measure the natural resources and pollution embodied in the goods and services consumed in the economy. Expanding the I-O models to a global scale, that is, using Inter-Country I-O models, it is possible to measure the environmental impacts and contents of goods and services by country of production and by country of consumption.
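The direct-versus-indirect logic described above is the core of Leontief's framework: total output x must satisfy x = Ax + f (intermediate use plus final demand), so x = (I - A)^-1 f, where the Leontief inverse (I - A)^-1 captures all indirect requirements as well as direct ones. A minimal sketch with a hypothetical three-sector economy, every coefficient invented purely for illustration:

```python
import numpy as np

# Hypothetical technical-coefficients matrix A for three sectors.
# A[i, j] = value of input from sector i needed per unit of sector j's output.
A = np.array([
    [0.10, 0.05, 0.00],   # agriculture  -> (agri, manuf, services)
    [0.20, 0.30, 0.10],   # manufacturing
    [0.10, 0.20, 0.20],   # services
])

# Final demand: consumers want 100 units of manufactured goods only.
f = np.array([0.0, 100.0, 0.0])

# Total output must cover both intermediate use and final demand:
#   x = A @ x + f   =>   x = (I - A)^-1 @ f
# The Leontief inverse captures direct AND indirect input requirements.
leontief_inverse = np.linalg.inv(np.eye(3) - A)
x = leontief_inverse @ f

# Even though final demand asks for no agricultural goods at all,
# agriculture must still produce a positive amount: it enters as an
# indirect input (the cotton-in-the-car-seats effect described above).
print(np.round(x, 2))
```

Linking a vector of pollution (or resource use) per unit of output to the same Leontief inverse is what lets environmentally extended I-O studies attribute emissions to final consumption rather than to the polluting sector alone.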
Maria A. Cunha-e-Sá and Sofia F. Franco
Economic development, technological change, and urbanization are typically identified as important drivers of land use change. Yet, land use change entails important environmental and socioeconomic consequences, notably by affecting the processes and functions of ecosystems and, therefore, the provision of their services.
Urbanization comprises residential, commercial, industrial, and highway-related development. At the edge of this built environment is the urban-wildland interface (UWI), which has been the focus of environmental policies in several parts of the world. The reason rests on the fact that urban development at the UWI is linked to significant environmental damages, including air and water pollution, habitat destruction, landscape fragmentation, increased runoff, and wildfire risk, among others. These effects can reduce biological diversity and, furthermore, some of the ecosystem services (ES) can be irreversibly lost.
Though forests located near urban areas are a small fraction of total forest cover, a better understanding of the extent to which UWI forest conversion affects local economies and environmental services can help policymakers harmonize urban development and environmental preservation at the UWI, with positive impacts on the welfare of local communities.
The main income from most forest holdings at the UWI depends on wood production. However, forests and forestry practices also contribute to climate change mitigation. Yet, the public-good nature of most forest ES distorts the forest market price below its social value. As a result, as development pressure increases, traditional timber management at the interface is expected to become a transitional use, giving way to conversion to more valuable land uses such as residential development, while UWI open space remains undervalued. This, in turn, raises concern given the current increasing trend of forestland conversion at the UWI.
Moreover, the value of developable land held by a private forest owner derives both from the returns from the current use (timber production) and from the expected returns from future urban uses. Given that the decision to convert is conditional on the relative magnitude and timing of the returns of the two alternative uses, decision-making at the UWI must reflect the factors that influence both, that is, urban factors (e.g., residential rents and switching costs) and forestry-related factors (e.g., stumpage prices and regeneration costs), when considering the optimal rotation periods and conversion dates. Accounting for urban influences on UWI forestland practices is thus very important, because a growing population and increasingly land-consumptive development patterns will require more effective policies and programs to stem the tide of urban sprawl seen in many municipalities worldwide.
Matti Nummelin and Niko Urho
Conservation and sustainable use of biodiversity have been at the center of international policymaking for half a century. The main international biodiversity conventions and processes include the Convention on Biological Diversity (CBD) and its protocols, the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), the Convention on Wetlands of International Importance (Ramsar Convention), the World Heritage Convention (WHC), the Convention on the Conservation of Migratory Species of Wild Animals (CMS), the International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA), the International Plant Protection Convention (IPPC), the Commission on Genetic Resources for Food and Agriculture (CGRFA), and the International Convention for the Regulation of Whaling (ICRW). The governance of marine biodiversity in areas beyond national jurisdiction (BBNJ) is also discussed, as political focus has shifted to the protection of the oceans and is expected to culminate in the adoption of a new international agreement under the United Nations Convention on the Law of the Sea (UNCLOS). Other conventions and processes with links to biodiversity include the United Nations Convention to Combat Desertification (UNCCD), the United Nations Framework Convention on Climate Change (UNFCCC), and the United Nations Forum on Forests (UNFF).
Despite the multitude of instruments, governments are faced with the fact that biodiversity loss is spiraling and international targets are not being met. The Earth’s sixth mass extinction event has led to various initiatives to fortify the relevance of biodiversity in the UN system and beyond to accelerate action on the ground. In the face of an ever more complex international policy landscape on biodiversity, country delegates are seeking to improve efficiency and reduce fragmentation by enhancing synergies among multilateral environmental agreements and strengthening their science-policy interface. Furthermore, biodiversity has been reflected throughout the 2030 Agenda for Sustainable Development and is gradually gaining more ground in the human rights context. The Global Pact for the Environment, a new international initiative that aims to reinforce soft law commitments and increase coherence among environmental treaties, holds the potential to influence and strengthen the way biodiversity conventions function, but extensive discussions are still needed before concrete action is agreed upon.
Christopher Fleming and Christopher Ambrey
The method and practice of placing monetary values on environmental goods and services for which a conventional market price is otherwise unobservable is one of the most fertile areas of research in the field of natural resource and environmental economics. Initially motivated by the need to include environmental values in benefit-cost analysis, practitioners of non-market valuation have since found further motivation in national account augmentation and environmental damage litigation. Despite hundreds of applications and many decades of refinement, shortcomings in all of the techniques remain, and no single technique is considered superior to the others in all respects. Thus, techniques that expand the suite of options available to the non-market valuation practitioner have the potential to represent a genuine contribution to the field.
One technique to recently emerge from the economics of happiness literature is the “experienced preference method” or “life satisfaction approach.” Simply, this approach entails the inclusion of non-market goods as explanatory variables within micro-econometric functions of life satisfaction along with income and other covariates. The estimated coefficient for the non-market good yields, first, a direct valuation in terms of life satisfaction and, second, when compared to the estimated coefficient for income, the implicit willingness to pay for the non-market good in monetary terms.
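The coefficient-ratio logic of the life satisfaction approach can be sketched numerically. The illustration below is a minimal, hypothetical example (all variable names, data, and parameter values are invented): life satisfaction is regressed on log income and an environmental amenity, and the implicit willingness to pay follows from the ratio of the two estimated coefficients.

```python
import numpy as np

# Hypothetical sketch of the life satisfaction approach: regress life
# satisfaction on log income and a non-market good, then derive implicit
# willingness to pay (WTP) from the coefficient ratio.
rng = np.random.default_rng(0)
n = 5_000
income = rng.lognormal(mean=10.5, sigma=0.5, size=n)  # annual income
air_quality = rng.uniform(0, 1, size=n)               # non-market good (index)

# Assumed "true" preferences generating the synthetic data
life_sat = 2.0 + 0.8 * np.log(income) + 1.5 * air_quality + rng.normal(0, 1, n)

# Ordinary least squares: LS = b0 + b_income*ln(income) + b_good*air_quality
X = np.column_stack([np.ones(n), np.log(income), air_quality])
b0, b_income, b_good = np.linalg.lstsq(X, life_sat, rcond=None)[0]

# With income in logs, holding life satisfaction constant gives
# dy/dq = y * b_good / b_income: the marginal willingness to pay
# for a unit improvement in the amenity, evaluated at mean income.
wtp = income.mean() * b_good / b_income
print(f"coefficient on log income: {b_income:.2f}")
print(f"coefficient on amenity:    {b_good:.2f}")
print(f"implicit WTP at mean income: {wtp:,.0f}")
```

In applied work the regression would include the other covariates mentioned above and use survey data; the sketch only shows how the monetary valuation falls out of the two coefficients.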
The life satisfaction approach offers several advantages over more conventional non-market valuation techniques. For example, the approach does not ask individuals to directly value the non-market good in question, as is the case in contingent valuation. Nor does it ask individuals to make explicit trade-offs between market and non-market goods, as is the case in discrete choice modeling. The life satisfaction approach nonetheless has some potential limitations. Crucially, self-reported life satisfaction must be regarded as a good proxy for an individual’s utility. Furthermore, in order to yield reliable non-market valuation estimates, self-reported life satisfaction measures must: (1) contain information on respondents’ global evaluation of their life; (2) reflect not only stable inner states of respondents, but also current affects; (3) refer to respondents’ present life; and (4) be comparable across groups of individuals under different circumstances. Despite these conditions, there is growing evidence to support the suitability of individuals’ responses to life satisfaction questions for non-market valuation. Applications of the life satisfaction approach to the valuation of environmental goods and services to date include the valuation of air quality, airport noise, greenspace, scenic amenity, floods, and drought.
Knowledge of the important role that the environment plays in determining human health predates the modern public health era. However, the tendency to see health, disease, and their determinants as attributes of individuals rather than characteristics of communities meant that the role of the environment in human health was seldom accorded sufficient importance during much of the 20th century. Instead, research began to focus on specific risk factors that correlated with diseases of greatest concern, i.e., the non-communicable diseases such as cardiovascular disease, asthma, and diabetes. Many of these risk factors (e.g., smoking, alcohol consumption, and diet) were aspects of individual lifestyle and behaviors, freely chosen by the individual. Within this individual-centric framework of human health, the standard economic model for human health became primarily the Grossman model of health and health care demand.
In this model, an individual’s health stock may be increased by investing in health (by consuming health services, for example) or decreased by endogenous (e.g., smoking) or exogenous (e.g., age) individual factors. Within this model, individuals used their available resources, their budget, to purchase goods and services that either increased or decreased their health stock. Grossman’s model provides a consumption-based approach to human health, where individuals purchase goods and services required to improve their individual health in the marketplace. Grossman’s model of health assumes that the goods and services required to optimize good health can be purchased through market-based interactions and that these goods and services are optimally priced—that the value of the goods and services is reflected in their price.
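The health-stock dynamic at the heart of the Grossman model can be sketched in a few lines. This is a hedged illustration only: the parameter values, the linear depreciation schedule, and the function name below are hypothetical, chosen to show the mechanics of a stock depreciating with age and replenished by health investment.

```python
# Sketch of the Grossman health-stock dynamic (hypothetical parameters):
# next period's health stock is the current stock net of depreciation,
# plus gross investment in health (e.g., purchased health services).
def next_health_stock(H, investment, depreciation_rate):
    """H_{t+1} = H_t * (1 - delta_t) + I_t"""
    return H * (1.0 - depreciation_rate) + investment

H = 100.0  # initial health stock (arbitrary units)
for age in range(30, 35):
    delta = 0.02 + 0.001 * (age - 30)  # depreciation rising with age
    H = next_health_stock(H, investment=1.5, depreciation_rate=delta)
    print(f"age {age}: health stock {H:.1f}")
```

The sketch makes the model's key assumption visible: the only lever on the health stock is market-purchased investment, which is precisely where non-market environmental goods fall outside the framework.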
In reality, many types of goods and services that are good for human health are not available to purchase, or if they are available they are undervalued in the free market. Across the environmental and health literature, these goods and services are, today, broadly referred to as “ecosystem services for human health.” However, the quasi-public good nature of ecosystem services for human health means that the private market will generate a suboptimal environment for both individual and public health outcomes. In the face of continued austerity and scarce public resources, understanding the role of the environment in human health may help to alleviate future health care demand by decreasing the environmental risks (or increasing the environmental benefits) associated with health outcomes. However, taking advantage of the role that the environment plays in human health requires a fundamental reorientation of public health policy and spending to include environmental considerations.
Vincent Moreau and Guillaume Massard
The concept of metabolism takes root in biology and ecology as a systematic way to account for material flows in organisms and ecosystems. Early applications of the concept attempted to quantify the amount of water and food the human body processes to live and sustain itself. Similarly, ecologists have long studied the metabolism of critical substances and nutrients in ecological succession towards climax. With industrialization, the material and energy requirements of modern economic activities have grown exponentially, together with emissions to the air, water, and soil. From an analogy with ecosystems, the concept of metabolism grew into an analytical methodology for economic systems.
Research in the field of material flow analysis has developed approaches to modeling economic systems by assessing the stocks and flows of substances and materials for systems defined in space and time. Material flow analysis encompasses different methods: industrial and urban metabolism, input–output analysis, economy-wide material flow accounting, socioeconomic metabolism, and more recently material flow cost accounting. Each method operates at specific scales and uses particular reference substances (such as metals) and indicators (such as concentration). A material flow analysis study usually consists of four consecutive steps: (a) system definition, (b) data acquisition, (c) calculation, and (d) interpretation. The law of conservation of mass underlies every application, which implies that all material flows, as well as stocks, must be accounted for.
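The four steps and the conservation-of-mass constraint can be sketched for a single process. The example below is a minimal, hypothetical illustration (the process, flow names, and tonnages are invented): inputs must equal outputs plus the change in stock, which is how a material flow analysis balance is checked.

```python
# Minimal sketch of a material flow analysis balance (hypothetical data).
# Conservation of mass: inputs = outputs + change in stock, per process.

# Step (a) system definition: one process ("city"), flows in tonnes/year
inputs = {"imported_goods": 120.0, "construction_materials": 300.0}
outputs = {"exports": 40.0, "waste": 90.0, "emissions": 60.0}

# Step (b) data acquisition is represented by the dictionaries above.
# Step (c) calculation: total the flows and close the balance on stock.
total_in = sum(inputs.values())
total_out = sum(outputs.values())
stock_change = total_in - total_out  # net addition to stock (accumulation)

# Step (d) interpretation: the balance must close exactly.
assert abs(total_in - (total_out + stock_change)) < 1e-9
print(f"inputs: {total_in} t/yr, outputs: {total_out} t/yr, "
      f"net stock accumulation: {stock_change} t/yr")
```

In a real study each process in the system would carry such a balance, and an unexplained residual would point to a missing flow or a data error.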
In the early 21st century, material depletion, accumulation, and recycling are well-established cases of material flow analysis. Diagnostics and forecasts, as well as historical and backcast analyses, are ideally performed with material flow analysis to identify shifts in material consumption over product life cycles or in physical accounts, and to evaluate the material and energy performance of specific systems.
In practice, material flow analysis supports policy and decision making in urban planning, energy planning, economic and environmental performance, development of industrial symbiosis and eco-industrial parks, closing material loops and the circular economy, pollution remediation and control, and material and energy supply security. Although material flow analysis assesses the amount and fate of materials and energy rather than their environmental or human health impacts, a tacit assumption states that reduced material throughputs limit such impacts.
Vito Ferro and Vincenzo Bagarello
Field plots are often used to obtain experimental data (soil loss values corresponding to different climate, soil, topographic, crop, and management conditions) for predicting and evaluating soil erosion and sediment yield. Plots are used to study physical phenomena affecting soil detachment and transport, and their sizes are determined according to the experimental objectives and the type of data to be obtained. Studies on interrill erosion due to rainfall impact and overland flow need small plot widths (2–3 m) and lengths (< 10 m), while studies on rill erosion require plot lengths greater than 6–13 m. Sites must be selected to represent the range of uniform slopes prevailing in the farming area under consideration. Plots equipped to study interrill and rill erosion, like those used for developing the Universal Soil Loss Equation (USLE), measure erosion from the top of a slope where runoff begins; they must be wide enough to minimize the edge or border effects and long enough to develop downslope rills. Experimental stations generally include bounded runoff plots of known area, slope steepness, slope length, and soil type, from which both runoff and soil loss can be monitored. Once the boundaries defining the plot area are fixed, collecting equipment must be used to catch the plot runoff. A conveyance system (H-flume or pipe) carries total runoff to a unit sampling the sediment and a storage system, such as a sequence of tanks, in which sediments are accumulated. Simple methods have been developed for estimating the mean sediment concentration of all runoff stored in a tank by using the vertical concentration profile measured on a side of the tank. When a large number of plots are equipped, the sampling of suspension and consequent oven-drying in the laboratory are highly time-consuming. For this purpose, a sampler that can extract a column of suspension, extending from the free surface to the bottom of the tank, can be used.
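Depth-averaging a measured vertical concentration profile, as used to estimate the mean sediment concentration in a storage tank, can be sketched by simple trapezoidal integration. The depths and concentrations below are hypothetical, and the sketch assumes the suspension is horizontally well mixed so that a single sidewall profile represents the tank.

```python
import numpy as np

# Hypothetical sketch: depth-averaging a vertical concentration profile
# to estimate the mean sediment concentration of runoff stored in a tank.
depth = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # m, from free surface
conc = np.array([2.0, 3.5, 5.0, 7.0, 10.0, 14.0])  # g/L, rising toward bottom

# Trapezoidal rule: integrate concentration over depth, divide by depth.
dz = np.diff(depth)
mean_conc = np.sum(dz * (conc[:-1] + conc[1:]) / 2) / (depth[-1] - depth[0])
print(f"mean sediment concentration: {mean_conc:.2f} g/L")
```

Multiplying the mean concentration by the stored runoff volume then gives the plot soil loss for the event.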
For large plots, or where runoff volumes are high, a divisor that splits the flow into equal parts and passes one part in a storage tank as a sample can be used. Examples of these devices include the Geib multislot divisor and the Coshocton wheel. Specific equipment and procedures must be employed to detect the soil removed by rill and gully erosion. Because most of the soil organic matter is found close to the soil surface, erosion significantly decreases soil organic matter content. Several studies have demonstrated that the soil removed by erosion is 1.3–5 times richer in organic matter than the remaining soil. Soil organic matter facilitates the formation of soil aggregates, increases soil porosity, and improves soil structure, facilitating water infiltration. The removal of organic matter content can influence soil infiltration, soil structure, and soil erodibility.
There is scientific consensus that human activities have been altering the atmospheric composition and are a key driver of global climate and environmental changes since pre-industrial times (IPCC, 2013). It is a pressing priority to understand the Earth system response to atmospheric aerosol input from diverse sources, which so far remains one of the largest uncertainties in climate studies (Boucher et al., 2014; Forster et al., 2007). As the second most abundant component (in terms of mass) of atmospheric aerosols, mineral dust exerts tremendous impacts on Earth’s climate and environment through various interaction and feedback processes. Dust can also have beneficial effects where it deposits: Central and South American rain forests get most of their mineral nutrients from the Sahara; iron-poor ocean regions get iron; and dust in Hawaii increases plantain growth. In northern China as well as the midwestern United States, ancient dust storm deposits known as loess are highly fertile soils, but they are also a significant source of contemporary dust storms when soil-securing vegetation is disturbed. Accurate assessments of dust emission are of great importance to improvements in quantifying the diverse dust impacts.
Margarete Kalin, William N. Wheeler, Michael P. Sudbury, and Bryn Harris
The first treatise on mining and extractive metallurgy, published by Georgius Agricola in 1556, was also the first to highlight the destructive environmental side effects of mining and metals extraction, namely dead fish and poisoned water. These effects, unfortunately, are still with us. Since 1556, mining methods, knowledge of metal extraction, and chemical and microbial processes leading to the environmental deterioration have grown tremendously. Man’s insatiable appetite for metals and energy has resulted in mines vastly larger than those envisioned in 1556, compounding the deterioration. The annual amount of mined ore and waste rock is estimated to be 20 billion tons, covering 1,000 km². The industry also annually consumes 80 km³ of freshwater, which becomes contaminated.
Since metals are essential in modern society, cost-effective, sustainable remediation measures need to be developed. Engineered covers and dams enclose wastes and slow the weathering process, but, with time, become permeable. Neutralization of acid mine drainage produces metal-laden sludges that, in time, release the metals again. These measures are stopgaps at best, and are not sustainable. Focus should be on inhibiting or reducing the weathering rate, recycling, and curtailing water usage. The extraction of only the principal economic mineral or metal generally drives the economics, with scant attention being paid to other potential commodities contained in the deposit. Technology exists for recovering more valuable products and enhancing the project economics, resulting in a reduction of wastes and water consumption of up to 80% compared to “conventional processing.”
Implementation of such improvements requires a drastic change, a paradigm shift, in the way that the industry approaches metals extraction. Combining new extraction approaches, more efficient water usage, and ecological engineering methods to deal with wastes will increase the sustainability of the industry and reduce the pressure on water and land resources.
From an ecological perspective, waste rock and tailings need to be thought of as primitive ecosystems. These habitats are populated by heat-, acid- and saline-loving microbes (extremophiles). Ecological engineering utilizes geomicrobiological, physical, and chemical processes to change the mineral surface to encourage biofilm growth (the microbial growth form) within wastes by enhancing the growth of oxygen-consuming microbes. This reduces oxygen available for oxidation, leading to improved drainage quality. At the water–sediment interface, microbes assist in the neutralization of acid water (Acid Reduction Using Microbiology). To remove metals from the waste water column, indigenous biota are promoted (Biological Polishing) with inorganic particulate matter as flocculation agents. This ecological approach generates organic matter, which upon death settles with the adsorbed metals to the sediment. Once the metals reach the deeper, reducing zones of the sediments, microbial biomineralization processes convert the metals to relatively stable secondary minerals, forming biogenic ores for future generations.
The mining industry has developed and thrived in an age when resources, space, and water appeared limitless. With the widely accepted rise of the Anthropocene, and with global land and water shortages, the mining industry must become more sustainable. Not only is a paradigm shift in thinking needed, but also the will to implement such a shift is required for the future of the industry.