Nations rapidly industrialized after World War II, sharply increasing the extraction of resources from the natural world. Colonial empires broke up on land after the war, but they were re-created in the oceans. The United States, Japan, and the Soviet Union, as well as Britain, Germany, and Spain, industrialized their fisheries, replacing fleets of small-scale, independent artisanal fishing vessels with fewer but much larger government-subsidized ships. Nations such as South Korea and China, as well as the Eastern Bloc countries of Poland and Bulgaria, also began fishing on an almost unimaginable scale. Countries raced to find new stocks of fish to exploit. As the Cold War deepened, nations sought to negotiate fishery agreements with Third World nations. The conflict over territorial claims led to the development of the Law of the Sea process, starting in 1958, and to the adoption of 200-mile exclusive economic zones (EEZs) in the 1970s.
Fishing expanded with the understanding that fish stocks were robust and could withstand high harvest rates. The adoption of maximum sustained yield (MSY) after 1954 as the goal of postwar fishery negotiations assumed that fish populations produced a harvestable surplus and that scientists could determine how many fish could safely be caught. As fish stocks faltered under the onslaught of industrial fishing, scientists reassessed their assumptions about how many fish could be caught, but MSY, although modified, remains at the heart of modern fisheries management.
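The surplus-production reasoning behind MSY is usually formalized with the classic Schaefer (logistic) model; the sketch below illustrates that textbook logic rather than anything specific to this article, and the parameter values are hypothetical.

```python
# Sketch of the Schaefer surplus-production model underlying MSY.
# Parameter values are hypothetical, for illustration only.
r = 0.5        # intrinsic growth rate (per year)
K = 1_000_000  # carrying capacity (tons of biomass)

def surplus_production(biomass, r=r, K=K):
    """Annual surplus under logistic growth: the catch that leaves
    the stock unchanged at the given biomass level."""
    return r * biomass * (1 - biomass / K)

# The surplus peaks at half the carrying capacity, giving MSY = r*K/4.
msy = surplus_production(K / 2)
print(msy)  # 125000.0
```

The model's key assumption, echoed in the abstract above, is that a stock held at half its unfished size regenerates a predictable surplus every year; estimating r and K wrongly, as often happened, leads directly to overharvest.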
Vincent Moreau and Guillaume Massard
The concept of metabolism has its roots in biology and ecology as a systematic way to account for material flows in organisms and ecosystems. Early applications attempted to quantify the amounts of water and food the human body processes to live and sustain itself. Similarly, ecologists have long studied the metabolism of critical substances and nutrients as ecosystems succeed toward climax. With industrialization, the material and energy requirements of modern economic activities have grown exponentially, together with emissions to air, water, and soil. By analogy with ecosystems, the concept of metabolism grew into an analytical methodology for economic systems.
Research in the field of material flow analysis has developed approaches to modeling economic systems by assessing the stocks and flows of substances and materials for systems defined in space and time. Material flow analysis encompasses different methods: industrial and urban metabolism, input–output analysis, economy-wide material flow accounting, socioeconomic metabolism, and more recently material flow cost accounting. Each method operates at specific scales, with its own reference substances (such as metals) and indicators (such as concentration). A material flow analysis study usually consists of four consecutive steps: (a) system definition, (b) data acquisition, (c) calculation, and (d) interpretation. The law of conservation of mass underlies every application, which implies that all material flows, as well as stocks, must be accounted for.
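Because the law of conservation of mass must hold for every process in the system, the calculation step can be checked mechanically: for each process, total inflows must equal total outflows plus the change in stock. A minimal sketch in Python, with hypothetical process names and flow values:

```python
# Mass-balance check at the core of material flow analysis.
# For each process: sum(inflows) == sum(outflows) + stock change.
# Process names and flow values below are hypothetical.

processes = {
    "smelter": {"inflows": [100.0, 20.0],   # e.g., ore, scrap (kt/yr)
                "outflows": [95.0, 15.0],   # e.g., refined metal, slag
                "stock_change": 10.0},      # accumulation within the process
}

def balances(proc, tol=1e-6):
    """True if mass is conserved for this process within tolerance."""
    gap = sum(proc["inflows"]) - sum(proc["outflows"]) - proc["stock_change"]
    return abs(gap) < tol

for name, proc in processes.items():
    print(name, balances(proc))
```

In practice this check is how unmeasured flows are estimated: if all but one flow of a process is known, the mass balance determines the missing one.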
In the early 21st century, material depletion, accumulation, and recycling are well-established applications of material flow analysis. Diagnostic and forecasting studies, as well as historical or backcast analyses, can be performed within a material flow analysis to identify shifts in material consumption over product life cycles, to support physical accounting, and to evaluate the material and energy performance of specific systems.
In practice, material flow analysis supports policy and decision making in urban planning, energy planning, economic and environmental performance, the development of industrial symbiosis and eco-industrial parks, the closing of material loops and the circular economy, pollution remediation and control, and material and energy supply security. Although material flow analysis assesses the amount and fate of materials and energy rather than their environmental or human health impacts, a tacit assumption is that reduced material throughput limits such impacts.
Margarete Kalin, William N. Wheeler, Michael P. Sudbury, and Bryn Harris
The first treatise on mining and extractive metallurgy, published by Georgius Agricola in 1556, was also the first to highlight the destructive environmental side effects of mining and metals extraction, namely dead fish and poisoned water. These effects, unfortunately, are still with us. Since 1556, mining methods, knowledge of metal extraction, and understanding of the chemical and microbial processes that drive environmental deterioration have grown tremendously. Humanity's insatiable appetite for metals and energy has resulted in mines vastly larger than those envisioned in 1556, compounding the deterioration. The annual amount of mined ore and waste rock is estimated at 20 billion tons, covering some 1,000 km². The industry also consumes 80 km³ of freshwater annually, which becomes contaminated in the process.
Since metals are essential in modern society, cost-effective, sustainable remediation measures need to be developed. Engineered covers and dams enclose wastes and slow the weathering process but, with time, become permeable. Neutralization of acid mine drainage produces metal-laden sludges that, in time, release the metals again. These measures are stopgaps at best, and are not sustainable. The focus should instead be on inhibiting or reducing the weathering rate, recycling, and curtailing water usage. Extraction of only the principal economic mineral or metal generally drives the economics, with scant attention paid to other potential commodities contained in the deposit. Technology exists to recover additional valuable products and enhance project economics, reducing wastes and water consumption by up to 80% compared with conventional processing.
Implementation of such improvements requires a drastic change, a paradigm shift, in the way that the industry approaches metals extraction. Combining new extraction approaches, more efficient water usage, and ecological engineering methods to deal with wastes will increase the sustainability of the industry and reduce the pressure on water and land resources.
From an ecological perspective, waste rock and tailings need to be thought of as primitive ecosystems. These habitats are populated by heat-, acid- and saline-loving microbes (extremophiles). Ecological engineering utilizes geomicrobiological, physical, and chemical processes to change the mineral surface to encourage biofilm growth (the microbial growth form) within wastes by enhancing the growth of oxygen-consuming microbes. This reduces oxygen available for oxidation, leading to improved drainage quality. At the water–sediment interface, microbes assist in the neutralization of acid water (Acid Reduction Using Microbiology). To remove metals from the waste water column, indigenous biota are promoted (Biological Polishing) with inorganic particulate matter as flocculation agents. This ecological approach generates organic matter, which upon death settles with the adsorbed metals to the sediment. Once the metals reach the deeper, reducing zones of the sediments, microbial biomineralization processes convert the metals to relatively stable secondary minerals, forming biogenic ores for future generations.
The mining industry has developed and thrived in an age when resources, space, and water appeared limitless. With the widely accepted rise of the Anthropocene and growing global land and water shortages, the mining industry must become more sustainable. Not only is a paradigm shift in thinking needed, but also the will to implement such a shift is required for the future of the industry.
Theodore J. K. Radovich
Organic farming occupies a unique position among the world’s agricultural systems. While organic farming is not the only available model for sustainable food production, organic farmers and their supporters have been the most vocal advocates for a fully integrated agriculture that recognizes a link between the health of the land, the food it produces, and those who consume it. Advocacy for the biological basis of agriculture and the deliberate restriction or prohibition of many agricultural inputs arose in response to potential and observed negative environmental impacts of new agricultural technologies introduced in the 20th century. A primary focus of organic farming is to enhance soil ecological function by building soil organic matter, which in turn supports the biota on which soil health and the health of the agroecosystem depend.
The rapid growth in demand for organic products in the late 20th and early 21st centuries is based on consumer perception that organically grown food is better for the environment and human health. Although some trends in chemical quality differences between organic and non-organic products have been documented, whether these differences are large enough to be meaningful remains unclear. There is stronger evidence to suggest that organic systems pose less risk to the environment, particularly with regard to water quality; however, as the intensity of management in organic farming increases, the potential risk to the environment is expected to increase as well. In the early 21st century there has been much discussion centered on the apparent bifurcation of organic farming into two approaches: “input substitution” and “system redesign.” The former approach is a more recent phenomenon associated with pragmatic considerations of scaling up the size of operations and long-distance shipping to take advantage of distant markets. Critics argue that this approach represents a “conventionalization” of organic agriculture that will erode the potential benefits of organic farming to the environment, human health, and social welfare. A current challenge for organic farming systems is to reconcile the differing views among organic producers regarding issues arising from the rapid growth of organic farming.
Peter J. Schubert
Renewable energy was used exclusively by the first humans and is likely to be the predominant source for future humans. Between these times the use of extracted resources such as coal, oil, and natural gas has created an explosion of population and affluence, but also of pollution and dependency. This article explores the advent of energy sources in a broad social context including economics, finance, and policy. The means of producing renewable energy are described in an accessible way, highlighting the broad range of considerations in their development, deployment, and ability to scale to address the entirety of human enterprises.
Scott M. Moore
It has long been accepted that non-renewable natural resources like oil and gas are often the subject of conflict, both between nation-states and among social groups. But since the end of the Cold War, the idea that renewable resources like water and timber might also be a cause of conflict has steadily gained credence. This is particularly true in the case of water: in the early 1990s, a senior World Bank official famously predicted that “the wars of the next century will be fought over water,” while two years ago Indian strategist Brahma Chellaney made a splash in North America by claiming that water would be “Asia’s New Battleground.” But it has not quite turned out that way. The world has, so far, avoided inter-state conflict over water in the 21st century, but it has witnessed many localized conflicts, some involving considerable violence. As population growth, economic development, and climate change place growing strains on the world’s fresh water supplies, the relationship between resource scarcity, institutions, and conflict has become a topic of vocal debate among social and environmental scientists.
The idea that water scarcity leads to conflict is rooted in three common assertions. The first of these arguments is that, around the world, once-plentiful renewable resources like fresh water, timber, and even soils are under increasing pressure, and are therefore likely to stoke conflict among increasing numbers of people who seek to utilize dwindling supplies. A second, and often corollary, argument holds that water’s unique value to human life and well-being—namely that there are no substitutes for water, as there are for most other critical natural resources—makes it uniquely conducive to conflict. Finally, a third presumption behind the water wars hypothesis stems from the fact that many water bodies, and nearly all large river basins, are shared among multiple countries. When an upstream country can harm its downstream neighbor by diverting or controlling flows of water, the argument goes, conflict is likely to ensue.
But each of these assertions depends on making assumptions about how people react to water scarcity, the means they have at their disposal to adapt to it, and the circumstances under which they are apt to cooperate rather than to engage in conflict. Untangling these complex relationships promises a more refined understanding of whether and how water scarcity might lead to conflict in the 21st century—and how cooperation can be encouraged instead.
James B. London
Coastal zone management (CZM) has evolved since the enactment of the U.S. Coastal Zone Management Act of 1972, which was the first comprehensive program of its type. The newer iteration of Integrated Coastal Zone Management (ICZM), as applied in the European Union (2000, 2002), establishes priorities and a comprehensive strategy framework. While coastal management was established in large part to address issues of both development and resource protection in the coastal zone, conditions have changed. Accelerated rates of sea level rise (SLR) as well as continued rapid development along the coasts have increased vulnerability. The article examines changing conditions over time and the role of CZM and ICZM in addressing increased climate-related vulnerabilities along the coast.
The article argues that effective adaptation strategies will require a sound information base and an institutional framework that appropriately addresses the risk of development in the coastal zone. The information base has improved through recent advances in technology and geospatial data quality. Critical for decision-makers will be sound information to identify vulnerabilities, formulate options, and assess the viability of a set of adaptation alternatives. The institutional framework must include the political will to act decisively and send the right signals to encourage responsible development patterns. At the same time, as communities are likely to bear higher costs for adaptation, it is important that they are given appropriate tools to effectively weigh alternatives, including the cost avoidance associated with corrective action. Adaptation strategies must be pro-active and anticipatory. Failure to act strategically will be fiscally irresponsible.
Frank W. Geels
Addressing persistent environmental problems such as climate change or biodiversity loss requires shifts to new kinds of energy, mobility, housing, and agro-food systems. These shifts are called socio-technical transitions because they involve not just changes in technology but also changes in consumer practices, policies, cultural meanings, infrastructures, and business models. Socio-technical transitions to sustainability are challenging for mainstream social sciences because they are multiactor, long-term, goal-oriented, disruptive, contested, and nonlinear processes. Sustainability transitions are being investigated by a new research community, which uses a socio-technical Multi-Level Perspective (MLP) as one of its orienting frameworks. Focusing on multidimensional struggles between “green” innovations and entrenched systems, the MLP suggests that transitions involve alignments of processes within and between three analytical levels: niche innovations, socio-technical regimes, and an exogenous socio-technical landscape. To understand more specific change mechanisms, the MLP mobilizes ideas from evolutionary economics, sociology of innovation, and institutional theory. Different phases, actors, and struggles are distinguished to understand the complexities of sustainability transitions, while still providing analytical traction and policy advice. The MLP draws attention to socio-technical systems as a new unit of analysis, which is more comprehensive than a micro-focus on individuals and more concrete than a macro-focus on a green economy. It also forms a new analytical framework that spans several stale dichotomies in environmental social science debates related to agency or structure and behavioral or technical change. The MLP accommodates stability and change and offers an integrative view on transitions, ranging from local projects to niche innovations to sector-level regimes and broader societal contexts. 
This new interdisciplinary research is attracting increasing attention from the European Environment Agency, the Intergovernmental Panel on Climate Change (IPCC), and the Organisation for Economic Co-operation and Development (OECD).
Luis S. Pereira and José M. Gonçalves
Surface irrigation is the oldest and most widely used irrigation method, accounting for more than 83% of the world’s irrigated area. It comprises traditional systems, developed over millennia, and modern systems with mechanized and often automated water application and precise land-leveling. It adapts well to non-sloping conditions, low to medium soil infiltration characteristics, most crops, and crop mechanization, as well as to varied environmental conditions. Modern methods provide for water and energy saving, control of environmental impacts, labor saving, and cropping economic success, allowing surface irrigation to compete with pressurized irrigation methods. Surface irrigation refers to a variety of gravity-driven applications of irrigation water, which infiltrates into the soil while flowing over the field surface. How and when water flows over the field and infiltrates the soil determines the irrigation phases—advance, maintenance or ponding, depletion, and recession—which vary with the irrigation method, namely paddy basin, leveled basin, border, and furrow irrigation, generally used for field crops, and wild flooding and water spreading from contour ditches, used for pasture lands. System performance is commonly assessed using the distribution uniformity indicator, while management performance is assessed with the application efficiency or the beneficial water use fraction. The factors influencing system performance are multiple and interacting—inflow rate, field length and shape, soil hydraulic roughness, field slope, soil infiltration rate, and cutoff time—while management performance, in addition to these factors, depends upon the soil water deficit at the time of irrigation, and thus on how well farmers are able to manage irrigation. The process of surface irrigation is complex to describe because it combines surface flow with infiltration into the soil profile.
Numerous mathematical computer models have therefore been developed for its simulation, aimed both at design, which adopts a target performance, and at field evaluation of actual performance. Using models in design makes it possible to take into consideration the factors referred to above and, when coupled with a decision support system or multicriteria analysis, to account for economic and environmental constraints and issues as well.
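The two performance indicators named above have simple standard definitions. The sketch below uses the common low-quarter form of distribution uniformity; the function names and the measured infiltrated depths are hypothetical, chosen only to illustrate the calculations.

```python
# Sketch of two common surface-irrigation performance indicators.
# Function names and depth values are illustrative, not from a specific model.

def distribution_uniformity(depths):
    """Low-quarter DU: mean of the lowest quarter of infiltrated
    depths divided by the overall mean depth."""
    d = sorted(depths)
    lowest_quarter = d[:max(1, len(d) // 4)]
    return (sum(lowest_quarter) / len(lowest_quarter)) / (sum(d) / len(d))

def application_efficiency(stored_in_root_zone, applied):
    """Fraction of applied water beneficially stored in the root zone."""
    return stored_in_root_zone / applied

# Hypothetical infiltrated depths (mm) measured along a furrow:
depths = [60, 55, 50, 48, 45, 42, 40, 38]
print(round(distribution_uniformity(depths), 2))  # 0.83
print(application_efficiency(stored_in_root_zone=70, applied=100))  # 0.7
```

The split mirrors the abstract's distinction: DU reflects how evenly the system wets the field (a system property), while application efficiency also depends on whether the applied volume matched the soil water deficit (a management decision).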
There are various aspects favoring and limiting the adoption of surface irrigation. Favorable aspects include the simplicity of its adoption at farm level in flat lands with low infiltration rates, namely when water conveyance and distribution are performed with canal and/or low-pressure pipe systems, low capital investment, and low energy consumption. The most significant limitations include high soil infiltration rates and high variability of infiltration throughout the field, land-leveling requirements, the need to control a constant inflow rate, difficulties in matching irrigation duration with the soil water deficit at the time of irrigation, and difficult access to equipment for mechanized and automated water application and distribution. The modernization of surface irrigation systems and design models, as well as of models and tools to support surface irrigation management, has significantly improved water use and productivity, and thus the competitiveness of surface irrigation.