Along with ceramics production, sedentism, and herding, agriculture is a major component of the Neolithic as it is defined in Europe. Therefore, the agricultural system of the first Neolithic societies and the dispersal of exogenous cultivated plants to Europe are the subject of many scientific studies. To work on these issues, archaeobotanists rely on residual plant remains—crop seeds, weeds, and wild plants—from archaeological structures such as detritic pits and, less often, storage contexts. To date, no plant of economic value has been identified as having been domesticated in Western Europe, except possibly the opium poppy. The earliest seeds have been identified at archaeological sites dated to about 5500–5200 BCE.
The Neolithic pioneers settled in an area with a long tradition of hunting and gathering. The Neolithization of Europe followed a colonization model. The Mesolithic groups, although they exploited plant resources such as hazelnut more or less intensively, did not significantly change the landscape; the impact of their settlements and activities is hardly noticeable through palynology, for example. Control over the reproduction of plants certainly increased the ecological success of Homo sapiens, bringing, among other things, demographic growth and the ability to settle in areas that until then had been poorly suited to year-round occupation. The characterization of past agricultural systems (crop plants, technical processes, and the impact of anthropogenic activities on the landscape) is essential for understanding the interrelation of human societies and the plant environment. This interrelation undoubtedly changed profoundly with the Neolithic Revolution.
Worldwide, governments subsidize agriculture at a rate of approximately 1 billion dollars per day. This figure roughly doubles when export and biofuel production subsidies and state financing for dams and river basin engineering are included. These policies guide land use in numerous ways, including growers’ choices of crops and buyers’ demand for commodities. The three types of state subsidies that shape land use and the environment are land settlement programs, price and income supports, and energy and emissions initiatives. Together these subsidies have created perennial surpluses in global stores of cereal grains, cotton, and dairy, with production increases outstripping population growth. Subsidies to land settlement, to crop prices, and to the processing and refining of cereals and fiber can therefore be shown to have independent and largely deleterious effects on soil fertility, freshwater supplies, biodiversity, and atmospheric carbon.
In 1945, the Amazon biome was still mostly intact. The scars of ancient cultural developments in the Andes and the Amazon lowlands had healed, and the impacts of rubber and other resource exploitation were reversible. Very few roads existed, and only on the periphery. In the 1950s, and especially in the 1960s, Brazil and the Andean countries launched ambitious road-building and colonization projects, largely driven by Brazilian geopolitical concerns. Interest in the Amazon became much more intense in the 1970s as forest loss began to raise worldwide concern. Construction of more and better roads continued at an exponentially growing pace in each following decade, multiplying the correlated deforestation and forest degradation throughout the Amazon. A point of no return was reached when interoceanic roads crossed the borders between Brazil and the Andean countries in the 2000s, exposing the remaining safe havens for indigenous people and nature. It is commonly estimated that today no less than 18% of the forest has been replaced with agriculture and that more than 50% of the remaining forest is significantly degraded. Most deforested land, especially in the Andean countries, is wasted or scarcely used. Oil, mining, and intense urban development, as well as intensive agriculture, have spread serious water and soil contamination throughout the region. Logging, fisheries, and hunting have led to the successive commercial extinction of valuable species.
Theories regarding the importance of biogeochemical cycles had been in development since the 1970s, but it was in the late 1980s that the popular view of the environmental value of the Amazon as the “lungs of the planet” became dominant. Confirmation of the Amazon’s role as a carbon sink added some international pressure for its protection. In general, however, the many scientific discoveries regarding the Amazon have not been helpful in improving its conservation. Instead, a combination of new agricultural technologies, anthropocentric philosophies, and economic changes has strongly promoted forest clearing.
From the 1980s to the present day, Amazon conservation efforts have increasingly diversified and now consist of five theoretically complementary strategies: (1) the creation of more, larger, and better-managed protected areas, including biological corridors; (2) the protection of more and larger indigenous territories; (3) the promotion of a series of “sustainable use” options such as “community-based conservation,” sustainable forestry, and agroforestry; (4) the financing of conservation through debt swaps and related financial mechanisms for mitigating climate change; and (5) the use of better legislation, monitoring, and control. Five small protected areas have existed in the Amazon since the early 1960s, but, in response to the road-building boom of the 1970s, several larger patches of forest were set aside with the aim of conserving viable samples of biological diversity. Today, around 25% of the Amazon is designated as protected areas, but almost half of these areas are categorized in a way that allows human presence and resource exploitation, and there is no effective management. Another 25.3% is designated for indigenous people, who may or may not conserve the forest. Excluding areas of overlap, the two types of protected areas together cover 41.2% of the Amazon. Neither strategy has fully achieved its objective, alone or together, and development pressures and threats grow as road construction and deforestation continue relentlessly, with increasing funding from multilateral and national banks and pressure from transnational enterprises.
The future will be shaped by unprecedented agricultural expansion and the corresponding intensification of deforestation and forest degradation. Additionally, the Amazon basin will be impacted by new, larger hydraulic works. Mining will increase and spread. Policy makers of the Amazon countries still view the region as the future for expanding conventional development, and the population remains indifferent.
Throughout the 1900s, the warmth of the current interglaciation was viewed as entirely natural in origin (prior to greenhouse-gas emissions during the industrial era). In the view of physical scientists, orbital variations had ended the previous glaciation and produced a warmer climate but had not yet brought the current warm interval to an end. Most historians focused on urban and elite societies, paying much less attention to how farmers were altering the land. Historical studies were also constrained by the fact that written records extend back only a few hundred to at most 3,500 years.
The first years of the new millennium saw a major challenge to the ruling paradigm. Evidence from deep ice drilling in Antarctica showed that the early stages of the three interglaciations prior to the current one were marked by decreases in concentrations of carbon dioxide (CO2) and methane (CH4) that must have been natural in origin. During the earliest part of the current (Holocene) interglaciation, gas concentrations initially showed similar decreases, but they then began to rise 7,000–5,000 years ago. These anomalous (“wrong-way”) trends are interpreted by many scientists as anthropogenic, with support from scattered evidence of deforestation by the first farmers (which increases atmospheric CO2) and early irrigated rice agriculture (which emits CH4).
During a subsequent interval of scientific give-and-take, several papers have criticized this new hypothesis. The most common objection has been that there were too few people living millennia ago to have had large effects on greenhouse gases and climate. Several land-use simulations estimate that CO2 emissions from pre-industrial forest clearance amounted to just a few parts per million (ppm), far less than the 40 ppm estimate in the early anthropogenic hypothesis. Other critics have suggested that, during the best orbital analog to the current interglaciation, about 400,000 years ago, interglacial warmth persisted for 26,000 years, compared to the 10,000-year duration of the current interglaciation (implying more warmth yet to come). A geochemical index of the isotopic composition of CO2 molecules indicates that terrestrial emissions of 12C-rich CO2 were very small prior to the industrial era.
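A rough back-of-envelope conversion helps show why the two sets of numbers differ so sharply; the equivalence of about 2.13 GtC per ppm of atmospheric CO2 is standard, while the airborne fraction and emission totals used below are illustrative assumptions rather than values from the studies discussed:

\[ \Delta \mathrm{CO_2\ (ppm)} \approx \frac{f \times E}{2.13\ \mathrm{GtC\ ppm^{-1}}}, \]

where E is the cumulative carbon emitted from land clearance (in GtC) and f is the fraction remaining in the atmosphere after ocean uptake. With f of roughly 0.25 on millennial timescales, emissions of about 50 GtC raise CO2 by only about 6 ppm, whereas sustaining a 40 ppm anomaly would require cumulative emissions on the order of 300 GtC or more.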
Subsequently, new evidence has once again favored the early anthropogenic hypothesis, albeit with some modifications. Examination of cores reaching deeper into Antarctic ice reconfirms that the upward gas trends in this interglaciation differ from the average downward trends in seven previous ones. Historical data from Europe and China show that early farmers used more land per capita and emitted much more carbon than the first land-use simulations suggested. Examination of pollen trends in hundreds of European lakes and peat bogs has shown that most forests had been cut well before the industrial era. Mapping of the spread of irrigated rice by archaeobotanists indicates that emissions from rice paddies can explain much of the anomalous CH4 rise in pre-industrial time. The early anthropogenic hypothesis is now broadly supported by converging evidence from a range of disciplines.
Benjamin S. Arbuckle
The domestication of livestock animals has long been recognized as one of the most important and influential events in human prehistory and has been the subject of scholarly inquiry for centuries. Modern understandings of this important transition place it within the context of the origins of food production in the so-called Neolithic Revolution, where it is particularly well documented in southwest Asia. Here, a combination of archaeofaunal, isotopic, and DNA evidence suggests that sheep, goats, cattle, and pigs were first domesticated over a period of several millennia within sedentary communities practicing intensive cultivation beginning at the Pleistocene–Holocene transition. Resulting from more than a century of data collection, our understanding of the chronological and geographic features of the transition from hunting to herding indicates that the 9th millennium
The emergence of the environment as a security imperative is something that could have been avoided. Early indications showed that if governments did not pay attention to critical environmental issues, these issues would move up the security agenda. As far back as the Club of Rome’s 1972 report, The Limits to Growth, the variables highlighted for policy makers included world population, industrialization, pollution, food production, and resource depletion, all of which affect how we live on this planet.
The term environmental security did not come into general use until the 2000s, but it had its first substantive framing in 1977, in Lester Brown’s Worldwatch Paper 14, “Redefining National Security.” Brown argued that the traditional view of national security was based on the “assumption that the principal threat to security comes from other nations.” He went on to argue that future security “may now arise less from the relationship of nation to nation and more from the relationship of man to nature.”
Of the major documents to come out of the 1992 Earth Summit, the Rio Declaration on Environment and Development was probably the first in which governments tried to frame environmental security. Principle 2 states: “States have, in accordance with the Charter of the United Nations and the principles of international law, the sovereign right to exploit their own resources pursuant to their own environmental and developmental policies, and the responsibility to ensure that activities within their jurisdiction or control do not cause damage to the environment of other States or of areas beyond the limits of national jurisdiction.”
In 1994, the UN Development Programme divided Human Security into distinct categories, including:
• Economic security (assured and adequate basic incomes).
• Food security (physical and affordable access to food).
• Health security.
• Environmental security (access to safe water, clean air and non-degraded land).
By the time of the World Summit on Sustainable Development in 2002, water had begun to be identified as a security issue, first at the Rio+5 conference, and as a food security issue at the 1996 FAO World Food Summit. In 2003, UN Secretary-General Kofi Annan set up a High-Level Panel on “Threats, Challenges, and Change” to help the UN prevent and remove threats to peace. The panel began to lay down new concepts of collective security, identifying six clusters of threats for member states to consider. These included economic and social threats, such as poverty, infectious disease, and environmental degradation.
By 2007, health was being recognized as a part of the environmental security discourse, with World Health Day celebrating “International Health Security (IHS).” In particular, it looked at emerging diseases, economic stability, international crises, humanitarian emergencies, and chemical, radioactive, and biological terror threats. Environmental and climate changes have a growing impact on health. The 2007 Fourth Assessment Report (AR4) of the UN Intergovernmental Panel on Climate Change (IPCC) identified climate security as a key challenge for the 21st century. This was followed up in 2009 by the UCL-Lancet Commission on Managing the Health Effects of Climate Change—linking health and climate change.
In the run-up to Rio+20 and the launch of the Sustainable Development Goals, the issue of the climate-food-water-energy nexus, or rather the inter-linkages between these issues, was highlighted. The dialogue on environmental security has moved from a fringe discussion to being central to our political discourse—this is because of the lack of implementation of previous international agreements.
Russian environmental history is a new field of inquiry, with the first archivally based monographs appearing only in the last years of the 20th century. Despite the field’s youth, scholars studying the topic have developed two distinct and contrasting approaches to its central question: How should the relationship between Russian culture and the natural world be characterized? Implicit in this question are two others: Is the Russian attitude toward the non-human world more sensitive than that which prevails in the West; and if so, is the Russian environment healthier or more stable than that of the United States and Western Europe? In other words, does Russia, because of its traditional suspicion of individualism and consumerism, have something to teach the West? Or, on the contrary, has the Russian historical tendency toward authoritarianism and collectivism facilitated predatory policies that have degraded the environment? Because environmentalism as a political movement and environmental history as an academic subject both emerged during the Cold War, at a time when the Western social, political, and economic system vied with the Soviet approach for support around the world, the comparative (and competitive) aspect of Russian environmental history has always been an important factor, although sometimes an implicit one. Accordingly, the existing scholarly works about Russian environmental history generally fall into one of two camps: one very critical of the Russian environmental record and the seeming disregard of the Russian government for environmental damage, and a somewhat newer group of works that draw attention to the fundamentally different concerns that motivate Russian environmental policies. The first group emphasizes Russian environmental catastrophes such as the desiccated Aral Sea, the eroded Virgin Lands, and the public health epidemics related to the severely polluted air of Soviet industrial cities. The environmental crises that the first group cites are, most often, problems once prevalent in the West, but successfully ameliorated by the environmental legislation of the late 1960s and early 1970s. The second group, in contrast, highlights Russian environmental policies that do not have strict Western analogues, suggesting that a thorough comparison of the Russian and Western environmental records requires, first of all, a careful examination of what constitutes environmental responsibility.
The Mississippi River, the longest in North America, is really two rivers geophysically. The volume is less, the slope steeper, the velocity greater, and the channel straighter in its upper portion than in its lower portion. Below the mouth of the Ohio River, the Mississippi meanders through a continental depression that it has slowly filled with sediment over many millennia. Some limnologists and hydrologists consider the transitional middle portion of the Mississippi, where the waters of its two greatest tributaries, the Missouri and Ohio rivers, join it, to comprise a third river, in terms of its behavioral patterns and stream and floodplain ecologies.
The Mississippi River humans have known, with its two or three distinct sections, is a relatively recent formation. The lower Mississippi only settled into its current formation following the last ice age and the dissipation of water released by receding glaciers. Much of the current river delta is newer still, having taken shape over the last three to five hundred years.
Within the lower section of the Mississippi are two subsections, the meander zone and the delta. Below Cape Girardeau, Missouri, the river passes through Crowley’s Ridge and enters the wide and flat alluvial plain. Here the river meanders in great loops, often doubling back on itself and forming cutoffs that, if abandoned by the river, become lakes. Until modern times, most of the plain, approximately 35,000 square miles, comprised a vast and rich—rich in terms of biomass production—ecological wetland sustained by annual Mississippi River floods that brought not just water but fertile sediment—topsoil—gathered from across much of the continent. People thrived in the Mississippi River meander zone. Some of the most sophisticated indigenous cultures of North America emerged here. Between Natchez, Mississippi, and Baton Rouge, Louisiana, at Old River Control, the Mississippi begins to fork into distributary channels, the largest of which is the Atchafalaya River. The Mississippi River delta begins here, formed of river sediment accrued upon the continental shelf. In the delta the land is wetter and the groundwater table shallower. Closer to the sea, the water becomes brackish, and patterns of river sediment distribution are shaped by ocean tides and waves. The delta is frequently buffeted by hurricanes.
Over the last century and a half people have transformed the lower Mississippi River, principally through the construction of levees and drainage canals that have effectively disconnected the river from the floodplain. The intention has been to dry the land adjacent to the river, to make it useful for agriculture and urban development. However, an unintended effect of flood control and wetland drainage has been to interfere with the flood-pulse process that sustained the lower valley ecology, and with the process of sediment distribution that built the delta and much of the Louisiana coastline. The seriousness of the delta’s deterioration has become especially apparent since Hurricane Katrina, and has moved conservation groups to action. They are pushing politicians and engineers to reconsider their approach to Mississippi River management.
Mark V. Barrow
The prospect of extinction, the complete loss of a species or other group of organisms, has long provoked strong responses. Until the turn of the 18th century, deeply held and widely shared beliefs about the order of nature led to a firm rejection of the possibility that species could entirely vanish. During the 19th century, however, resistance to the idea of extinction gave way to widespread acceptance following the discovery of the fossil remains of numerous previously unknown forms and direct experience with contemporary human-driven decline and the destruction of several species. In an effort to stem continued loss, at the turn of the 19th century, naturalists, conservationists, and sportsmen developed arguments for preventing extinction, created wildlife conservation organizations, lobbied for early protective laws and treaties, pushed for the first government-sponsored parks and refuges, and experimented with captive breeding. In the first half of the 20th century, scientists began systematically gathering more data about the problem through global inventories of endangered species and the first life-history and ecological studies of those species.
The second half of the 20th and the beginning of the 21st centuries have been characterized both by accelerating threats to the world’s biota and greater attention to the problem of extinction. Powerful new laws, like the U.S. Endangered Species Act of 1973, have been enacted and numerous international agreements negotiated in an attempt to address the issue. Despite considerable effort, scientists remain fearful that the current rate of species loss is similar to that experienced during the five great mass extinction events identified in the fossil record, leading to declarations that the world is facing a biodiversity crisis. Responding to this crisis, often referred to as the sixth extinction, scientists have launched a new interdisciplinary, mission-oriented discipline, conservation biology, that seeks not just to understand but also to reverse biota loss. Scientists and conservationists have also developed controversial new approaches to the growing problem of extinction: rewilding, which involves establishing expansive core reserves that are connected with migratory corridors and that include populations of apex predators, and de-extinction, which uses genetic engineering techniques in a bid to resurrect lost species. Even with the development of new knowledge and new tools that seek to reverse large-scale species decline, a new and particularly imposing danger, climate change, looms on the horizon, threatening to undermine those efforts.
Fisheries science emerged in the mid-19th century, when scientists volunteered to conduct conservation-related investigations of commercially important aquatic species for the governments of North Atlantic nations. Scientists also promoted oyster culture and fish hatcheries to sustain the aquatic harvests. Fisheries science fully professionalized with specialized graduate training in the 1920s.
The earliest stage, involving inventory science, trawling surveys, and natural history studies, continued to dominate into the 1930s within the European colonial diaspora. Meanwhile, scientists in the Scandinavian countries, Britain, Germany, the United States, and Japan began developing quantitative fisheries science after 1900, incorporating hydrography, age-determination studies, and population dynamics. Norwegian biologist Johan Hjort’s 1914 finding, that the size of a large “year class” of juvenile fish is unrelated to the size of the spawning population, created the central foundation and conundrum of later fisheries science. By the 1920s, fisheries scientists in Europe and America were striving to develop a theory of fishing. They attempted to build predictive models that incorporated statistical and quantitative analysis of past fishing success, as well as quantitative values reflecting a species’ population demographics, as a basis for predicting future catches and managing fisheries for sustainability. This research was supported by international scientific organizations such as the International Council for the Exploration of the Sea (ICES), the International Pacific Halibut Commission (IPHC), and the United Nations’ Food and Agriculture Organization (FAO).
Both nationally and internationally, political entanglement was an inevitable feature of fisheries science. Beyond substituting their science for fishers’ traditional and practical knowledge, many postwar fisheries scientists also brought progressive ideals into fisheries management, advocating fishing for a maximum sustainable yield. This in turn made it possible for governments, economists, and even scientists to use this nebulous target to project preferred social, political, and economic outcomes, while altogether discarding any practical conservation measures to rein in globalized postwar industrialized fishing. These ideals were also exported to nascent postwar fisheries science programs in developing Pacific and Indian Ocean nations and in Eastern Europe and Turkey.
The vision of mid-century triumphalist science, that industrial fisheries could be scientifically managed like any other industrial enterprise, was thwarted by commercial fish stock collapses, beginning slowly in the 1950s and accelerating after 1970, including the massive northern cod crisis of the early 1990s. In the 1980s, scientists, aided by more powerful computers, attempted to build multi-species models to understand the different impacts of a fishery on various species. Daniel Pauly led the way with multi-species models for tropical fisheries, where the need for such models was most urgent, and pioneered the global database FishBase, using fishing data collected by the FAO and national bodies. In Canada, the cod crisis inspired Ransom Myers to use large databases for fisheries analysis to show the role of overfishing in causing that crisis. After 1980, population ecologists also demonstrated the importance of life history data for understanding fish species’ responses to fishery-induced population change and environmental change.
With fishing continuing to shrink many global commercial stocks, scientists have demonstrated how different measures can be used to manage fisheries for species with different life-history profiles. Aside from the need for effective scientific monitoring, the biggest ongoing challenge remains getting politicians, governments, fishing industry members, and other stakeholders to commit to scientifically recommended long-term conservation measures.
David E. Clay, Sharon A. Clay, Thomas DeSutter, and Cheryl Reese
Since the discovery that food security could be improved by pushing seeds into the soil and later harvesting a desirable crop, agriculture and agronomy have gone through cycles of discovery, implementation, and innovation. Discoveries have produced predicted and unpredicted impacts on the production and consumption of locally produced foods. Changes in technology, such as the development of the self-cleaning steel plow in the 19th century, provided a critical tool needed to cultivate and seed annual crops in the Great Plains of North America. However, plowing the Great Plains would not have been possible without the domestication of plants and animals and the invention of the yoke and harness. Associated with plowing the prairies were extensive soil nutrient mining, a rapid loss of soil carbon, and increased wind and water erosion. More recently, the development of genetically modified organisms (GMOs) and no-tillage planters has contributed to increased adoption of conservation tillage, which is less damaging to the soil. In the future, the ultimate impact of climate change on agronomic practices in the North American Great Plains is unknown. However, projected increasing temperatures and decreased rainfall in the southern Great Plains (SGP) will likely reduce agricultural productivity. Different results are likely in the northern Great Plains (NGP), where higher temperatures can lead to increased agricultural intensification, the conversion of grassland to cropland, increased fragmentation of wildlife habitat, and increased soil erosion. Precision farming, conservation practices, cover crops, and the creation of plants better adapted to their local environments can help mitigate these effects. However, changing practices requires that farmers and their advisers understand the limitations of their soils, plants, environment, and production systems. Failure to implement appropriate management practices can result in a rapid decline in soil productivity, diminished water quality, and reduced wildlife habitat.
The term ecological design was coined in a 1996 book by Sim van der Ryn and Stuart Cowan, in which the authors argued for a seamless integration of human activities with natural processes to minimize destructive environmental impact. Following their cautionary statements, William McDonough and Michael Braungart published their 2002 manifesto Cradle to Cradle, which proposed a circular political economy to replace the linear logic of “cradle to grave.” These books have been foundational in architecture and design discussions of sustainability, establishing the technical dimension, as well as the logic, of efficiency, optimization, and evolutionary competition in environmental debates. Cradle to Cradle has evolved into a production model implemented by a number of companies, organizations, and governments around the world, and it has also become a registered trademark and a product certification.
Popularized recently, these developments imply a very short history for the growing field of ecological design. However, such accounts hark back as far as Ernst Haeckel’s definition of the field of ecology in 1866 as an integral link between living organisms and their surroundings (Generelle Morphologie der Organismen, 1866); and Henry David Thoreau’s famous 1854 manual for self-reliance and living in proximity with natural surroundings, in the cabin that he built at Walden Pond, Massachusetts (Walden; or, Life in the Woods, 1854).
Since World War II, contrary to the position of ecological design as a call to fit harmoniously within the natural world, there has been a growing interest in a form of synthetic naturalism (Closed Worlds: The Rise and Fall of Dirty Physiology, 2015), in which the laws of nature and metabolism are displaced from the domain of wilderness to the domain of cities, buildings, and objects. With rising awareness of what John McHale called disturbances in the planetary reservoir (The Future of the Future, 1969), the field of ecological design has come to signify not only the integration of the designed object or space in the natural world, but also the reproduction of the natural world in design principles and tools through technological mediation. This idea of architecture and design producing nature paralleled what Buckminster Fuller, John McHale, and Ian McHarg, among others, referred to as world planning; that is, understanding ecological design as the design of the planet itself as much as the design of an object, building, or territory. Unlike van der Ryn and Cowan’s argumentation, which focused on a deep appreciation of nature’s equilibrium, ecological design might thus commence with the synthetic replication of natural systems.
These conflicting positions reflect only a small fraction of the ubiquitous terms used to describe the field of ecological design, including green, sustainable, alternative, resilient, self-sufficient, organic, and biotechnical. This article argues that ecological design starts with the reconceptualization of the world as a complex system of flows rather than a discrete compilation of objects, which the visual artist and theorist György Kepes described as one of the fundamental reorientations of the 20th century (Art and Ecological Consciousness, 1972).
Céline Granjou and Isabelle Arpin
The recent establishment of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) is a major milestone in the transformation of international environmental governance in the early 21st century. Often presented as “the IPCC (Intergovernmental Panel on Climate Change) for biodiversity,” the IPBES aims to produce regular expert assessments of the state and evolution of biodiversity and ecosystems at the local, regional, and global levels. Its creation was promoted in the 1990s by biodiversity scientists and NGOs who increasingly came to view the failure to achieve effective conservation of nature as a consequence of the gap between science and policy rather than of a lack of knowledge. The new institution embodies an approach to nature and nature conservation that results from the progressive evolution of international environmental governance, marked by the notion of ecosystem services (i.e., the idea that nature provides benefits to people and that nature conservation and human development should be thought of as mutually constitutive). The creation of the IPBES was entrusted to the United Nations Environment Programme (UNEP). Social environmental studies have examined the genesis and organization of the IPBES, paying special attention to the strong emphasis placed by IPBES participants on principles of openness and inclusivity and on the need to consider scientific knowledge and other forms of knowledge (e.g., traditional ecological knowledge) on an equal footing. Overall, the IPBES can be considered an innovative platform characterized by organizations and practices that foster inclusiveness and openness toward academic science and indigenous knowledge as well as toward diverse values and visions of nature and its relationship to society. However, the extent to which it has succeeded in putting different biodiversity values and knowledge on an equal footing in practice has varied and remains diversely appreciated in the literature.
Simon Holdaway and Rebecca Phillipps
Northeast Africa forms an interesting case study for investigating the relationship between changes in environment and agriculture. Major climatic changes in the early Holocene led to dramatic changes in the environment of the eastern Sahara and to the habitation of previously uninhabitable regions. Research programs in the eastern Sahara have uncovered a wealth of archaeological evidence for sustained occupation during the African Humid Period, from about 11,000 years ago. Initial studies of faunal remains seemed to indicate early shifts in economic practice toward cattle pastoralism. Although this interpretation was much debated when it was first proposed, the possibility of early pastoralism stimulated discussion concerning the relationships between people and animals in particular environmental contexts, and ultimately led to questions concerning the role of agriculture imported from elsewhere in contrast to local developments. Did agriculture, or indeed cultivation and domestication more generally (sensu Fuller & Hildebrand, 2013), develop in North Africa, or were the concepts and species imported from Southwest Asia? And if agriculture did spread from elsewhere, were just the plants and animals involved, or was the shift part of a full socioeconomic suite that included new subsistence strategies, settlement patterns, technologies, and an agricultural “culture”? And finally, was this shift, wherever and however it originated, related to changes in the environment during the early to mid-Holocene?
These questions refer to the “big ideas” that archaeologists explore, but before answers can be formed it is important to consider the nature of the material evidence on which they are based. Archaeologists must consider not only what they discover but also what might be missing. Materials from the past are preserved only in certain places, and of course some materials can be preserved better than others. In addition, people left behind the material remains of their activities, but in doing so they did not intend these remains to be an accurate historical record of their actions. Archaeologists need to consider how the remains found in one place may inform us about a range of activities that occurred elsewhere for which the evidence may be less abundant or missing. This is particularly true for Northeast Africa where environmental shifts and consequent changes in resource abundance often resulted in considerable mobility. This article considers the origins of agriculture in the region covering modern-day Egypt and Sudan, paying particular attention to the nature of the evidence from which inferences about past socioeconomies may be drawn.
Noa Kekuewa Lincoln and Peter Vitousek
Agriculture in Hawaiʻi developed in response to the high spatial heterogeneity of the archipelago’s climate and landscape, resulting in a broad range of agricultural strategies. Over time, highly intensive irrigated and rainfed systems emerged, supplemented by extensive use of more marginal lands, and together these supported considerable populations. Because the islands were colonized late, the pathways of agricultural development are fairly well reconstructed in Hawaiʻi. The earliest agricultural developments took advantage of highly fertile areas with abundant freshwater, using relatively simple techniques such as gardening and shifting cultivation. Over time, investments in land-based infrastructure led to the emergence of the irrigated pondfield agriculture also found elsewhere in Polynesia. This agricultural form was confined by climatic and geomorphological parameters and typically occurred in wetter, older landscapes that had developed deep river valleys and alluvial plains. Once initiated, these wetland systems saw regular, continuous development and redevelopment. As populations expanded into areas unable to support irrigated agriculture, highly diverse rainfed agricultural systems emerged that were adapted to local environmental and climatic variables. The development of simple infrastructure over vast areas created intensive rainfed agricultural systems that were unique in Polynesia. Intensification of rainfed agriculture was confined to areas of naturally occurring soil fertility, typically in drier, younger landscapes at the southern end of the archipelago. Both irrigated and rainfed agricultural areas were supplemented by strategies applied in surrounding lands, such as agroforestry, home gardens, and built soils. Differences in the yield, labor demands, surplus, and resilience of these agricultural forms helped shape differentiated political economies, hierarchies, and motivations that played a key role in the development of sociopolitical complexity in the islands.
Nations rapidly industrialized after World War II, sharply increasing the extraction of resources from the natural world. Colonial empires broke up on land after the war, but they were re-created in the oceans. The United States, Japan, and the Soviet Union, as well as Britain, Germany, and Spain, industrialized their fisheries, replacing fleets of small-scale, independent artisanal fishermen with fewer but much larger government-subsidized ships. Nations like South Korea and China, as well as the Eastern Bloc countries of Poland and Bulgaria, also began fishing on an almost unimaginable scale. Countries raced to find new stocks of fish to exploit. As the Cold War deepened, the industrialized fishing nations sought to negotiate fishery agreements with Third World countries. The conflict over territorial claims led to the development of the Law of the Sea process, starting in 1958, and to the adoption of 200-mile exclusive economic zones (EEZs) in the 1970s.
Fishing expanded with the understanding that fish stocks were robust and could withstand high harvest rates. The adoption of maximum sustained yield (MSY) after 1954 as the goal of postwar fishery negotiations assumed that fish populations produced a harvestable surplus and that scientists could determine how many fish could safely be caught. As fish stocks faltered under the onslaught of industrial fisheries, scientists reassessed their assumptions about how many fish could be caught, but MSY, although modified, remains at the heart of modern fisheries management.
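The logic behind MSY is commonly illustrated with the Schaefer surplus-production model, given here as a standard textbook sketch rather than a formulation taken from the article: a stock of biomass B grows logistically toward a carrying capacity K at intrinsic rate r while a catch C is removed,

\[ \frac{dB}{dt} = rB\left(1 - \frac{B}{K}\right) - C. \]

Surplus production, rB(1 - B/K), peaks when B = K/2, so the largest catch that can be taken indefinitely is MSY = rK/4. The postwar assumption was that r and K could be estimated reliably from catch statistics and that stocks could be held near K/2, an assumption the later collapses called into question.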
The Quaternary period of Earth history, which commenced ca. 2.6 million years ago, is noted for a series of dramatic shifts in global climate between long, cool (“icehouse”) and short, temperate (“greenhouse”) stages. The period also coincides with the extinction of the later australopithecine hominins and the evolution of modern Homo sapiens.
Wide recognition of a fourth, Quaternary, order of geologic time emerged in Europe between ca. 1760 and 1830 and became closely identified with the concept of an ice age. This most recent episode in Earth history is also the best preserved in stratigraphic and landscape records. Indeed, much of its character and many of its processes continue in the present, which prompted early geologists’ recognition of the concept of uniformitarianism—the present is the key to the past.
Quaternary time was quickly divided into a dominant Pleistocene (“most recent”) epoch, characterized by cyclical growth and decay of major continental ice sheets and peripheral permafrost. Disappearance of most of these ice sheets, except in Antarctica and Greenland today, ushered in the Holocene (“wholly recent”) epoch, once thought to terminate the Ice Age but now seen as the current interglacial or temperate stage, commencing ca. 11.7 ka ago. Covering 30–50% of Earth’s land surface at their maxima, ice sheets and permafrost squeezed the remaining biomes into a narrower circum-equatorial zone, where research indicated the former occurrence of pluvial and desiccation events. Early efforts to correlate these events with mid- and high-latitude glacials and interglacials revealed the complex and often asynchronous Pleistocene record.
Nineteenth-century recognition of just four glaciations reflected a reliance on geomorphology and short terrestrial stratigraphic records, concentrated in Northern Hemisphere mid- and high latitudes, and this view persisted until the 1970s. Correlation of oxygen isotope (δ18O) signals from seafloor sediments (recovered by ocean drilling programs after the 1960s) with polar ice core signals from the 1980s onward has revolutionized our understanding of the Quaternary, facilitating a sophisticated, time-constrained record of events and environmental reconstructions from regional to global scales. Records from oceans and ice sheets, some spanning 10⁵–10⁶ years, are augmented by similarly long records from loess, lake sediments, and speleothems (cave deposits). Their collective value is enhanced by innovative analytical and dating tools.
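For reference, the oxygen isotope signal referred to here is conventionally expressed in delta notation relative to a standard; this is the standard definition in the literature rather than one stated in this summary:

\[ \delta^{18}\mathrm{O} = \left( \frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{standard}}} - 1 \right) \times 1000, \]

expressed in parts per thousand (‰). In benthic (seafloor) records, higher δ18O values indicate greater global ice volume and colder deep-ocean temperatures, which is what allows the marine isotope stages discussed next to be read as a glacial-to-interglacial sequence.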
Over 100 Marine Isotope Stages (MIS) are now recognized in the Quaternary, together with dramatic climate shifts at decadal and centennial timescales; the magnitude of the 22 MIS of the past 900,000 years is considered to reflect significant ice sheet accumulation and decay. Each cycle between temperate and cool conditions (odd- and even-numbered MIS, respectively) is time-asymmetric, with progressive cooling over 80,000 to 100,000 years, followed by an abrupt termination and a rapid return to temperate conditions for a few thousand years.
The search for causes of Quaternary climate and environmental change embraces all strands of Earth System Science. Strong correlation between orbital forcing and major climate changes (summarized as the Milankovitch mechanism) is displacing earlier emphasis on radiative (direct solar) forcing, but uncertainty remains over how the orbital signal is amplified or modulated. Tectonic forcing (ocean-continent distributions, tectonic uplift, and volcanic outgassing), atmosphere-biogeochemical and greenhouse gas exchange, ocean-land surface albedo and deep- and surface-ocean circulation are all contenders and important agents in their own right.
Modern understanding of Quaternary environments and processes feeds an exponential growth of multidisciplinary research, numerical modeling, and applications. Climate modeling exploits the mutual benefits to science and society of “hindcasting,” using paleoclimate data both to aid understanding of the past and to increase confidence in model forecasts. The pursuit of a more detailed and sophisticated understanding of ocean-atmosphere-cryosphere-biosphere interaction proceeds apace.
The Quaternary is also the stage on which human evolution has played out. The essential distinction between natural climate variability and human forcing is now recognized as marking, in the present, a potential new Anthropocene epoch. The Quaternary past and present are major keys to its future.
Soils are the complex, dynamic, spatially diverse, living, and environmentally sensitive foundations of terrestrial ecosystems as well as of human civilizations. The modern, environmental study of soil is a truly young scientific discipline that emerged only in the late 19th century from foundations in agricultural chemistry, land resource mapping, and geology. Today, little more than a century later, soil science is a rigorously interdisciplinary field with a wide range of exciting applications in agronomy, ecology, environmental policy, geology, public health, and many other environmentally relevant disciplines. Soils form slowly, in response to five interrelated factors: climate, organisms, topography, parent material, and time. Consequently, many soils are chemically, biologically, and/or geologically unique. The profound importance of soil, combined with the threats of erosion, urban development, pollution, climate change, and other factors, is now prompting soil scientists to consider the application of endangered species concepts to rare or threatened soils around the world.