


Tiryns  

Joseph Maran

The strongly fortified acropolis of Mycenaean Tiryns is situated about 1.5 kilometres from the present coast of the Bay of Nauplion (but only about five hundred metres in the Early Bronze Age and one kilometre in the Late Bronze Age), where it perches on a narrow, rocky outcrop that reaches a height of up to twenty-eight metres above sea level (Fig. 1). The hill slopes from south to north, a topographic feature used during the Mycenaean period to create a division into an upper citadel, a middle citadel, and a lower citadel by demarcating the limits of the different parts of the hill with strong supporting walls. The acropolis was surrounded by an extensive settlement, the lower town, whose size during the different phases of occupation is still difficult to determine. Because of its impressive appearance, the identification of the site as ancient Tiryns was never disputed, and the site therefore attracted the attention of travellers and archaeologists very early on. The remains of the last Mycenaean palace on the upper citadel were largely uncovered in 1884 and 1885 by Heinrich Schliemann and Wilhelm Dörpfeld.


The Six Nara Schools  

Mikaël Bauer

During the Nara period (710–794), the Japanese religious and political landscape saw tremendous change. A new capital was built, Chinese legal codes were implemented, and Buddhist temples grew in size and number. Traditionally, Nara period Buddhism has been described in terms of the so-called “Six Schools.” It is certainly the case that the 8th century became the cornerstone of later doctrinal and ritual developments, but the Buddhist landscape of the period was more complex and intertwined than a neat division into six schools suggests.


The Development of Early Historic Urbanism in South Asia  

Reshma Sawant

Two phases of urbanism are identified in the South Asian context: the first is the Mature Harappan phase (c. 2500–1900 bce) and the second is the Early Historic phase (c. 600 bce–300 ce). The latter phase of urbanism has its roots in the preceding Protohistoric cultural phases. The gradual developments in various facets of society, such as polity, social setup, subsistence strategies, settlement size and hierarchy, crafts and industries, and trade and exchange, during the Neolithic-Chalcolithic (non-Harappan) and Iron Age phases appear to have culminated in Early Historic urbanism in South Asia. Scholarship on the subject has proposed various theories to explain the genesis of this second urbanism, including technologically deterministic explanations citing the introduction of iron into South Asia and the drastic changes it brought about between 1200 and 600 bce. Multivariate explanations identify technological advancements, technology-based diversification of activities, and the growing complexity of socioeconomic organizations as the causal factors behind Early Historic urbanism. As is evident in the archaeological context, the transformation of wider spatial urban morphology, characterized by differential velocity and magnitude, occurred at different times in different parts of South Asia. By c. 100–200 ce, however, most of South Asia had experienced the growth of urbanism. The process of Early Historic urbanism in South Asia between the 6th century bce and the 3rd century ce can be divided into three phases.

Phase 1: The period around the 6th century bce witnessed the emergence of the first urban polities in South Asia, known as Janapadas, organized under a ruling class of Janapadins. These Janapadas were ruled under twofold constitutions: Rajya (monarchical) and Gana or Sangha (non-monarchical). 
Among these polities, the four monarchies of Kosala, Vatsa, Magadha, and Avanti emerged as notable rivals contending for internal supremacy. By the 4th century bce, Magadha had emerged supreme. The period 600–300 bce is characterized by an early phase of fortification in South Asia involving mud and stone ramparts, and ditch or moat building at a few sites like Charsada, Kausambi, Rajghat, Rajagriha, Champa, Adam, and Ujjain. There is substantial evidence of civic planning in these settlements, such as the construction of streets, lanes, brick and ring wells, and drainage systems. There is also extensive evidence of burnt-brick structures, early coinage (bent bars, punch-marked coins [PMCs], and uninscribed cast copper coins), and script, apart from the widespread distribution of the identifying ceramic style: the Northern Black Polished Ware. It can be argued that these changes in socioeconomic conditions and urbanism may in fact have contributed to the formation and rise of institutional religious sects like Buddhism and Jainism.

Phase 2: This period of urbanism in early South Asia can be dated to between 300 and 100 bce, marked by the rise of the Mauryas. This stage was characterized by the steady expansion of trade with the western world, evidenced in the proliferation of Mauryan PMCs found all over South Asia, indicating the presence of vibrant political and economic interactions across the larger geographical region. The presence of Mauryan courtly culture and art is reflected in the technological sophistication of the polished surfaces of Asokan pillars and the various distinct animal capitals that may indicate Persian, Greek, and Achaemenid influence. The patronage that Buddhism gained among royalty, trading communities, and the masses is more than evident in the various donor inscriptions that can be seen at monuments like Sanchi. 
The rules regarding social status and the concept of wealth seem to have been liberal, with Buddhism providing much-needed impetus to long-distance trade through its encouragement of traders to undertake long journeys. The earliest script of South Asia is the Brahmi script, and the earliest acceptable evidence of Brahmi can be found in the Asokan inscriptions. However, in recent years new data have emerged from Peninsular India and Sri Lanka (from the sites of Porunthal, Vallam, Alagnkulam, Uraiyur, Karur, Kodumanal, and Anuradhapuram) indicating evidence of Brahmi script that can now be dated to between the 6th and 4th centuries bce.

Phase 3: The rise of the Kushanas, Sakas, Kshtrapas, Satavahanas, Cheras, Cholas, and Pandyas, and their active presence in South Asia from c. 100 bce to 300 ce, brought significant changes to the urban aspects of life. This period is characterized by extensive construction activity, complex burnt-brick buildings, well laid-out streets and drains, and fortification walls, as well as by the adoption of new techniques of tiled flooring and roofing, extensive coinage, and remarkable developments in art and architecture, knowledge production, and organized religion. Under the rule of the Kushanas and the Satavahanas, both hinterland and maritime trade networks grew manifold. Maritime trade with the Mediterranean and Southeast Asia is extensively evident in the archaeological record. Another commonality between the Kushanas and Satavahanas is their patronage of Buddhism, which resulted in the impressive development of art and architecture. The Gandhara and Mathura schools of art, the rock-cut Buddhist viharas in the western Deccan, and the construction of various stupas at Sanchi, Bharhut, Nagarjunakonda, Amaravati, and Kanaganahalli are all excellent examples of flourishing Buddhism under the Kushanas and Satavahanas. 
These impressive social and political complexities arose from the financial demands of maritime and overland trade, and were not necessarily the consequence of mere territorial expansion. To summarize, Early Historic urbanism in South Asia is manifested through complex polities that took the form of cities and states characterized by architectural advancement in both secular and non-secular structures, the use of baked bricks, and ring wells. Early Historic urbanism was also characterized by technological advancements in the form of various craft industries and the extensive use of metal (iron and copper), along with the development of a complex system of recording, measurement, and accounting, enabled by advances in script, coinage, astronomy, and mathematics. Long-distance trade led to the introduction and intensification of new religious movements (Buddhism and Jainism) that in turn contributed to the development of philosophy, art, and architecture, and, ultimately, to the rise of a ruling class.


Early Settlement in the High Andes  

Randall Haas

The Andean highlands of western South America span 7,000 km from the equator to Patagonia and reach altitudes of 7,000 m. Extreme cold, hypoxia, and low bioproductivity impose a distinct set of challenges for human survival and reproduction. This adaptive setting has inspired considerable archaeological and genetic research, which seeks to define the timing and nature of the adaptive process. Current evidence establishes that Paleoindian populations with fluted-point projectile technology first entered the highlands around 12.8 cal. ka. Paleoindian use of the highlands over the subsequent one thousand years appears to have been ephemeral. During the Early Archaic period (11.7–9.0 cal. ka), a culturally and genetically distinct population appears to have replaced the Paleoindian population in the highlands, where it eventually established year-round settlement systems. Early Archaic cultural adaptations included large-mammal hunting, tuber foraging, animal-hide technology, logistical mobility, and egalitarian social structure. Potential genetic adaptations include selection for respiratory and cardiovascular strength and enhanced starch-digestion capacity. Human adaptation to the Andean highlands thus appears to have been a multifaceted process that transpired over some four thousand years. Although current evidence favors a model of gradual adaptation, a more rapid adaptive process cannot yet be excluded. Paleoindian sites remain woefully sparse, which may indicate limited use or a sampling problem. And although recent genetic and isotopic analyses have been incisive, they are restricted to a few samples from relatively late contexts. Continued investigations at the intersection of traditional archaeological methods and new biomolecular methods are likely to resolve outstanding questions soon and create opportunities to explore more nuanced questions about the peopling of the high Andes.


Panskoye I  

Vladimir F. Stolba

Panskoye I is one of the most prominent and best-studied settlements in the rural territory of Chersonesus on the Tarkhankut Peninsula (north-western Crimea). Founded in the late 5th century bce as a fortified outpost (tetrapyrgia) protecting the south-eastern frontiers of Olbian territory, around 360 bce it came under the control of Tauric Chersonesus, a close relationship it maintained until the settlement’s catastrophic destruction around 270 bce. Between 1969 and 1994, a significant part of the settlement and its associated necropolis was investigated by the Tarkhankut Archaeological Expedition of the Leningrad Institute of Archaeology, Academy of Sciences of the USSR (since 1991, Institute for the History of Material Culture, Russian Academy of Sciences, St. Petersburg). The settlement’s stratigraphy and size, as well as its unique structure and layout, representing an agglomeration of compactly placed free-standing farmsteads, adjoining house blocks, and monumental buildings accommodating more than one household, distinguish it from other rural settlements in the area. Its rich and original material culture shows a remarkable intermingling of various cultural components, both Greek and non-Greek.


Modern Nepal  

Marie Lecomte-Tilouine

In the study of the modern period of Nepali history, history is considered here both as a narrative with its internal logic, notably the periodization of history produced by Nepali historians, and as a series of statements, events, regulations, etc., which are incorporated into this narrative. The periodization of history in Nepal establishes a direct and necessary link between modern Nepal and its national territory. Indeed, the beginning of the modern era is determined by the “unification” of the fifty independent kingdoms and tribal territories that gave birth to the national territory of Nepal during the second half of the 18th century. Such a correspondence makes modernity and the unified territory of Nepal coincide in a single space-time. Yet a closer examination of the logic behind this periodization sheds light on its Kathmandu-centric and dynastic perspective. This resulted in the formation of a hybrid conception of the national territory and of its center of power. From being the standard of the territory’s time and space, the Kathmandu Valley became the chronotope of the historical narrative dealing with the first half of the 19th century. It continued to form the territory’s remarkable center following the seizing of power by the Rana prime ministers (1846–1951), but now by assuming a futurist dimension, which, conversely, plunged the rest of the country back in time.


Geological, Paleoclimatological, and Archaeological History of the Baltic Sea Region Since the Last Glaciation  

Jan Harff, Hauke Jöns, and Alar Rosentau

The correlation between climate variability, environmental change (in particular the change of coastlines), and the development of human societies during the last millennia can be studied exemplarily in the Baltic area. The retreat of the Scandinavian ice sheet, vertical crustal movement (glacio-isostatic adjustment), climatically controlled sea-level rise, and a continuously warming atmosphere determined a dramatic competition between different environmental forcings as humans occupied the region step by step after the glaciation. These spatially and temporally changing living conditions required a stepwise adjustment of survival strategies. Changes in the natural environment can be reconstructed from sedimentary and biological proxy data and archaeological information. According to these reconstructions, the main shift in the Baltic area’s environment happened about 8,500 years before present (BP), when the Baltic Sea became permanently connected to the Atlantic Ocean via the Danish straits and the Sound, changing the environment from lacustrine to brackish-marine conditions. Human reaction to environmental changes in prehistoric times is mainly reconstructed from the remains of ancient settlements—onshore in the uplifting North and underwater in the South, which is dominated by sea-level rise. According to the available data, the human response to environmental change was mainly passive before the successful establishment of agriculture, but it became increasingly active after people settled down and the socioeconomic system changed from hunter-gatherer to farming communities. This change, mainly triggered by the climatic shift from the Holocene cool phase to the warming period, is clearly visible in Baltic basin sediment cores as a regime shift at about 6,000 years BP. 
But the archaeological findings show that this relatively abrupt environmental shift is reflected in the socioeconomic system by a period of transition during which hunter-gatherer and farming societies lived in parallel for several centuries. After the Holocene warming, the permanent regression in the northern Baltic Sea and the transgression in the south affected the socioeconomic response of the Baltic coastal societies, who migrated downslope at the regressive coast and upslope at the transgressive coast. The following cooling phases, in particular the Late Antique Little Ice Age (LALIA) and the Little Ice Age (LIA), are directly connected with migration and severe changes of the socioeconomic system. After millennia of passive reaction to climate and environmental changes, the Industrial Revolution finally enabled humans to actively influence and protect the environment, and in particular the Baltic Sea shore, through coastal constructions. On the other hand, this ability has also affected climate and environment negatively by disturbing the natural balance between climate, geosystem, and ecosystem.


Tupian Languages  

Wolf Dietrich

“Tupian” is a common term applied by linguists to a linguistic stock of seven families spread across great parts of South America. Tupian languages share a large number of structural and morphological similarities which make a genetic relationship very probable. Four families (Arikém, Mondé, Tuparí, and Raramarama-Poruborá) are still limited to the Madeira-Guaporé region in Brazil, considered by some scholars to be the Tupí homeland. Other families and branches would have migrated, in ancient times, down the Amazon (Mundurukú, Mawé) and up the Xingú River (Juruna, Awetí). Only the Tupí-Guaraní branch, which comprises about 40 living languages, mainly spread to the south. Two Tupí-Guaraní languages played an important part in the Portuguese and Spanish colonisation of South America: Tupinambá on the Brazilian coast and Guaraní in colonial Paraguay. In the early 21st century, Guaraní is spoken by more than six million non-Indian people in Paraguay and in adjacent parts of Argentina and Brazil. Tupí-Guaraní (TG) is an artificial term used by linguists to denominate the family composed of eight subgroups of languages, one of them being the Guaraní subgroup and another the extinct Tupinambá and its varieties. Important phonological characteristics of Tupian languages are nasality and the occurrence of a high central vowel /ɨ/, a glottal stop /ʔ/, and final consonants, especially plosives in coda position. Nasality seems to be a common characteristic of all branches of the family. Most of them show phenomena such as nasal harmony, also called nasal assimilation or regressive nasalization by some scholars. Tupian languages have a rich morphology expressed mainly by suffixes and prefixes, though particles are also important for expressing grammatical categories. Verbal morphology is characterized by generally rich devices of valence-changing formations. Relational inflection is one of the most striking phenomena of TG nominal phrases. 
It allows marking the determination of a noun by a preceding adjunct, its syntactic transformation into a nominal predicate, or the absence of any relation. Relational inflection also occurs partly in branches and families other than Tupí-Guaraní. Verbal person marking is realized by prefixing in most languages; some languages of the Tuparí and Juruna families, however, use only free pronouns. Tupian syntax is based on the predication of both verbs and nouns. Subordinate clauses, such as relative clauses, are produced by nominalization, while adverbial clauses are formed by specific particles or postpositions on the predicate. Traditional word order is SOV.


Generations and Political Engagement  

Miroslav Nemčok and Hanna Wass

The concept of “generation” constitutes a useful tool for understanding the world of politics. Trends in political behavior typical of the youngest generation are indicative of future developments. In a wider perspective, large differences between generations also reveal the potential for intergenerational conflict and a shift in the entire political paradigm. Four important topics need to be addressed in order to properly understand the body of research studying political behavior across generations and the use of generation as an analytical tool: (a) the conceptual definition of generation, (b) its distinction from other time-related concepts, (c) the methodological challenges in applying time-related factors in research, and (d) the wider implications of these factors for individuals’ political behavior that have already been identified in the scholarship. A political generation is formed among cohorts who experience the same event(s) during their formative years and are permanently influenced by them. Members of the same generation therefore share similar socialization experiences, which create a sense of group belonging and shape their attitudes and behavior throughout their lives. This definition of political generation is distinctive among the three time-related factors—age, period, and cohort—each of which has a well-grounded and distinctive theoretical underpinning. However, a truly insightful examination of time-related developments in political engagement needs to utilize hybrid models that interact age with period or cohort with period. This imposes a challenge known as the identification problem: age (years since birth), period (year), and cohort (year of birth) are perfect linear functions of each other, and therefore conventional statistical techniques cannot disentangle their effects. Despite extraordinary effort and outstanding ideas, this issue has not yet been resolved in a fully reliable and hence satisfactory manner. 
Regardless of methodological issues, the literature is already able to provide important findings resulting from cohort analysis of political engagement. This scholarship includes two major streams: The first focuses on voter turnout, exploring whether nonvoting among the youngest generation is a main reason for the turnout decline in contemporary democracies. The second stream examines the generational differences in political engagement and concludes that low electoral participation among the youngest generation may be explained by young people being more engaged with noninstitutionalized forms of political participation (e.g., occupations, petitions, protests, and online activism).
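The identification problem described above is purely algebraic: because age = period − cohort, a regression design containing all three terms (plus an intercept) is rank deficient. A minimal sketch in Python, using invented survey values purely for illustration:

```python
import numpy as np

# Hypothetical respondents: year of birth (cohort) and survey year (period).
cohort = np.array([1950.0, 1962.0, 1971.0, 1985.0, 1993.0])
period = np.array([2000.0, 2004.0, 2011.0, 2016.0, 2020.0])
age = period - cohort  # age is an exact linear function of the other two

# Design matrix: intercept plus all three time-related predictors.
X = np.column_stack([np.ones_like(age), age, period, cohort])

# Four columns, but only rank 3: the age column adds no new information,
# so no conventional estimator can recover separate age, period, and
# cohort effects from these predictors alone.
print(np.linalg.matrix_rank(X))  # prints 3
```

This is why, as the abstract notes, hybrid models and other workarounds are needed; the collinearity is exact, not merely approximate.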


Biblical Archaeology  

Aren Maeir

Biblical archaeology is defined as the study of the archaeological remains of the peoples, cultures, and periods in which the biblical texts were formed. While in the past biblical archaeology was often seen as an ideologically motivated field of inquiry, a balanced and scientifically advanced approach is now common among most practitioners. The large body of research in this field, continuing to the present, provides a broad range of finds, insights, and understanding of the relevant cultures, peoples, and periods. Modern biblical archaeology does not attempt to prove or disprove the Bible. Rather, archaeological study of the cultures in which the Bible was formed, or which are included in the Bible narratives, can provide a better understanding of the material and intellectual context of the biblical texts. The primary aim, however, is to study the archaeology of these regions, periods, and cultures associated with the Bible, the biblical interface being secondary. Biblical archaeology focuses primary attention on the regions and cultures of the Southern Levant, specifically the region of modern-day Israel, Palestine, Jordan, Lebanon, and southern Syria. Nearby regions such as Egypt, northern Syria, Mesopotamia, Anatolia, Cyprus, and the Aegean are within its scope of interest. The main chronological focus of biblical archaeology is the periods in which the actual biblical texts were formed and written down—the Iron Age, Persian period, and Hellenistic period for the Hebrew Bible.


Renaissance Literature and the Environment  

Todd Andrew Borlik

As the environmental humanities have gained traction, their practitioners have ventured beyond a predictable canon of modern nature writers and Romantic poets into earlier eras to better fathom the origins of our ecological predicament. It has become abundantly clear that the Renaissance (c. 1340–1660), often reframed as the early modern era (c. 1500–1800), marks a pivotal epoch in the history of the earth. Spurred by the rediscovery of classical learning to rival the grandeur of Ancient Rome and by Columbus’s plundering of the West Indies, European powers studied and exploited the environment with unprecedented zeal, while investing in resource extraction, overseas colonization, and technoscience. These developments left an indelible imprint on both the planet and the period’s literature. In tandem with the invention of landscape by Renaissance painters, writers in the generations between Francesco Petrarch and John Milton sought new ways—while reviving and adapting ancient ones—to capture the beauty, fragility, and animacy of the natural world. In the works of poets such as Torquato Tasso, Michael Drayton, and Mary Wroth, trees can bleed, rivers speak, and nightingales transform into violated maidens. As such conceits suggest, the prevailing views of nature can seem quaint or anthropomorphic by post-Enlightenment standards. Yet Renaissance literature has proven surprisingly responsive to ecocritical concerns, in part because it enables us to interrogate those standards. Bringing these concerns to bear on the era has revealed startling new facets of familiar texts, thrown more limelight on undersung authors, unsettled complacent assumptions in environmental history, and greatly enriched eco-theory. Nature has always been a site of ideological contestation, but the historical distance afforded by the Renaissance can bring this into shimmering focus. 
If the label “early modern” underscores the era’s continuity with the present and its foreshadowing of ecological issues and sensibilities, the somewhat old-fashioned label Renaissance reminds us to keep sight of its alterity and to view its literature as an archive of radically different attitudes, epistemologies, and material practices that might help us to better understand and combat environmental problems. The urgency of the climate crisis makes it imperative to trace or insinuate parallels between then and now, but newcomers to the field would also be well advised to acquaint themselves with the contours of early modern cultural and environmental history so as to undertake ecocritical interpretations responsibly without peddling anachronisms or reductive caricatures. Early modern worldviews can be both familiar and alien, and the era’s literature can jolt us into a greater awareness of these tensions. It is, for instance, ironic yet strangely apt that the same Francis Bacon reviled as an architect of the Anthropocene was one of the first to denounce the anthropocentric prejudice of the human sensorium and mind. Bacon also feared that language and an excessive reverence for the received knowledge of the past might warp our understanding of nature. Four centuries later, his words provide a cautionary reminder that we should not approach Renaissance literature as a repository of timeless, universal truths. Rather, insofar as studying Renaissance literature enables us to see beyond the shibboleths of our own culture and historical moment, it offers valuable cognitive training that might help us recognize and overcome species bias.


Historical Views of Homosexuality: European Renaissance and Enlightenment  

Gary Ferguson

Spanning the Renaissance and the Enlightenment—the 15th/16th to the 18th centuries—the early modern period in Europe sees a fundamental evolution in relation to the conception and expression of same-sex desire. The gradual emergence of a marginalized homosexual identity, both individual and collective, accompanies a profound transformation in the understanding of the sexed body: the consolidation of two separate and “opposite” sexes, which sustain physiologically grounded sexual and gender roles. This new paradigm contrasts with an earlier one in which masculinity and femininity might be seen as representing points on a spectrum, and same-sex desire, perceived as potentially concerning all men and women, was not assimilable to a permanent characteristic excluding desire for and relations with members of the other sex. These developments, however, happened gradually and unevenly. The period is therefore characterized by differing models of homosexual desire and practices—majoritizing and minoritizing—that coexist in multiple and shifting configurations. The challenge for historians is to describe these in their full complexity, taking account of geographic variations and of both differences and continuities over time—between the beginning of the period and its end, between different points within it, and between early modernity and the present or the more recent past. The tension between similarity, identity, and the endurance of categories, on the one hand, and alterity, incommensurability, and rupture, on the other hand, defies dichotomous thinking that would see them as opposites, and favor one to the exclusion of the other. In making such comparative studies, we would no doubt do well to think not in singular but in plural terms, that is, of homosexualities in history.


Climate Change and Glacier Reaction in the European Alps  

Wolfgang Schöner

Glaciers are probably the most obvious features of Earth’s changing climate. They enable one to see the effects of a warming or cooling atmosphere through landscape changes on time scales short enough to be perceived by humans. However, the relationship between a retreating or advancing glacier and the climate is not linear, as glacier flow can filter the direct signal of the climate. Thus, glaciers can advance during periods of warming or, vice versa, retreat during periods of cooling. In fact, it is the mass change of the glacier (i.e., the mass balance) that directly links the glacier reaction to an atmospheric signal. The mechanism-based understanding of the relationship between the changing climate and glacier reaction received important momentum from research in the Alpine region. This strong momentum from the Alps has to do with the well-established scientific tradition in Europe in the 19th and early 20th centuries, which resulted in a series of important inventions for measuring climate and glacier properties. Even at that time, knowledge was gained that is still valid in the early 21st century (e.g., the climate is changing and fluctuating; glacier changes are caused by changing climate; and the ice age was the result of a shifting climate). Above all others, Albrecht Penck and Eduard Brückner were the key scientists in this blossoming era of glacier climatology. Interest in a better understanding of the relationship of climate to glaciers was driven not only by curiosity, but also by the several impacts of glaciers on human life in the Alps. Investigations of climate–glacier relationships in the Alps began with the end of the Little Ice Age (LIA), when glaciers were particularly large but began to retreat significantly. 
Observations of post-LIA glacier front positions show a sharp decline after the maximum extent of about 1850 until the turn of the 19th to 20th centuries, when glaciers began to grow and advance again, forming a prominent moraine around 1920 that was, however, far behind the 1850 extent. Interestingly, climate time series of the post-LIA period show a general long-term cooling of summer temperatures and several decades of precipitation deficit in the second half of the 19th century. Thus, the retreat forced by climate changes cannot be explained simply by increasing air temperatures, though calibrated glacier mass balance models are able to simulate this period quite well; additional effects related to albedo could be a source for a better understanding. From 1920 onward, the climate moved into a period of warm, high-sunshine summers, which peaked from the 1940s until 1950. Glaciers again started to melt strongly, and the related discharges of pro-glacial rivers were exceptionally high during this period, as glaciers were still quite large and the available energy for melt from radiation was enhanced. With the shift of the Atlantic Multidecadal Oscillation (AMO), an important driver of European climate, into a negative mode in the 1960s, Alpine glaciers experienced more and more positive mass balance years. This finally resulted in a period of advancing glaciers and the development of frontal moraines around 1980 for a large number of glaciers. Thereafter, from 1980 onward, Alpine glaciers moved into an era of continuously negative mass balances and particularly strong retreat. The anthropogenic forcing from greenhouse gases, together with global brightening and the increase of anticyclonic weather types in summer, moved the climate, and thus the mass balances of glaciers, into a state far from equilibrium. 
Given available scenarios of future climate, this retreat will continue, and even under the optimistic RCP2.6 scenario, glaciers (as derived from model simulations for the future) will not return to an equilibrium mass balance before the end of the 21st century. According to a glacier inventory for the European Alps derived from Landsat Thematic Mapper scenes of 2003, published by Paul and coworkers in 2011, the total surface of all glaciers and ice patches in the European Alps in 2003 was 2,056 km² (50% in Switzerland, 19% in Italy, 18% in Austria, 13% in France, and <1% in Germany). Generally, the reaction of Alpine glaciers to climate perturbations is rather well understood. For the glaciers of the Alps, the important processes of glacier change are related to the surface energy balance during the ablation season, when radiation is the primary source of energy for snow and ice melt. Other ablation processes, such as sublimation and internal and basal ablation, are small compared to surface melt. This specificity enables simple temperature-based models to simulate the mass balance of glaciers sufficiently well. Besides atmospheric forcing of glacier mass balance, glacier flow (which is related to the englacial temperature distribution) plays a role, in particular for observed changes in glacier front positions. Glaciers continuously adapt their size to the climate, an adjustment that proceeds much faster for small glaciers than for the large valley glaciers of the Alps, which have response times of about 100 years.
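The temperature-based (degree-day) approach mentioned above can be sketched in a few lines. This is a generic illustration, not one of the calibrated models referred to in the text; the degree-day factor and snow threshold are invented, order-of-magnitude values:

```python
def degree_day_mass_balance(daily_temp_c, daily_precip_mm,
                            ddf_mm_per_degday=4.0,
                            snow_threshold_c=1.0):
    """Annual surface mass balance (mm water equivalent) from daily
    temperature (degrees C) and precipitation (mm) at the glacier surface.

    Melt is proportional to positive degree-days via the degree-day
    factor; accumulation is precipitation falling on days colder than
    the snow threshold. Both parameters are illustrative, not calibrated.
    """
    melt = sum(ddf_mm_per_degday * max(t, 0.0) for t in daily_temp_c)
    accumulation = sum(p for t, p in zip(daily_temp_c, daily_precip_mm)
                       if t < snow_threshold_c)
    return accumulation - melt
```

A cold, snow-rich year yields a positive balance and a warm, dry one a negative balance; real applications calibrate the degree-day factor against stake measurements and resolve the glacier by elevation bands.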


Quaternary Climate Variation in West Africa  

Timothy M. Shanahan

West Africa is among the most populated regions of the world, and it is predicted to continue to have one of the fastest-growing populations in the first half of the 21st century. More than 35% of its GDP comes from agricultural production, and a large fraction of the population faces chronic hunger and malnutrition. Its dependence on rainfed agriculture is compounded by extreme variations in rainfall, including both droughts and floods, which appear to have become more frequent. As a result, it is considered a region highly vulnerable to future climate changes. At the same time, CMIP5 model projections for the next century show a large spread in precipitation estimates for West Africa, making it impossible to predict even the direction of future precipitation changes for this region. To improve predictions of future changes in the climate of West Africa, a better understanding of past changes, and their causes, is needed. Long climate and vegetation reconstructions, extending back to 5–8 Ma, demonstrate that changes in the climate of West Africa are paced by variations in the Earth’s orbit, and point to a direct influence of changes in low-latitude seasonal insolation on monsoon strength. However, the controls on West African precipitation reflect the influence of a complex set of forcing mechanisms, which can differ regionally in their importance, especially when insolation forcing is weak. During glacial intervals, when insolation changes are muted, millennial-scale dry events occur across North Africa in response to reorganizations of the Atlantic circulation associated with high-latitude climate changes. On centennial timescales, a similar response is evident, with cold conditions during the Little Ice Age associated with a weaker monsoon, and warm conditions during the Medieval Climate Anomaly associated with wetter conditions. Land surface properties play an important role in enhancing changes in the monsoon through positive feedback. 
In some cases, such as the mid-Holocene, the feedback led to abrupt changes in the monsoon, but the response is complex and spatially heterogeneous. Despite advances made in recent years, our understanding of West African monsoon variability remains limited by the dearth of continuous, high-resolution, and quantitative proxy reconstructions, particularly from terrestrial sites.


Climatic Changes and Cultural Responses During the African Humid Period Recorded in Multi-Proxy Data  

David McGee and Peter B. deMenocal

The expansion and intensification of summer monsoon precipitation in North and East Africa during the African Humid Period (AHP; c. 15,000–5,000 years before present) is recorded by a wide range of natural archives, including lake and marine sediments, animal and plant remains, and human archaeological remnants. Collectively, this diverse proxy evidence provides a detailed portrait of environmental changes during the AHP, illuminating the mechanisms, temporal and spatial evolution, and cultural impacts of this remarkable period of monsoon expansion across the vast expanse of North and East Africa. The AHP corresponds to a period of high local summer insolation due to orbital precession that peaked at ~11–10 ka, and it is the most recent of many such precessionally paced pluvial periods over the last several million years. Low-latitude sites in the North African tropics and Sahel record an intensification of summer monsoon precipitation at ~15 ka, associated with both rising summer insolation and an abrupt warming of the high northern latitudes at this time. Following a weakening of monsoon strength during the Younger Dryas cold period (12.9–11.7 ka), proxy data point to peak intensification of the West African monsoon between 10 and 8 ka. These data document lake and wetland expansions throughout almost all of North Africa; expansion of grasslands, shrubs, and even some tropical trees throughout much of the Sahara; increases in Nile and Niger River runoff; and proliferation of human settlements across the modern Sahara. The AHP was also marked by a pronounced reduction in windblown mineral dust emissions from the Sahara. Proxy data suggest a time-transgressive end of the AHP: sites in the northern and eastern Sahara became arid after 8–7 ka, while sites closer to the equator became arid later, between 5 and 3 ka. 
Locally abrupt drops in precipitation or monsoon strength appear to have been superimposed on this gradual, insolation-paced decline, with several sites to the north and east of the modern arid/semi-arid boundary showing evidence of century-scale shifts to drier conditions around 5 ka. This abrupt drying appears synchronous with rapid depopulation of the North African interior and an increase in settlement along the Nile River, suggesting a relationship between the end of the AHP and the establishment of proto-pharaonic culture. Proxy data from the AHP provide an important testing ground for model simulations of mid-Holocene climate. Comparisons with proxy-based precipitation estimates have long indicated that mid-Holocene simulations by general circulation models substantially underestimate the documented expansion of the West African monsoon during the AHP. Proxy data point to potential feedbacks that may have played key roles in amplifying monsoon expansion during the AHP, including changes in vegetation cover, lake surface area, and mineral dust loading. This article also highlights key areas for future research. Among these are the role of land surface and mineral aerosol changes in amplifying West African monsoon variability; the nature and drivers of monsoon variability during the AHP; the response of human populations to the end of the AHP; and understanding locally abrupt drying at the end of the AHP.


Vegetation at the Time of the African Humid Period  

Anne-Marie Lézine

An orbitally induced increase in summer insolation during the last glacial–interglacial transition enhanced the thermal contrast between land and sea, with land masses heating up relative to the adjacent ocean surface. In North Africa, warmer land surfaces created a low-pressure zone, driving the northward penetration of monsoonal rains originating from the Atlantic Ocean. As a consequence, regions today among the driest in the world were covered by permanent and deep freshwater lakes, some of them exceptionally large, such as “Mega” Lake Chad, which covered some 400,000 square kilometers. A dense network of rivers developed. What were the consequences of this climate change for plant distribution and biodiversity? Pollen grains that accumulated over time in lake sediments are useful tools for reconstructing past vegetation assemblages, since they are extremely resistant to decay and are produced in great quantities. In addition, their morphological characteristics allow the identification of most plant families and genera. In response to the postglacial humidity increase, tropical taxa that had survived as strongly reduced populations during the last glacial period spread widely, shifting latitudes or elevations, expanding population size, or both. In the Sahara desert, pollen of tropical trees (e.g., Celtis) was found at sites located as far north as 25°N in southern Libya. In the equatorial mountains, trees (e.g., Olea and Podocarpus) migrated to higher elevations to form the present-day Afro-montane forests. Patterns of migration were individualistic, with the entire range of some taxa displaced to higher latitudes or shifted from one elevation belt to another. New combinations of climatic and environmental conditions allowed the co-occurrence of taxa that today grow in separate regions. 
Such migrational processes and overlapping species ranges led to a tremendous increase in biodiversity, particularly in the Sahara, where taxa adapted to more humid conditions expanded along watercourses, lakes, and wetlands, whereas xerophytic populations persisted in drier areas. At the end of the Holocene, some 4,500 to 2,500 years ago, the majority of sites in tropical Africa recorded a shift to drier conditions, with many lakes and wetlands drying out. The vegetation response to this shift was the overall disruption of the forests and the wide expansion of open landscapes (wooded grasslands, grasslands, and steppes). This environmental crisis created favorable conditions for further plant exploitation and cereal cultivation in the Congo Basin.


Theory and Modeling of the African Humid Period and the Green Sahara  

Martin Claussen, Anne Dallmeyer, and Jürgen Bader

There is ample evidence from palaeobotanic and palaeoclimatic reconstructions that during the early and mid-Holocene, between some 11,700 years ago (in some regions, a few thousand years earlier) and some 4,200 years ago, subtropical North Africa was much more humid and greener than today. This African Humid Period (AHP) was triggered by changes in the orbital forcing, with climatic precession as the dominant pacemaker. Climate system modeling in the 1990s revealed that orbital forcing alone cannot explain the large changes in the North African summer monsoon and subsequent ecosystem changes in the Sahara. Feedbacks between atmosphere, land surface, and ocean were shown to strongly amplify monsoon and vegetation changes. Forcing and feedbacks have caused changes far larger in amplitude and extent than anything experienced in the Sahara and Sahel today. Most, if not all, climate system models, however, tend to underestimate the amplitude of past African monsoon changes and the extent of the land-surface changes in the Sahara. Hence, it seems plausible that some feedback processes are not properly described, or are even missing, in the climate system models. Perhaps even more challenging than explaining the existence of the AHP and the Green Sahara is the interpretation of data that reveal an abrupt termination of the last AHP. Based on climate system modeling and theoretical considerations in the late 1990s, it was proposed that the AHP could have ended, and the Sahara could have expanded, within just a few centuries—that is, much faster than the change in orbital forcing. In 2000, paleo records of terrestrial dust deposition off Mauritania seemingly corroborated the prediction of an abrupt termination. However, with the uncovering of more paleo data, considerable controversy has arisen over the geological evidence of abrupt climate and ecosystem changes. Some records clearly show abrupt changes in some climate and terrestrial parameters, while others do not. 
Climate system modeling, too, provides an ambiguous picture. The prediction of abrupt climate and ecosystem changes at the end of the AHP is hampered by limitations implicit in the climate system. Because of the ubiquitous climate variability, it is extremely unlikely that individual paleo records and model simulations completely match. They could do so in a statistical sense, that is, if the statistics of a large ensemble of paleo data and of model simulations converge. Likewise, inferring the strength of terrestrial feedback from individual records is elusive. Plant diversity, rarely captured in climate system models, can obliterate any abrupt shift between green and desert states. Hence, the strength of climate–vegetation feedback is probably not a universal property of a certain region but depends on the vegetation composition, which can change with time. Because of the spatial heterogeneity of the African landscape and the African monsoon circulation, abrupt changes can occur in several, but not all, regions at different times during the transition from the humid mid-Holocene climate to the more arid present-day climate. Abrupt changes in one region can be induced by abrupt changes in other regions, a process sometimes referred to as “induced tipping.” The African monsoon system seems to be prone to fast and potentially abrupt changes, and understanding and predicting them remains one of the grand challenges of African climate science.
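The bistability behind the abrupt-termination argument can be illustrated with a toy climate–vegetation feedback loop. This is a deliberately minimal sketch in the spirit of conceptual models, not any model discussed in the article, and all numbers are invented: vegetation cover responds sigmoidally to annual precipitation, while vegetation in turn adds recycled moisture to the orbitally controlled background precipitation.

```python
import math

def equilibrium_vegetation(p_background_mm, v_init,
                           feedback_mm=100.0, p_half_mm=300.0,
                           width_mm=15.0, n_iter=200):
    """Iterate a toy vegetation-precipitation feedback to equilibrium.

    v is the vegetation fraction (0 = desert, 1 = fully green);
    precipitation is the background value plus feedback_mm * v,
    and vegetation relaxes onto a sigmoid of that precipitation.
    """
    v = v_init
    for _ in range(n_iter):
        p = p_background_mm + feedback_mm * v
        v = 1.0 / (1.0 + math.exp(-(p - p_half_mm) / width_mm))
    return v
```

With a strong feedback, a green and a desert equilibrium coexist for the same background precipitation (the initial state decides which one is reached), and a modest drop in background precipitation removes the green equilibrium entirely, so the collapse is abrupt even though the forcing changes gradually.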


Scientific Dating Methods in African Archaeology  

Vincent J. Hare and Emma Loftus

The African archaeological record is particularly remarkable in that it covers timescales relevant to all human history and prehistory. Different dating techniques are therefore fundamental to constructing reliable chronologies for the continent. The principal factors that determine the usefulness of a dating technique are (1) its applicability to the material in question, (2) the expected precision of the technique, and (3) the age range over which it is expected to be useful. Radiocarbon is applicable to the past fifty thousand years of human history, encompassing the Later Stone Age, Iron Age, and historical periods, and is a highly refined method applicable to organic materials such as bones, plant matter, charcoal, teeth, and sometimes eggshell. However, African archaeological contexts often present challenges to the preservation of material, and it is important to establish the context of the material under investigation. Materials of preference for radiocarbon dating, such as plant cellulose, are thought to be resistant to alteration during burial (diagenesis). The age ranges of luminescence and uranium-series dating stretch well into the African Middle Stone Age. Luminescence dating is applied to sediments and burnt objects, and uranium-series (U-series) dating is applied to geological materials such as carbonates and stalagmites. In some special cases, U-series dating can also be applied to fossil bones, teeth, and eggshell. For all dating methods, the importance of context cannot be overstated. Other techniques, such as archaeomagnetic dating and rehydroxylation (RHX) dating, should be applicable over the historical period, but these newer methods are still under development. Dating methods are an active area of interdisciplinary research, continuously refined and developed, and collaboration between African archaeologists, geologists, and dating specialists is important to establish accurate regional chronologies.
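A rough illustration of why radiocarbon reaches back about fifty thousand years: a conventional radiocarbon age follows directly from the exponential decay of 14C, using the conventional Libby mean life of 8,033 years. This is the standard textbook relation, not a method description from the article:

```python
import math

LIBBY_MEAN_LIFE_YR = 8033.0  # conventional value (Libby half-life of 5,568 yr)

def conventional_radiocarbon_age(f14c):
    """Conventional radiocarbon age in years BP from F14C, the
    sample's 14C activity normalized to the modern standard."""
    if not 0.0 < f14c <= 1.0:
        raise ValueError("F14C must lie in (0, 1] for a finite age")
    return -LIBBY_MEAN_LIFE_YR * math.log(f14c)
```

Half the modern activity gives an age of about 5,568 years, while an activity of 0.2% of modern, near the practical detection limit, already corresponds to roughly 50,000 years, which is why older material lies beyond the reach of the method.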


Japan and the Ainu in the Early Modern Period  

Noémi Godefroy

The Ainu are an indigenous people of northeast Asia, and their lands encompassed what are now known as the north of Honshu, Hokkaido, the Kuril archipelago, southern Sakhalin, the southernmost tip of Kamchatka, and the Amur River estuary region. As such, Ainu space was a maritime one, linking the Pacific, the Sea of Okhotsk, and the Sea of Japan, and the Ainu settlements were dynamic actors in various maritime trade networks. Hence, they actively traded with other peoples, including the Japanese, from an early stage. Over thousands of years, relations between Japan and the Ainu evolved in an ever-tightening way. These relations can be read in diplomatic or political terms, but also, and perhaps even more so, in economic, spatial, and environmental terms, as Japan’s relationship with the Ainu people is deeply rooted in its relationship to Ainu goods, lands, and resources. Furthermore, Ainu songs reveal the importance of the charismatic trade with Japan in the shaping of Ainu society and worldview. From the 17th century, the initial, relative reciprocity of Ainu-Japanese relations became increasingly unbalanced, as the Tokugawa shoguns’ domestic productivity and foreign trade came to hinge upon Ainu labor, central to the transformation of northern marine products. During the 18th century, overlapping authorities and conflicting interests on both sides of the ethnic divide led to an inextricable web of mutual interdependencies, which all but snapped as the northeastern region of the Ainu lands became the convergence point of Japanese, Russian, and European interests. The need to establish clear regional sovereignty, reap regional economic benefits directly, and prevent Ainu unrest led the shogunate to progressively establish direct control over the Ainu lands from the dawn of the 19th century. 
Although shogunate control did not amount to a full-fledged colonial enterprise per se, from the advent of the Meiji era the Ainu lands were annexed and their inhabitants subjected to colonial measures of assimilation, cultural suppression, and forced agricultural redeployment on the one hand, and dichotomization and exhibition on the other, before they all but disappeared from public discourse by the end of the 1930s. Since the 1990s, within a global context of emerging indigenous and minority voices, Ainu individuals, groups, and movements have striven for discursive reappropriation and political representation, and recent years have seen them recognized as a minority group in Japan. Given past and ongoing tensions between Russia and Japan over sovereignty in the southern Kurils, and the future opening of the Arctic route between the Atlantic and Pacific Oceans, the Ainu could play an international role in both diplomatic and environmental terms.


Plasticity of Information Processing in the Auditory System  

Andrew J. King

Information processing in the auditory system shows considerable adaptive plasticity across different timescales. This ranges from very rapid changes in neuronal response properties—on the order of hundreds of milliseconds when the statistics of sounds vary or seconds to minutes when their behavioral relevance is altered—to more gradual changes that are shaped by experience and learning. Many aspects of auditory processing and perception are sculpted by sensory experience during sensitive or critical periods of development. This developmental plasticity underpins the acquisition of language and musical skills, matches neural representations in the brain to the statistics of the acoustic environment, and enables the neural circuits underlying the ability to localize sound to be calibrated by the acoustic consequences of growth-related changes in the anatomy of the body. Although the length of these critical periods depends on the aspect of auditory processing under consideration, varies across species and brain level, and may be extended by experience and other factors, it is generally accepted that the potential for plasticity declines with age. Nevertheless, a substantial degree of plasticity is exhibited in adulthood. This is important for the acquisition of new perceptual skills; facilitates improvements in the detection or discrimination of fine differences in sound properties; and enables the brain to compensate for changes in inputs, including those resulting from hearing loss. In contrast to the plasticity that shapes the developing brain, perceptual learning normally requires the sound attribute in question to be behaviorally relevant and is driven by practice or training on specific tasks. 
Progress has recently been made in identifying the brain circuits involved and the role of neuromodulators in controlling plasticity, and an understanding of plasticity in the central auditory system is playing an increasingly important role in the treatment of hearing disorders.