This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Anthropology. Please check back later for the full article.
Anthropology is often understood to be primarily an academic undertaking. A typical first exposure to the discipline occurs through undergraduate coursework, where the anthropologists that students know tend to be their professors. Without anthropologists in business, government, and nonprofit (BGN) fields to serve as role models, students may come to believe that if they choose not to pursue graduate study and academic employment, their interest in anthropology must give way to something more career related.
At the same time, there exists a vibrant community of “practicing,” “professional,” “public,” and “applied” anthropologists employed in a variety of non-academic settings. Anthropological skills and perspectives are of use to many BGN employers, and in a few industries, the value of anthropology is generally accepted: historic preservation, public health, and user experience research are prominent examples. The relationship between academia and professional practice is sometimes difficult, however, as some practitioners feel stigmatized or excluded by academics, while others inhabit professional spaces where academic anthropology is largely irrelevant.
While anthropologists often speak of a “divide” or “split” between academic and practicing anthropology, this view overlooks the fact that much work in applied anthropology maintains a presence in both higher education and BGN institutions. Not only do projects often involve collaboration among team members with diverse careers, but individual anthropologists may simultaneously maintain both academic and non-academic affiliations or move between professional spheres over the course of their career. While the social pressures to attend graduate school and seek traditional faculty jobs are real, anthropologists have responded to them in a variety of ways, and observers must account for all contexts of practice in order to reach a full understanding of the profession.
The Early Middle Stone Age (EMSA) encompasses, in broad terms, the time period between 300,000 and 130,000 years ago. This is a crucial phase in the history of Homo sapiens, as genetic and fossil evidence increasingly indicates that some of the roots of humanity may be traced to this time. The development of modern human anatomy was an extended process that involved a gradual enlargement of the brain and a change in the shape of the brain case toward its current globular form. By 300,000 years ago brains had already reached their relatively large size, but the shape of the brain case evolved more gradually. The fossil evidence from South Africa for this time period is sparse, but the 260,000-year-old Homo helmei partial skull from Florisbad is especially significant for understanding modern human origins. Although the development of an extensive and detailed chronological and regional framework is still in progress, it seems that most of the earlier phases of the Middle Stone Age played out against the backdrop of the South African interior. From as early as 400,000 years ago, this region contained many water-rich areas supporting highly productive ecosystems of open grassland and wetland inhabited by Florisian fauna. The earliest Middle Stone Age sites occur in the interior and include Haaskraal, Florisbad, Wonderwerk Cave, Cave of Hearths, Bushman Rock Shelter, and Border Cave. Lithic assemblages from a number of these sites have been described as part of the early Pietersburg technocomplex, which is characterized by a preference for fine-grained raw materials such as hornfels for producing long blades and elongated unifacial and bifacial points. In these and other early Middle Stone Age assemblages, prepared core technology was already firmly established.
This technology entailed careful and extensive planning to shape stone nodules appropriately so that pre-formed blanks such as blades, points, and flakes could be knapped to specific parameters. Such pieces were hafted onto handles to hunt and process large bovids and other fauna. The extensive cognitive operations involved in producing EMSA lithic artifacts and hafted projectile weapons are also evident in pigment processing and reflect an evolutionary amplification of procedural and working-memory capabilities.
Chapurukha M. Kusimba
How, and in what ways, did socially complex societies emerge in Eastern and Southern Africa? Regional scholarship has shown that elite investment in long-distance trade, investment in extractive technologies, monopolization of wealth-creating resources, and warfare may have played key roles in the emergence of early states. The debate on the evolution of social complexity has focused on trade versus militarism as key sources of political power for African elites. To what extent was elite and non-elite engagement in local, regional, and trans-continental economic networks crucial to the development of social complexity in Eastern and Southern Africa? Extensive research on the eastern coast of Africa (Kenya and Tanzania) and in Southern Africa (Zimbabwe, Botswana, and South Africa) has yielded enough data to support a discussion of the trajectories of the evolution of social complexity and the state. So far, three crucial factors—trade, investment in extractive technologies, and elite monopolization of wealth-creating resources—stand out as having coalesced to propel the region towards greater interaction and complexity. Major transformations in the form of increases in household size, clear differences in wealth and status, and settlement hierarchies occurred toward the end of the first millennium. Regional scholarship posits that elite control of internal and external trade infrastructure, restricted access to arable land, accumulation of surplus land, manipulation of religious ideology, and exploitation of ecological crises were among the major factors that contributed to the rise of the state. Could these factors also have favored investment in, and the use of, organized violence to gain and monopolize access to fertile grazing lands, water, and mineral resources, and to provide security along the trade routes, including the Zambezi, Savi, Limpopo, Rufiji, Tana, and Webe Shebelle?
Scholarship in the 21st century favors the notion that opportunistic use of ideological and ritual power enabled a small elite, initially composed of elders and of ritual and technical specialists, to control the regional political economy and information flows. These transformations occurred continent-wide and date to the last three centuries of the first millennium. By all measures, the evidence points to wealth accumulation through trade, tribute, investment in agrarianism and pastoralism, and mining.
The Stone Age record in eastern Africa appears to be longer and better documented than that of any other region worldwide. Rich archaeological and fossil evidence derives particularly from sites within the region's Rift Valley, often with secure radiometric age estimates. Despite a relatively late start and a disproportionate focus on earlier time periods and on open-air sites within the rift, scientific research into the region's Stone Age record continues to play a central role in our understanding of human evolution.
Putative stone tools and modified bones from two Late Pliocene (3.6–2.58 million years ago, or Ma) contexts are exclusive to eastern Africa, as is the earliest conclusive evidence for both, which appears around the Plio-Pleistocene boundary (2.6–2.5 Ma). The earliest indisputable technological traces take the form of simple flakes and core tools, as well as surface-modified bones. It is not clear what triggered this invention, or whether hominins with this technology hunted or only scavenged carcasses. Nor is it certain whether late australopithecines made and used stone tools. Archaeological occurrences predating ~2 Ma are limited to sites in Ethiopia and Kenya, becoming more common afterwards across eastern Africa and beyond.
By 1.75 Ma, lithic technologies that included heavy-duty and large cutting tools appeared at two sites, in Ethiopia and Kenya. Details about these larger and more diverse stone tool forms are still inadequately understood, although their appearance in eastern Africa roughly coincides with that of Homo erectus. These technologies represent by far the longest-lived Stone Age tradition, enduring ~1.6 million years. Hominins with these technologies successfully inhabited high-altitude (>2,300 m above sea level) environments starting ~1.5 Ma and expanded within and beyond the region starting even earlier.
Small-sized and highly diverse tool forms gradually and variably began to replace heavy-duty and large cutting tools from ~300 thousand years ago (ka). Conventional wisdom associates this highly variable shift in toolkit with the evolution of Homo sapiens, although the oldest undisputed representatives of our species continued to make and use large cutting tools in eastern Africa well after 200 ka. In addition to the dominance of small retouched tools, such as pointed pieces, scrapers, and blades, significant innovations such as hafting and ranged weaponry emerged over the course of this technological tradition. Increasingly complex socio-cultural behaviors, including mortuary practices, mark the later part of this period in eastern Africa. The consolidation of such technological and socio-cultural skills, as well as environmental and demographic dynamics, may have enabled the hypothesized, ultimately decisive out-of-Africa dispersal of our species from eastern Africa ~50–80 ka.
Even smaller and more diverse stone tool forms and other socio-cultural innovations evolved in many areas of eastern Africa by ~50 ka. Miniaturization and diversification allowed the adoption of different complex technologies, including tools intentionally partially dulled and other microlithic tool forms used as parts of sophisticated composite implements, such as the bow and arrow. Complex behaviors involving personal ornamentation, symbolism, and rituals that resembled the lifeways of ethnographically known hunter-gatherer populations were similarly adopted, although relatively later than in northern and southern Africa. These led eventually to new technological and economic developments marked by the inception of agriculture and attendant lifeways.
Catherine Alexander and Josh Reno
The landscape of global economies of recycling has changed rapidly over the early 21st century. Increasingly, policy, economic, and scholarly attention to environmental transformation has focused on this topic, in keeping with Gabrielle Hecht's characterization of the Anthropocene era as "the apotheosis of waste." The global policy environment ushered in by the Basel Convention, in force since 1992, has begun to shift radically. In a post-Basel world, the waste geography of the global south altered sharply in 2018, with China (followed swiftly by other southeast Asian nations) refusing to accept what had previously been categorized as recyclable plastic, and countries like Norway pushing for revisions to the convention to accommodate concerns about oceans filling up with plastic debris. This has caused reverberations both in wealthy OECD countries, which are struggling to meet their recycling and carbon accounting quotas, and among marginal and precarious informal recyclers the world over, who can no longer collect rubbish for a guaranteed return.
In line with rising public and policy concern about wastes, there has been a distinct rise in scholarly analyses of these and other developments associated with economies of recycling, focusing especially on people’s material and moral encounters with reuse. These range from nuanced investigations into how lives and materials can be re-crafted by recovering value from discards, to studies that follow an object through its many social lives, to work that focuses on a material, such as plastic or e-waste, and tracks how waste is co-produced at each stage of creation and (re)use. Examining infrastructures is a useful method for exploring how global economies intersect with systems of waste management—not only to determine what becomes of waste, but also to discover how it is imagined as pollutant or resource, apotheosis of the Anthropocene or deliverance from it.
Jessica C. Thompson
Faunal analysis (or zooarchaeology) in African archaeology is the identification, analysis, and interpretation of animal remains recovered from archaeological sites in Africa. Faunal analysis is a core approach in investigations of the African past. Its methods and theoretical underpinnings derive from archaeology, paleontology, and geochemistry, and they extend across all faunal categories. Many of the major issues in African faunal analysis concern large-bodied mammalian taxa, but the approach encompasses the analysis of fish, shellfish, birds, reptiles, and indeed all animal remains found in association with archaeological sites.
The diversity of research encompassed within faunal analysis is further expanded in Africa, where the earliest reported archaeological site (dating to 3.3 million years ago [Ma]) is far older than the earliest widely accepted archaeological site outside of Africa (at 1.8 Ma). The extra time depth affords the African archaeological record an especially wide arena of research questions that are answerable using faunal data. These range from investigations of the very origins of human diet, to analysis of the historical use of animals in trade, exchange, and social status.
At the earliest end of the time spectrum, researchers seek to understand the origins of human ancestral interactions with other animals in their ecosystem. Humans and some human ancestors are the only primates to consume animals of the same body size as, or larger than, themselves, and this change in diet facilitated a number of other key changes in human biological evolution, such as increased brain and body size around 1.8 Ma. Dietary change may also have been instrumental in driving technological change, as hunting became more important in our lineage. Our ancestors moved into a more carnivorous niche and came into greater competition with other predators, fundamentally shifting the way they interacted with other organisms in their ancestral environments.
Faunal analysis in African archaeology has been especially important in the development of taphonomic method and theory. Taphonomy is the study of what happens to an organism’s remains after death and includes processes that can severely impact what parts survive and ultimately become part of the fossil record. Common taphonomic processes include human butchery, carnivore consumption and scattering of the remains, burial and decomposition, and post-depositional movement or alteration through the actions of wind, water, and micro-organisms. In the first part of the 20th century, faunal analysis mainly focused on the identification of species that are found in archaeological assemblages. Taphonomic research, starting mainly in the 1960s, sparked an ongoing tradition of studying site formation processes through faunal analysis, with a particular focus on sites in the Rift Valley and in the southern African Cradle of Humankind, dating between 1.8 Ma and 500 thousand years ago (ka). These methods and insights have since transferred to other contexts outside of Africa, where they have become an essential part of the zooarchaeological toolkit.
Africa is also home to the earliest sites produced by members of our own species, Homo sapiens. Faunal analysis has been deployed extensively to understand two key aspects of sites dating between ~500 and 50 ka: what environments were like at the time of early modern human evolution, and how our species first achieved the ecological dominance it has today. Modern hunter-gatherers deploy a number of complex technologies and social behaviors in their daily foraging and hunting tasks, and faunal analysis is useful for understanding when these behaviors first emerged. Similarly, it is useful for understanding how later hunters and gatherers dealt with the changing abundance of resources that came with major environmental shifts such as the Last Glacial Maximum ~18 ka, or the end of the Ice Ages ~10.5 ka.
The African continent experienced a major change in human subsistence and land use patterns over the last 10,000 years, with the rise and expansion of food production. However, unlike in most other parts of the world, African food production began with pastoralism. Faunal analysis has played a pivotal role in debates about its origins and spread, mainly based on the morphology of animal bones. Food production, including use of domesticated livestock, spread into the southern tip of South Africa by ~1,300 years ago, accompanying a massive reconfiguration of human populations known as the Bantu expansion. New advances in ancient DNA and collagen fingerprinting are beginning to make a strong contribution to the archaeology of later African time periods, where research questions range from the rise and spread of exchange networks to the ethnicity and diet of different groups of people during historical time periods.
Fire is one of the oldest technologies of humankind; indeed, the earliest signs of fire appeared almost two million years ago. Traces of early fire use include charcoal, baked sediments, and burnt bone, but the archaeological evidence is ambiguous due to exposure to the elements for hundreds of thousands of years. The origin of fire use is, therefore, debated. The first fire users might have been occasional or opportunistic users, harvesting flames and heat-affected food from wildfires. The art of maintaining the fire developed, and eventually, humans learned to make fire at will. Fire technology (pyrotechnology) then became a habitual part of life.
Fire provided warmth and light, which allowed people to continue activities after dark and facilitated movement into colder climates. Cooking food over or in the fire improved digestibility; over time, humans developed a culinary technology based on fire that included the use of cooking pits or earth ovens and preservation techniques such as smoking. Fire could even help in the procurement of food—for example, by clearing vegetation for easier hunting, increasing the fertility of the land, promoting the growth of certain plants, or trapping animals. Many materials could be transformed through fire, such as the color of ochre for use in pigments or the knapping properties of rocks for the production of stone tools. Pyrotechnology ultimately became integral to other technologies, such as the production of pottery and iron tools.
Fire use also has a social component. Initially, fires for cooking and light provided a natural meeting point for people to conduct different activities, thus facilitating communication and the formation of strong social relationships. The social organization of a campsite can sometimes be interpreted from the artifact types found around a fire or from how different fires were placed. For example, access to household fires was likely restricted to certain family members, whereas communal fires allowed access for all group members. There would have been conventions governing the activities allowed at a household fire or a communal fire and the placement of different fire types. Furthermore, the social uses of fire included ritual and ceremonial purposes, such as cleansing rituals or cremation. The fire use of a prehistoric group can, consequently, reveal information on aspects such as subsistence, social organization, and technology.
In archaeology, heat treatment is the intentional transformation of stone (normally sedimentary silica rocks) using fire to produce materials with improved fracture properties. It has been documented on all continents, from the African Middle Stone Age until sub-recent times. It was an important part of the Mediterranean Neolithic, and it sporadically appeared in the Palaeolithic and Mesolithic of Asia and Europe. It may have been part of the knowledge of the people first colonizing North and South America, and it played an important role in tool making in Australian prehistory. In all these contexts, heat treatment was normally used to improve the quality of stone raw materials for tool knapping—its association with pressure flaking has been highlighted—but a few examples also document the quest to make tools with improved qualities (sharper cutting edges) and the intentional segmentation of large blocks of raw material to produce smaller, more usable modules (fire-fracturing). Two categories of silica rocks were most often heat-treated throughout prehistory: relatively fine-grained marine chert or flint, and more coarse-grained continental silcrete. The finding of stone heat treatment in archaeological contexts opens up several research questions on its role in tool making, its cognitive and social implications, and the investment it required. There are important avenues for research—for example: Why did people heat-treat stone? What happens to stones when heated? How can heating be recognized? By what technical means were stones heated? What cost did heat treatment represent for its instigators? Answering these questions will shed light on archaeologically relevant processes like innovation, re-invention, convergence, or the advent of complexity. The methods needed to produce the answers, however, often stem from other fields like physics, chemistry, mineralogy, or materials science.
Marlize Lombard and Katharine Kyriacou
The term hunter-gatherer refers to a range of human subsistence patterns and socioeconomies since the Middle Pleistocene, some of which are still practiced in rare pockets across the globe. Hunter-gatherer research is centered on ethnohistorical records of the lifeways, economies, and interpersonal relationships of groups who gather field/wild foods and hunt for meat. Information collected in this way is cautiously applied to the Stone Age/Palaeolithic archaeological records to inform on, or build hypotheses about, past human behaviors. Late Pleistocene (that is, the Tarantian stage of the Pleistocene after about 126,000 years ago) hunter-gatherers possessed the behavioral, technological, and cognitive wherewithal to populate the globe. Hunter-gatherer groups are often relatively egalitarian regarding power and gender relationships. But, as is the case for all mammals, only females become pregnant and bear offspring. This biological reality has socioeconomic and behavioral implications when it comes to food supply. While we share the principles of the mammalian reproductive process, humans have evolved to occupy a unique cognitive-behavioral niche in which we outsmart the competition in the quest for survival on any given landscape.
From early on in our history, the women of our species have given birth to relatively big-brained offspring with considerable cognitive potential, measured against that of other animals. Key to this development is the consumption of specific foods that contain brain-selective nutrients such as omega-6 and omega-3 polyunsaturated fatty acids and trace elements, including iron, iodine, copper, selenium, and zinc. Such nutrients are as important for us as they are for modern hunter-gatherers and were for prehistoric ones. Ethnohistorical and nutritional evidence shows that edible plants and small animals, most often gathered by women, represent an abundant and accessible source of “brain foods.” This is in contrast to the “Man the Hunter” hypothesis, wherein big-game hunting and meat-eating are seen as prime movers in the development of the biological and behavioral traits that distinguish humans from other primates.
Derek Newberry and Eric Gruebel
Since at least the 1930s, anthropologists have been conducting research on the dynamics and features of leadership and complex organizations. Though the anthropological study of organizations has changed dramatically since the first project of W. Lloyd Warner (an anthropologist) and Elton Mayo (a psychiatrist) at Western Electric Company’s Hawthorne Plant, two of anthropology’s defining features—the ethnographic method and the culture concept—have remained steadfast characteristics of the field for nearly a century.
While the particular methodologies of ethnographic research can be as varied as the studies they undergird, anthropological work on leadership and organizational development is generally performed from the inside, involving medium- to long-term research centered on participant observation. What separates anthropologists of organizations—and particularly corporations—from those in other subspecialties is that a significant amount of their ethnographic research is funded not just by academic institutions but also by private organizations that employ anthropologists on a permanent or contract basis. Though some within the field welcome the diverse research questions and perspectives that corporate-sponsored projects bring, others raise ethical and methodological objections to this work.
As is the case in anthropology generally, no one definition of culture serves as the universal touchstone for the anthropological study of organizations. Still, anthropologists working within the field commonly reject any notion of culture as static, uniform, or fully bounded within an organization. Unlike in traditional management scholarship, there are few explanatory frameworks for effective leadership or organizational functioning in the anthropological literature. This is a byproduct of the larger trend toward reflexivity over the last two decades, in which anthropologists have increasingly problematized the concept of culture itself as well as attempts to develop broad theoretical frameworks.
For anthropologists of organizations, this shift has created a division between more academically oriented scholars who produce highly particularistic ethnographies that resist generalization and applied anthropologists who have created more practical guides on methodological approaches to studying organizations. In this vacuum, anthropologically informed frameworks for understanding leadership and culture in organizations have been developed by academics and practitioners in the related fields of design-thinking and industrial-organizational psychology. It remains to be seen whether, moving forward, the field will continue down this bifurcated path or instead reconnect with its roots in broad cultural theory, leading to more efforts to develop new frameworks for understanding leadership and organizational change.
Keir James Cecil Martin
Corporations are among the most important of the institutions that shape lives across the globe. They often have a “taken for granted” character, both in everyday discourse and in economic or management theory, where they are often described as an inevitable outcome of the natural working of markets. Anthropological analysis suggests that neither the markets that are seen as their foundation nor corporations as social entities can be understood in this manner. Instead, their existence has to be seen as contingent on particular social relations and as being the outcome of long processes of historic conflict. The extent to which, at the start of the 21st century, corporations satisfactorily fulfill their supposed purpose of managing debt obligations in order to stimulate economic growth is particularly open to question. This was traditionally the justification for the establishment of corporations as separate legal actors in economic markets. Some 150 years on, other sociocultural relations and perspectives shape their boundaries and activities in a manner that means that their purpose and character can no longer be assumed on the basis of such axiomatic premises. Instead, their actions can be explained only on the basis of historic and ethnographic analysis of the contests over the limits of relational obligation that shape their boundaries.
Anthropologists have been studying the relationship between mining and the local forms of community that it has created or impacted since at least the 1930s. While the focus of these enquiries has moved with the times, reflecting different political, theoretical, and methodological priorities, much of this work has concentrated on local manifestations of the so-called resource curse or the paradox of plenty. Anthropologists are not the only social scientists who have tried to understand the social, cultural, political, and economic processes that accompany mining and other forms of resource extraction, including oil and gas operations. Geographers, economists, and political scientists are among the researchers from many different disciplines involved in this field. Nor have anthropologists maintained an exclusive claim over the use of ethnographic methods to study the effects of large- or small-scale resource extraction. But anthropologists have generally had much more to say about mining and the extractive industries in general when they have involved people of non-European descent, especially exploited subalterns—peasants, workers, and indigenous peoples.
The relationship between mining and indigenous people has always been complex. At the most basic level, this stems from the conflicting relationships that miners and indigenous people have to the land and resources that are the focus of extractive activities, or what Marx would call the different relations to the means of production. Where miners see ore bodies and development opportunities that render landscapes productive, civilized, and familiar, local indigenous communities see places of ancestral connection and subsistence provision. This simple binary is frequently reinforced—and somewhat overdrawn—in the popular characterization of the relationship between indigenous people and mining companies, where untrammelled capital devastates hapless tribal people, or what has been aptly described as the “Avatar narrative,” after the 2009 film of the same name.
By the early 21st century, a number of anthropologists were producing ethnographic works that sought to debunk these popular narratives, which obscure the more complex sets of relationships that exist between the cast of different actors who are present in contemporary mining encounters, and the range of contradictory interests and identities that these actors may hold at any one point in time. Resource extraction has a way of surfacing the politics of indigeneity, and anthropologists have paid particular attention to a range of identities, entities, and relationships that emerge in response to new economic opportunities, or what can be called the social relations of compensation. That some indigenous communities deliberately court resource developers as a pathway to economic development does not, of course, deny the asymmetries of power inherent to these settings: even when indigenous communities voluntarily agree to resource extraction, they are seldom signing up to absorb the full range of social and ecological costs that extractive companies so frequently externalize. These imposed costs are rarely balanced by the opportunities to share in the wealth created by mineral development; and for most indigenous people, their experience of large-scale resource extraction has been frustrating and often highly destructive. It is for good reason that analogies are regularly drawn between these deals and the vast store of mythology concerning the person who sells their soul to the devil for wealth that is not only fleeting, but also the harbinger of despair, destruction, and death. This is no easy terrain for ethnographers, and engagement is fraught with difficult ethical, methodological, and ontological challenges.
Anthropologists are involved in these encounters in a variety of ways—as engaged or activist anthropologists, applied researchers and consultants, and independent ethnographers. The focus of these engagements includes environmental transformation and social disintegration, questions surrounding sustainable development (or the uneven distribution of the costs and benefits of mining), the making of company-community agreements, corporate forms and the social responsibilities of corporations (CSR), labour and livelihoods, conflict and resistance movements, gendered impacts, cultural heritage management, questions of indigeneity, and the effects of displacement, to name but a few. These different forms of engagement raise important questions concerning positionality and how it influences the production of knowledge—an issue that has divided anthropologists working in this contested field. Anthropologists must also grapple with questions concerning good ethnography, or what constitutes a “good enough” account of the relations between indigenous people and the multiple actors assembled in resource extraction contexts.
Susan Brownell and Niko Besnier
Sport offers a unique path to mobility for men—and, to a much lesser degree, women—who are members of disadvantaged groups and whose options for seeking a better life are otherwise limited. This mobility may be either social class mobility—as in basketball as a way out of racially segregated ghettos in the United States—or geographic mobility—as in the migration of soccer and rugby players from the Global South to the Global North in order to play in professional leagues there. Sport mobility potentially differs from the mobility based on manual and menial labor that is the more common path for such groups because successful professional athletes are regarded as heroes both by urban elites in their transplanted homes and by their compatriots back in their home neighborhoods, villages, and countries. At the same time, the hope that migration will lead to a successful career is often thwarted by the same structural conditions that thwart ordinary migrants’ mobility.
Different sports are associated with different social values that reflect the race, gender, social class, national, and global structures of power that underpin them. Until the past few decades, sports acquired their social value through a process of distinction in which gender, class, racial, and other differences were exaggerated by strategies of inclusion and exclusion. These differences were most closely guarded in sports organized by exclusive clubs, but they were also defended by other types of organizations such as schools and professional leagues. In the West, where most global sports originated, this produced a system of contrasting relationships between sport meanings: for example, golf, tennis, figure skating, and equestrian sports signified elite social status, while soccer, boxing, and—at least at the elite levels—basketball, baseball, and American football were identified with athletes from poorer socioeconomic backgrounds. In this way, sports produced an embodied social value in the form of the bodies of individual athletes, and until the last decades of the 20th century, this value was largely traded in the realm of symbolic capital and not economic capital—with the exception of a comparatively small number of athletes in professionalized sports. Furthermore, the embodied values of sports varied greatly between localities, nations, and world regions, shaped by the class structure, history, and culture of the body in a given locale.
However, at the end of the 20th century, the embodied values of many—if not most—individual sports became increasingly unmoored from their local, regional, ethnic, or national values and more tightly embroiled in global sport systems that have become increasingly commodified. Team sports, such as soccer, baseball, basketball, rugby, cricket, and ice hockey, and individual sports, such as tennis, golf, track and field, gymnastics, figure skating, and boxing, saw a large increase in the transnational mobility of athletes and coaches. These developments in the sports world reflected broader changes in the global political economy: revenues from television broadcasting rights fees skyrocketed as television networks were privatized and proliferated; corporate sponsorship and advertising expanded along with the new television platforms; increasingly multinational sources of capital (such as corporations and billionaire team owners) were infused into sports; and elite athletes’ salaries, sponsorships, and transfer fees increased vertiginously in the most popular sports and seeped downward in the system. Clubs and teams began searching for talent further and further afield, bringing over players from the developing world. In US college sports (an anomaly on the world scene), the training of children toward the goal of gaining athletic scholarships became a growing industry that has even extended into China. In the Global South, at the same time, neoliberal development policies resulted in the reorganization or, in some cases, destruction of local agriculture and other forms of local production, as well as the social and economic relations that had been attached to them. Young men, who were particularly affected, now had to migrate to find employment and thus achieve the ideal of productive adult masculinity. These two factors produced a remarkable increase in the number of athletes from developing countries seeking employment as professionals in the industrialized world.
For ever greater numbers of athletes, then, the embodied value of the body was no longer limited to symbolic or social capital but was increasingly a matter of economic capital.
The commodification of the sporting body and the transnationalization of the structures that determine its value provide novel and instructive insight into the changing nature of the global political economy since the end of the 20th century.
Augustin F. C. Holl
The “Three Age System” devised in the middle of the 19th century framed the general pattern of universal technological evolution. It all started with the use of stone tools in the very long “Stone Age.” The much shorter “Bronze Age” followed, capped by the even shorter “Iron Age.” This evolutionary taxonomy was crafted in Scandinavia, based on evidence from Denmark and, by extension, Europe. Patterns of long-term technological evolution recorded in Africa are at variance with this Stone–Bronze–Iron Age sequence: there is no Bronze Age.
The advent of copper and iron metallurgy is one of the most fascinating debates taking place in African archaeology at the beginning of the 21st century. The debate on the origins of African metallurgies has a long history with multiple implications. It is anchored in 19th-century evolutionism and touches on the patterns and pace of technological evolution worldwide. It has also shaped discourses on human progress. As such, it has strong sociopolitical implications. It was used to support the assumption of “African backwardness,” the assumption that all important material and institutional inventions and innovations took place elsewhere—in the Near East, precisely—and spread from there to Africa through demic or stimulus diffusion.
Does such a scheme capture global human technological history or is it a specific case of local areal development? That is the core of the current debate on the origins of African metallurgy.
A speculative phase, without any input of field data, took place in the 1950s–1960s. It was represented by the interesting exchanges between R. Mauny and H. Lhote. The former was a proponent of metallurgy diffusion and the latter argued for local inventions. For Mauny, metallurgy is such a complex process, requiring sophisticated mastery of elaborate pyrotechnology, that its independent invention anywhere else is totally ruled out. For Lhote, the diversity of African metallurgical practices and traditions is an indication of its local roots. Despite this debate, the dominant view asserted that iron metallurgy was invented in the Anatolian Hittite Empire in the middle of the 2nd millennium BCE (1600–1500 BCE).
Sustained archaeological research was carried out in different parts of the continent from the early 1980s on. Evidence of copper and iron metallurgies was documented in different parts of the continent, in West, Central, and East Africa. Early copper metallurgies were recorded in the Akjoujt region of Mauritania and the Eghazzer basin in Niger. Surprisingly early iron smelting installations were found in the Eghazzer basin (Niger), the middle Senegal valley (Senegal), the Mouhoun Bend (Burkina Faso), the Nsukka region and Taruga (Nigeria), the Great Lakes region in East Africa, and the Djohong (Cameroon) and Ndio (Central African Republic) areas. It is, however, the discoveries from the northern margins of the equatorial rainforest in North-Central Africa, in the northeastern part of the Adamawa Plateau, that radically falsify the “iron technology diffusion” hypothesis. Iron production activities are documented to have taken place as early as 3000–2500 BCE.
Philip Carl Salzman
Pastoralists depend for their livelihood on raising livestock on natural pasture. Livestock may be selected for meat, milk, wool, traction, carriage, or riding, or a combination of these. Pastoralists rarely rely solely on their livestock; they may also engage in hunting, fishing, cultivation, commerce, predatory raiding, or extortion. Some pastoral peoples are nomadic and others are sedentary, while yet others are partially mobile. Economically, some pastoralists are subsistence oriented, while others are market oriented, with others combining the two. Politically, some pastoralists are independent or quasi-independent tribes, while others, largely under the control of states, are peasants, and yet others are citizens engaged in commercial production in modern states.
All pastoralists have to address a common set of issues. The first issue is acquiring and maintaining possession of livestock, including good breeding stock. Ownership of livestock may involve individual, group, or distributed rights. The second concern is managing the livestock through husbandry and herding. Husbandry refers to the selection of animals for breeding and maintenance, while herding involves ensuring that the livestock gains access to adequate pasture and water. Pasture access can be gained through territorial ownership and control, purchase, rent, or patronage. Security must be provided for the livestock through active human oversight or restriction by means of fences or other barriers. Manpower is provided by kin relations, exchange of labor, barter, monetary payment, or some combination.
Prominent pastoral peoples are sheep, goat, and camel herders in the arid band running from North Africa through the Middle East and northwest India; the cattle and small stock herders of Africa south of the Sahara; the reindeer herders of sub-Arctic northern Eurasia; the camelid herders of the Andes; and the ranchers of North and South America.
Remittances are monetary or social transfers made by migrants to their countries of origin, usually, but not exclusively, to members of their families. They represent a significant capital flow at the international level (hundreds of billions of dollars), far exceeding official development assistance. Remittances, as an instrument for combating poverty and fueling economic growth, have attracted increasing interest in development studies and the social sciences in general. The question of the relationship between migration and development has gained significant visibility in recent decades and, at political and academic levels, has provoked passionate debates in which anthropologists have participated actively. Over time, the mood has fluctuated from developmentalist optimism in the 1950s and 1960s, to pessimism in the 1970s and 1980s, and back to more optimistic views in the 1990s and 2000s. The post-9/11 period has seen a progressive shift again and is dominated by a political rhetoric of securitization.
In spite of this cyclical history, the terms of the debate are well known and rather constant. On the one hand, the money sent by migrants to their families may be seen as an effective survival strategy, a diversification of revenue sources that increases purchasing power; it may lead to small business creation, the promotion of education, and the transfer of knowledge by return migrants bringing with them skills learned abroad. Ultimately, the possibility of remitting money back home contributes—in more sociological terms—to the establishment of transnational networks and therefore to the cohesion of kinship or residence groups despite dispersion. On the other hand, the fact that so many people—especially youth—are trying to migrate is related to a culture of dependence, while the private dimension of most transfers does not bring real collective benefits. Far from promoting social cohesion, remittances may, on the contrary, increase inequalities, as the poorest households cannot afford to send one of their members abroad. In some cases, the money that is transferred may be used to finance armed groups.
These debates on the role of money and know-how sent by migrants are primarily situated within the vast literature on migration and development. Interestingly, most anthropological dictionaries and encyclopedias do not have an entry on remittances. The issue of remittances has yet to acquire a fully fledged theoretical dimension within the discipline in order to contribute to conceptual discussions on global mobility. Migrants weave multiple links throughout their lives and are often full participants in several societies at the same time. To grasp the complexity of the phenomena at stake, it might be necessary to decompartmentalize the existing categories of mobile people (asylum seekers, refugees, migrant workers, skilled professionals, international students, even tourists), recognize the non-linearity of most spatial and social trajectories, and integrate empirical studies into a more encompassing theoretical discussion.
Indigenous peoples worldwide are affected by, and engage with, tourism in several major ways. On the one hand, the tourism industry in its constant expansion appropriates indigenous peoples’ land and resources, creating tensions and escalating inequalities. In some cases, indigenous peoples may have a role to play (with varying levels of agency and power) in welcoming people into their homes and on their land, for the purposes of ecotourism (in which pristine environments, usually with rare or endemic species of plants, birds, or other living organisms, are attractive to tourists), or because the people themselves and their way of life are of interest to tourists. What is more, the graves and monuments of the ancestors of indigenous people, local festivals, and ceremonies may be recognized as “marketable” from a tourism perspective and promoted to encourage tourist visits, which may or may not be considered disruptive or disrespectful from an indigenous perspective. So-called indigenous tourism development refers to tourism in which indigenous people and communities are directly involved (in varying degrees) in the industry, whether as owners and tour operators or as porters and servants. Many scholars from anthropology, sociology, human geography, and other related disciplines have sought to address some of the issues and concerns regarding the relationship between tourism and indigenous peoples, drawing on examples from around the globe in order to illustrate the multitude of ways in which this relationship operates. Ways that indigenous peoples’ relationship to tourism may be explored include contexts such as tourism to visit ancient monuments and UNESCO-listed world heritage sites, tourism in search of cultural difference, cruise travel and luxury resorts, and ecotourism.