This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Anthropology. Please check back later for the full article.
Anthropology is often understood to be primarily an academic undertaking. A typical first exposure to the discipline occurs through undergraduate coursework, where the anthropologists whom students know tend to be their professors. Without anthropologists in business, government, and nonprofit (BGN) fields to serve as role models, students may come to believe that if they choose not to pursue graduate study and academic employment, their interest in anthropology must give way to something more career-related.
At the same time, there exists a vibrant community of “practicing,” “professional,” “public,” and “applied” anthropologists employed in a variety of non-academic settings. Anthropological skills and perspectives are of use to many BGN employers, and in a few industries, the value of anthropology is generally accepted: historic preservation, public health, and user experience research are prominent examples. The relationship between academia and professional practice is sometimes difficult, however, as some practitioners feel stigmatized or excluded by academics, while others inhabit professional spaces where academic anthropology is largely irrelevant.
While anthropologists often speak of a “divide” or “split” between academic and practicing anthropology, this view overlooks the fact that much work in applied anthropology maintains a presence in both higher education and BGN institutions. Not only do projects often involve collaboration among team members with diverse careers, but individual anthropologists may simultaneously maintain both academic and non-academic affiliations or move between professional spheres over the course of their career. While the social pressures to attend graduate school and seek traditional faculty jobs are real, anthropologists have responded to them in a variety of ways, and observers must account for all contexts of practice in order to reach a full understanding of the profession.
Applied anthropology has become an alternative to the more academic anthropological tradition. One important area of engagement is technology and innovation. This increasing involvement has been tied to, and encouraged by, the growth of applied anthropology. Applied anthropology is just “anthropology put to use,” as John Van Willigen noted, to solve practical real-world problems by applying anthropological theory and methods. The field of applied anthropology can be categorized into two overlapping groups of practitioners: those who apply anthropology while based in academia and those who practice anthropology outside of academia. At first, applied anthropology was dominated by those within academia who applied the theory and methods of anthropology to understand real-world problems for themselves or for a client. However, by the late 1990s, the problem-solving value of applied anthropology was becoming recognized in government, as well as in the private and not-for-profit sectors. With recognition, employment opportunities outside of academia expanded exponentially. More and more anthropologists began working for manufacturers and technology companies as marketing professionals, user experience researchers, and insight managers, among other job titles. Most of the first anthropologists to work in product and technology companies were accidental innovators. It was not the intention of these early applied anthropologists, such as Suchman, Squires, or Brun-Cotton, to become innovators. Rather, they were primarily interested in applying anthropological theory and methods to solve serious problems faced by the companies for which they worked. It was only in the process of finding answers that they stumbled across new ways to frame issues and uncovered insights leading to novel solutions—innovation.
Over time, labels such as business anthropology, design anthropology, and digital anthropology were used to distinguish those applied anthropologists working in product and technology industries. Fundamentally, however, they were anthropologists putting anthropology to use.
By 2005, applied anthropology within industry had come of age with a definitive boom of published literature, written primarily for or about the private sector. Resisting approaches that emphasize quantitative data, these publications maintain the value of qualitative and mixed-methods approaches from the perspective of anthropology. Ironically, despite the growth of applied anthropologists working in the product and technology sector, most of those who are currently publishing study innovation rather than participate in innovative activities. There may be a couple of reasons for this. First, those who work in the private sector do not have the time to write, or they have signed non-disclosure agreements that do not allow them to publish. Alternatively, there is a trend in which senior applied anthropologists who formerly worked in the private sector are returning to academia, where they have time to write. Whether in the private sector or, now, in academia, the innovations that have resulted from the work of these anthropologists cannot be overstated.
The Early Middle Stone Age (EMSA) is encompassed, in broad terms, by the time period between 300,000 and 130,000 years ago. This is a crucial phase in the history of Homo sapiens, as genetic and fossil evidence increasingly indicate that some of the roots of humanity may be traced to this time. The development of modern human anatomy was an extended process that involved a gradual enlargement of the brain and a change in the shape of the brain case toward its current globular form. By 300,000 years ago, brains had already reached their relatively large size, but changes in the shape of the brain case evolved more gradually. The fossil evidence from South Africa from this time period is sparse, but the 260,000-year-old Homo helmei partial skull from Florisbad is especially significant in understanding modern human origins. Although the development of an extensive and detailed chronological and regional framework is still in progress, it seems that most of the earlier phases of the Middle Stone Age played out against the backdrop of the South African interior. From as early as 400,000 years ago, this area contained many water-rich locales supporting highly productive ecosystems of open grassland and wetland that sustained the Florisian fauna. The earliest Middle Stone Age sites occur in the interior and include sites such as Haaskraal, Florisbad, Wonderwerk Cave, Cave of Hearths, Bushman Rock Shelter, and Border Cave. Lithic assemblages from a number of these sites have been described as being part of the early Pietersburg technocomplex that is characterized by a preference for fine-grained raw material such as hornfels, to produce long blades and elongated unifacial and bifacial points. In these and other early Middle Stone Age assemblages, prepared core technology was already firmly established.
This technology entailed careful and extensive planning to prepare stone nodules in the appropriate way to knap pre-formed blanks such as blades, points, and flakes to specific parameters. Such pieces were hafted onto handles to hunt and process large bovids and other fauna. The extensive cognitive operations involved in producing EMSA lithic artifacts and hafted projectile weapons are also evident in pigment processing, and reflect an evolutionary amplification of procedural and working-memory capabilities.
Chapurukha M. Kusimba
How, and in what ways, did socially complex societies emerge in Eastern and Southern Africa? Regional scholarship has shown that elite investment in long-distance trade, investment in extractive technologies, monopolization of wealth-creating resources, and warfare may have played key roles in the emergence of early states. The debate on the evolution of social complexity has focused on trade versus militarism as key sources of political power for African elites. To what extent were elite and non-elite engagement in local, regional, and trans-continental economic networks crucial to the development of social complexity in Eastern and Southern Africa? Extensive research on the Eastern Coast of Africa (Kenya and Tanzania) and Southern Africa (Zimbabwe, Botswana, and South Africa) has yielded adequate data to enable a discussion on the trajectories of the evolution of social complexity and the state. So far, three crucial factors—trade, investment in extractive technologies, and elite monopolization of wealth-creating resources—stand out as having coalesced to propel the region towards greater interaction and complexity. Major transformations in the form of increases in household size, clear differences in wealth and status, and settlement hierarchies occurred toward the end of the first millennium. Regional scholarship posits that elite control of internal and external trade infrastructure, restricted access to arable land, accumulation of surplus land, manipulation of religious ideology, and exploitation of ecological crises were among the major factors that contributed to the rise of the state. Could these factors also have favored investment in, and the use of, organized violence to gain and monopolize access to fertile grazing lands, water, and mineral resources, and to provide security along the trade routes, including the Zambezi, Savi, Limpopo, Rufiji, Tana, and Webe Shebelle?
Scholarship in the 21st century favors the notion that opportunistic use of ideological and ritual power enabled a small elite, initially composed of elders and ritual and technical specialists, to control the regional political economy and information flows. The timing of these transformations was continent-wide, dating to the last three centuries of the first millennium. By all measures, the evidence points to wealth accumulation through trade, tribute, investment in agrarianism and pastoralism, and mining.
The Stone Age record in eastern Africa appears to be longer and better documented than any other region worldwide. Rich archaeological and fossil evidence derives particularly from sites within the Rift Valley of the region, often with secure radiometric age estimates. Despite a relatively late start and disproportionate focus on earlier time periods and open-air sites within the rift, scientific research into the region’s Stone Age record continues to play a central role in our understanding of human evolution.
Putative stone tools and modified bones from two Late Pliocene (3.6–2.58 million years ago, or Ma) contexts are exclusive to eastern Africa, as is conclusive evidence for these starting around the Plio-Pleistocene boundary (2.6–2.5 Ma). The earliest indisputable technological traces appear in the form of simple flakes and core tools, as well as surface-modified bones. It is not clear what triggered this invention, or whether hominins with this technology hunted or only scavenged carcasses. Neither is it certain whether late australopithecines made and used stone tools. Archaeological occurrences predating ~2 Ma are limited to sites in Ethiopia and Kenya, becoming more common afterwards across eastern Africa and beyond.
By 1.75 Ma, lithic technologies that included heavy-duty and large cutting tools appeared at two sites, in Ethiopia and Kenya. Details about these larger and more diverse stone tool forms are still inadequately understood, although their appearance in eastern Africa roughly coincides with the appearance of Homo erectus. These technologies represent by far the longest-lived Stone Age tradition that endured ~1.6 million years. Hominins with these technologies successfully inhabited high altitude (>2,300 m above sea level) environments starting ~1.5 Ma and expanded within and beyond the region starting even earlier.
Small-sized and highly diverse tool forms gradually and variably started to replace heavy-duty and large cutting tools beginning ~300 thousand years ago (ka). Conventional wisdom associates this extremely variable shift in toolkit with the evolution of Homo sapiens, although the oldest undisputed representatives of our species continued to make and use large cutting tools in eastern Africa well after 200 ka. In addition to the dominance of small retouched tools, such as pointed pieces, scrapers, and blades, significant innovations such as hafting and ranged weaponry emerged during the length of this technological tradition. Increasingly complex socio-cultural behaviors, including mortuary practices, mark the later part of this period in eastern Africa. The consolidation of such technological and socio-cultural skills, as well as environmental and demographic dynamics may have enabled the hypothesized, ultimately decisive out-of-Africa dispersal of our species from eastern Africa, ~50–80 ka.
Even smaller and more diverse stone tool forms and other socio-cultural innovations evolved in many areas of eastern Africa by ~50 ka. Miniaturization and diversification allowed the adoption of different complex technologies, including tools intentionally partially dulled and other microlithic tool forms used as parts of sophisticated composite implements, such as the bow and arrow. Complex behaviors involving personal ornamentation, symbolism, and rituals that resembled the lifeways of ethnographically known hunter-gatherer populations were similarly adopted, although relatively later than in northern and southern Africa. These led eventually to new technological and economic developments marked by the inception of agriculture and attendant lifeways.
Catherine Alexander and Josh Reno
The landscape of global economies of recycling has rapidly changed over the early 21st century. Increasingly, policy and economic and scholarly attention on environmental transformation have focused on this topic, in keeping with Gabrielle Hecht's characterization of the Anthropocene era as "the apotheosis of waste." The global policy environment ushered in by the Basel Convention, which entered into force in 1992, has begun to shift radically. In a post-Basel world, the geography of the global south altered sharply in 2018, with China (followed swiftly by other Southeast Asian nations) now refusing to accept what had previously been categorized as recyclable plastic, and countries like Norway pushing for revisions to Basel to accommodate concerns about oceans filling up with plastic debris. This has led to reverberations from wealthy OECD countries, struggling to meet their recycling and carbon accounting quotas, and from marginal and precarious informal recyclers the world over, who can no longer collect rubbish for a guaranteed return.
In line with rising public and policy concern about wastes, there has been a distinct rise in scholarly analyses of these and other developments associated with economies of recycling, focusing especially on people’s material and moral encounters with reuse. These range from nuanced investigations into how lives and materials can be re-crafted by recovering value from discards, to following an object through its many social lives, to focusing on a material, such as plastic or e-waste, and tracking how waste is co-produced at each stage of creation and (re)use. Examining infrastructures is a useful method for exploring how global economies intersect with systems of waste management—not only to determine what becomes of waste, but also to discover how it is imagined as pollutant or resource, apotheosis of the Anthropocene or deliverance from it.
Jessica C. Thompson
Faunal analysis (or zooarchaeology) in African archaeology is the identification, analysis, and interpretation of animal remains recovered from archaeological sites in Africa. Faunal analysis is a core approach in investigations of the African past. Its methods and theoretical underpinnings derive from archaeology, paleontology, and geochemistry, and they extend across all faunal categories. Many of the major issues in African faunal analysis concern large-bodied mammalian taxa, but the approach encompasses analysis of fish, shellfish, birds, reptiles, and indeed, all animal remains found in association with archaeological sites.
The diversity of research encompassed within faunal analysis is further expanded in Africa, where the earliest reported archaeological site (dating to 3.3 million years ago [Ma]) is far older than the earliest widely accepted archaeological site outside of Africa (at 1.8 Ma). The extra time depth affords the African archaeological record an especially wide arena of research questions that are answerable using faunal data. These range from investigations of the very origins of human diet, to analysis of the historical use of animals in trade, exchange, and social status.
At the earliest end of the time spectrum, researchers seek to understand the origins of human ancestral interactions with other animals in their ecosystem. Humans and some human ancestors are the only primates to consume animals of the same body size as, or larger than, themselves, and this change in diet facilitated a number of other key changes in human biological evolution, such as increased brain and body size around 1.8 Ma. Dietary change may also have been instrumental in driving technological change, as hunting became more important in our lineage. Our ancestors moved into a more carnivorous niche and came into greater competition with other predators, fundamentally shifting the way they interacted with other organisms in their ancestral environments.
Faunal analysis in African archaeology has been especially important in the development of taphonomic method and theory. Taphonomy is the study of what happens to an organism’s remains after death and includes processes that can severely impact what parts survive and ultimately become part of the fossil record. Common taphonomic processes include human butchery, carnivore consumption and scattering of the remains, burial and decomposition, and post-depositional movement or alteration through the actions of wind, water, and micro-organisms. In the first part of the 20th century, faunal analysis mainly focused on the identification of species that are found in archaeological assemblages. Taphonomic research, starting mainly in the 1960s, sparked an ongoing tradition of studying site formation processes through faunal analysis, with a particular focus on sites in the Rift Valley and in the southern African Cradle of Humankind, dating between 1.8 Ma and 500 thousand years ago (ka). These methods and insights have since transferred to other contexts outside of Africa, where they have become an essential part of the zooarchaeological toolkit.
Africa is also home to the earliest sites produced by members of our own species, Homo sapiens. Faunal analysis has been deployed extensively as a way to understand two key aspects of sites dating between ~500 and 50 ka: what environments were like at the time of early modern human evolution, and how our species first achieved the ecological dominance it has today. Modern hunter-gatherers deploy a number of complex technologies and social behaviors in their daily foraging and hunting tasks, and faunal analysis is useful for understanding when these behaviors first emerged. Similarly, it is useful for understanding how later hunters and gatherers dealt with the changing abundance of resources that came with major environmental shifts such as the Last Glacial Maximum ~18 ka, or the end of the Ice Ages ~10.5 ka.
The African continent experienced a major change in human subsistence and land use patterns over the last 10,000 years, with the rise and expansion of food production. However, unlike in most other parts of the world, African food production began with pastoralism. Faunal analysis has played a pivotal role in debates about its origins and spread, mainly based on the morphology of animal bones. Food production, including use of domesticated livestock, spread into the southern tip of South Africa by ~1,300 years ago, accompanying a massive reconfiguration of human populations known as the Bantu expansion. New advances in ancient DNA and collagen fingerprinting are beginning to make a strong contribution to the archaeology of later African time periods, where research questions range from the rise and spread of exchange networks to the ethnicity and diet of different groups of people during historical time periods.
Fire is one of the oldest technologies of humankind; indeed, the earliest signs of fire appeared almost two million years ago. Traces of early fire use include charcoal, baked sediments, and burnt bone, but the archaeological evidence is ambiguous due to exposure to the elements for hundreds of thousands of years. The origin of fire use is, therefore, debated. The first fire users might have been occasional or opportunistic users, harvesting flames and heat-affected food from wildfires. The art of maintaining the fire developed, and eventually, humans learned to make fire at will. Fire technology (pyrotechnology) then became a habitual part of life.
Fire provided warmth and light, which allowed people to continue activities after dark and facilitated moving into colder climates. Cooking food over or in the fire improved digestibility; over time, humans developed a culinary technology based on fire that included the use of cooking pits or earth ovens and preservation techniques such as smoking the food. Fire could even help in the procurement of food—for example, in clearing vegetation for easier hunting, increasing the fertility of the land, promoting the growth of certain plants, or trapping animals. Many materials could be transformed through fire, such as the color of ochre for use in pigments or the knapping properties of rocks for production of stone tools. Pyrotechnology ultimately became integral to other technologies, such as the production of pottery and iron tools.
Fire use also has a social component. Initially, fires for cooking and light provided a natural meeting point for people to conduct different activities, thus facilitating communication and the formation of strong social relationships. The social organization of a campsite can sometimes be interpreted from the artifact types found around a fire or in how different fires were placed. For example, access to household fires was likely restricted to certain family members, whereas communal fires allowed access for all group members. There would have been conventions governing the activities that were allowed by a household fire or a communal fire and the placement of different fire types. Furthermore, the social uses of fire included ritual and ceremonial uses, such as cleansing rituals or cremation. The fire use of a prehistoric group can, consequently, reveal information on aspects such as subsistence, social organization, and technology.
Fiona McCormack and Jacinta Forde
The anthropology of fisheries is a core focus of maritime anthropology. Scholarship in this field is multifaceted, exploring fishing ways of life, fishing knowledge, marine tenure and economies, and the specificities of how this particular watery nature is manifested in social relations and cultural systems. Fishing can be defined as a productive activity that takes place in a multidimensional space, depending more on natural or wild processes than manufactured processes. The idea of fishing being closer to nature is an analytical thread, giving the anthropology of fisheries a particular edge in the multispecies and more-than-human ethnographic turn in contemporary anthropology. Research in the anthropology of fisheries has long held the connections between fisher and fish to be of central concern. Also significant is the thesis that the construction of fisheries as a natural domain to be managed, of which fishers are atomistic extractors, is a highly politicized process involving the bioeconomic creation of fish stock and broader political economies.
Peter W. Van Arsdale
Global human rights, writ large, impact the entire human condition. They span cultural, social, economic, ecological, political, and civic realms. They pertain to how people are treated, protected, and respected. They are interrelated, interdependent, and of importance to all people, yet in actuality—as they play out—do not apply equally to all people. They have not been formulated by representatives of all societies, have not been accepted by members of all nation-states, and have not—in any sense of an entirety or set—been formally approved by many important transnational rights-oriented organizations. However, as commonalities are considered in the way rights emerge and evolve, there are many. Certain principles are foundational. The processes are as essential as the products. The aspirations are as important as the achievements.
The subject of human rights can be addressed from many angles. Some authorities suggest that philosophy provides the overarching umbrella, dating from the era of John Locke (b. 1632, d. 1704). From one perspective of history, which features emergent religious interpretations, duties and obligations that are situated in various diverse cultural traditions are central. From another perspective of history, which features seminal events such as wars and genocides, the actions and reactions of various actors—from victims to warriors—become central. From the perspective of law, covenants and protocols designed to advise, protect, and aid prosecution emerge prominently. From the perspective of political science, the ways in which citizens engage the political process as rights and wrongs are debated are key. Other disciplines, from psychology to theology to journalism, also contribute significantly. By way of contrast, cultural or social anthropology takes an ethnographic perspective. The cultural context is specified, with case-specific narratives often featured. Documentation of encounters (one-to-one, group-to-group, institution-to-institution) is crucial. Past, present, and potential future issues are addressed. The actions of victims, survivors, and perpetrators, as well as service providers, advocates, and everyday citizens, stand out.
Field research, both theoretical and applied, is part and parcel of what anthropologists do. There is no single “theory of human rights.” However, there are a number of prominent paradigms, theories, and models that inform anthropological work in human rights. Of note are statist, cosmopolitan, and internationalist models, with the cosmopolitan of particular interest to anthropologists given its emphasis on individuals rather than states. Viewed differently, from the perspective of power and its abuses, the theory of structural violence is very useful. Case studies of perpetrators of abuse are usually more difficult to develop than those for victims, yet are particularly illustrative of power differentials. Ultimately, improvements in the ways in which abuses are dealt with, and the ways in which the human rights regime (i.e., the systematized body of discourse, norms, resources, and protocols) can change for the better for everyday citizens, are tied to processes of socialization, internalization, and obligation. Rights are not static, but rather, very dynamic.
In archaeology, heat treatment is the intentional transformation of stone (normally sedimentary silica rocks) using fire to produce materials with improved fracture properties. It has been documented on all continents, from the African Middle Stone Age until sub-recent times. It was an important part of the Mediterranean Neolithic, and it sporadically appeared in the Palaeolithic and Mesolithic of Asia and Europe. It may have been part of the knowledge of people first colonizing North and South America, and it played an important role for tool making in Australian Prehistory. In all these contexts, heat treatment was normally used to improve the quality of stone raw materials for tool knapping—its association with pressure flaking has been highlighted—but a few examples also document the quest for making tools with improved qualities (sharper cutting edges) and intentional segmentation of large blocks of raw material to produce smaller, more usable modules (fire-fracturing). Two categories of silica rocks were most often heat-treated throughout prehistory: relatively fine-grained marine chert or flint, and more coarse-grained continental silcrete. The finding of stone heat treatment in archaeological contexts opens up several research questions on its role for tool making, its cognitive and social implications, and the investment it required. There are important avenues for research—for example: Why did people heat-treat stone? What happens to stones when heated? How can heating be recognized? By what technical means were stones heated? What cost did heat treatment represent for its instigators? Answering these questions will shed light on archaeologically relevant processes like innovation, re-invention, convergence, or the advent of complexity. The methods needed to produce the answers, however, often stem from other fields like physics, chemistry, mineralogy, or material sciences.
Marlize Lombard and Katharine Kyriacou
The term hunter-gatherer refers to a range of human subsistence patterns and socioeconomies since the Middle Pleistocene, some of which are still practiced in rare pockets across the globe. Hunter-gatherer research is centered on ethnohistorical records of the lifeways, economies, and interpersonal relationships of groups who gather field/wild foods and hunt for meat. Information collected in this way is cautiously applied to the Stone Age/Palaeolithic archaeological records to inform on, or build hypotheses about, past human behaviors. Late Pleistocene (that is, the Tarantian stage of the Pleistocene after about 126,000 years ago) hunter-gatherers possessed the behavioral, technological, and cognitive wherewithal to populate the globe. Hunter-gatherer groups are often relatively egalitarian regarding power and gender relationships. But, as is the case for all mammals, only females become pregnant and bear offspring. This biological reality has socioeconomic and behavioral implications when it comes to food supply. Whereas we share the principles of the mammalian reproductive process, humans have evolved to occupy a unique cognitive-behavioral niche in which we outsmart competition in the quest for survival on any given landscape.
Since early on in our history, the women of our species gave birth to relatively big-brained offspring with considerable cognitive potential, measured against that of other animals. Key to this development is the consumption of specific foods that contain brain-selective nutrients such as omega-6 and omega-3 polyunsaturated fatty acids and trace elements, including iron, iodine, copper, selenium, and zinc. Such nutrients are as important for us as they are for modern and prehistoric hunter-gatherers. Ethnohistorical and nutritional evidence shows that edible plants and small animals, most often gathered by women, represent an abundant and accessible source of “brain foods.” This is in contrast to the “Man the Hunter” hypothesis wherein big-game hunting and meat-eating are seen as prime movers in the development of biological and behavioral traits that distinguish humans from other primates.
Derek Newberry and Eric Gruebel
Since at least the 1930s, anthropologists have been conducting research on the dynamics and features of leadership and complex organizations. Though the anthropological study of organizations has changed dramatically since the first project by W. Lloyd Warner (an anthropologist) and Elton Mayo (a psychiatrist) at Western Electric Company’s Hawthorne Plant, two of anthropology’s defining features—the ethnographic method and the culture concept—have remained steadfast characteristics of the field for nearly a century.
While the particular methodologies of ethnographic research can be as varied as the studies they undergird, anthropological work on leadership and organizational development is generally performed from the inside, involving medium- to long-term research centered on participant observation. What separates anthropologists of organizations—and particularly corporations—from those in other subspecialties is that a significant amount of their ethnographic research is funded not just by academic institutions but also by private organizations that employ anthropologists on a permanent or contract basis. Though some within the field welcome the diverse research questions and perspectives that corporate-sponsored projects bring, others raise ethical and methodological objections to this work.
As is the case throughout anthropology in general, no one definition of culture serves as the universal touchstone for the anthropological study of organizations. Still, anthropologists working within the field commonly reject any notion of culture as static, uniform, or fully bounded within an organization. Unlike in traditional management scholarship, there are few explanatory frameworks on effective leadership or organizational functioning in the anthropological literature. This is a byproduct of the larger trend toward reflexivity over the last two decades, in which anthropologists have increasingly problematized the concept of culture itself as well as attempts to develop broad theoretical frameworks.
For anthropologists of organizations, this shift has created a division between more academically oriented scholars who produce highly particularistic ethnographies that resist generalization and applied anthropologists who have created more practical guides on methodological approaches to studying organizations. In this vacuum, anthropologically informed frameworks for understanding leadership and culture in organizations have been developed by academics and practitioners in the related fields of design-thinking and industrial-organizational psychology. It remains to be seen whether, moving forward, the field will continue down this bifurcated path or instead reconnect with its roots in broad cultural theory, leading to more efforts to develop new frameworks for understanding leadership and organizational change.
Keir James Cecil Martin
Corporations are among the most important of the institutions that shape lives across the globe. They often have a “taken for granted” character, both in everyday discourse and in economic or management theory, where they are often described as an inevitable outcome of the natural working of markets. Anthropological analysis suggests that neither the markets that are seen as their foundation nor corporations as social entities can be understood in this manner. Instead, their existence has to be seen as contingent on particular social relations and as being the outcome of long processes of historic conflict. The extent to which, at the start of the 21st century, corporations satisfactorily fulfill their supposed purpose of managing debt obligations in order to stimulate economic growth is particularly open to question. This was traditionally the justification for the establishment of corporations as separate legal actors in economic markets. Some 150 years on, other sociocultural relations and perspectives shape their boundaries and activities in a manner that means that their purpose and character can no longer be assumed on the basis of such axiomatic premises. Instead, their actions can be explained only on the basis of historic and ethnographic analysis of the contests over the limits of relational obligation that shape their boundaries.
The Middle Stone Age (MSA) is a period of African prehistory characterized by the production of stone points and blades using prepared core reduction techniques. The MSA follows the Earlier Stone Age and precedes the Later Stone Age. The MSA is generally regarded as having started by at least 300 thousand years ago and lasted until roughly 40 to 20 thousand years ago. Identifying the chronological limits of the MSA is challenging because some aspects of Middle Stone Age technology are found in assemblages outside this time range that also contain Earlier or Later Stone Age-type tools.
The earlier part of the MSA is associated with Homo heidelbergensis (alternatively known as archaic Homo sapiens, or Homo rhodesiensis). The later part of the MSA, after 200 thousand years ago, is associated with Homo sapiens. Identifying the processes underlying the evolution of Homo sapiens during the MSA is a major objective of ongoing research, but very few fossil remains have been recovered so far.
Across the African continent and through time, the MSA exhibited a high degree of variability in the types of and ways that stone tools were manufactured and used. Archaeologists have used this variability to define several techno-complexes and industries within the MSA that include the Aterian, Howiesons Poort, Still Bay, and Lupemban. Variation in point styles, presumably hafted to wooden handles or projectiles in many cases, is a hallmark of the regional diversification that originates in the MSA. This kind of variability, which is temporally and spatially restricted, differs in degree from the preceding Earlier Stone Age.
The MSA is significant from an evolutionary perspective because it is associated with the anatomical origins of Homo sapiens, as well as several significant changes in human behavior. Populations in the MSA practiced a foraging economy, were proficient hunters, and began efficiently utilizing aquatic resources such as shellfish and freshwater fish for the first time. Other significant changes included the elaboration of and increased reliance on symbolic resources, complex technologies, and social learning. For example, the first known externally stored symbols in the form of cross-hatched incised pigments date to 100 thousand years ago. In contexts of similar age, shell beads for making jewelry have been recovered from Morocco and South Africa. The earliest evidence for complex projectiles dates to at least 74 thousand years ago. The meaning, utility, and persistence of symbols and complex technologies depend on social learning and confer advantages in contexts that involve long-distance, complex social networks. While many of these earliest finds linked to behavioral modernity have been geographically restricted, the combined suite of genetic, fossil, and archaeological evidence may better support a pan-African origin for Homo sapiens over the course of the MSA.
Anthropologists have been studying the relationship between mining and the local forms of community that it has created or impacted since at least the 1930s. While the focus of these enquiries has moved with the times, reflecting different political, theoretical, and methodological priorities, much of this work has concentrated on local manifestations of the so-called resource curse or the paradox of plenty. Anthropologists are not the only social scientists who have tried to understand the social, cultural, political, and economic processes that accompany mining and other forms of resource extraction, including oil and gas operations. Geography, economics, and political science are among the many disciplines involved in this field of research. Nor have anthropologists maintained an exclusive claim over the use of ethnographic methods to study the effects of large- or small-scale resource extraction. But anthropologists have generally had a lot more to say about mining and extractive industries in general when they have involved people of non-European descent, especially exploited subalterns—peasants, workers, and indigenous peoples.
The relationship between mining and indigenous people has always been complex. At the most basic level, this stems from the conflicting relationships that miners and indigenous people have to the land and resources that are the focus of extractive activities, or what Marx would call the different relations to the means of production. Where miners see ore bodies and development opportunities that render landscapes productive, civilized, and familiar, local indigenous communities see places of ancestral connection and subsistence provision. This simple binary is frequently reinforced—and somewhat overdrawn—in the popular characterization of the relationship between indigenous people and mining companies, where untrammelled capital devastates hapless tribal people, or what has been aptly described as the “Avatar narrative,” after the 2009 film of the same name.
By the early 21st century, a number of anthropologists were producing ethnographic works that sought to debunk these popular narratives, which obscure the more complex sets of relationships that exist between the cast of different actors who are present in contemporary mining encounters, and the range of contradictory interests and identities that these actors may hold at any one point in time. Resource extraction has a way of surfacing the politics of indigeneity, and anthropologists have paid particular attention to a range of identities, entities, and relationships that emerge in response to new economic opportunities, or what can be called the social relations of compensation. That some indigenous communities deliberately court resource developers as a pathway to economic development does not, of course, deny the asymmetries of power inherent to these settings: even when indigenous communities voluntarily agree to resource extraction, they are seldom signing up to absorb the full range of social and ecological costs that extractive companies so frequently externalize. These imposed costs are rarely balanced by the opportunities to share in the wealth created by mineral development; and for most indigenous people, their experience of large-scale resource extraction has been frustrating and often highly destructive. It is for good reason that analogies are regularly drawn between these deals and the vast store of mythology concerning the person who sells their soul to the devil for wealth that is not only fleeting, but also the harbinger of despair, destruction, and death. This is no easy terrain for ethnographers, and engagement is fraught with difficult ethical, methodological, and ontological challenges.
Anthropologists are involved in these encounters in a variety of ways—as engaged or activist anthropologists, applied researchers and consultants, and independent ethnographers. The focus of these engagements includes environmental transformation and social disintegration, questions surrounding sustainable development—or the uneven distribution of the costs and benefits of mining—the making of company-community agreements, corporate forms and the social responsibilities of corporations (or CSR), labour and livelihoods, conflict and resistance movements, gendered impacts, cultural heritage management, questions of indigeneity, and effects of displacement, to name but a few. These different forms of engagement raise important questions concerning positionality, and how this influences the production of knowledge—an issue that has divided anthropologists working in this contested field. Anthropologists must also grapple with questions concerning good ethnography, or what constitutes a “good enough” account of the relations between indigenous people and the multiple actors assembled in resource extraction contexts.
Susan Brownell and Niko Besnier
Sport offers a unique path to mobility for men—and to a much lesser degree women—who are members of disadvantaged groups and whose options for seeking a better life are otherwise limited. This mobility may be either social class mobility—as in basketball as a way out of racially segregated ghettos in the United States—or geographic mobility—as in the migration of soccer and rugby players from the Global South to the Global North in order to play in professional leagues there. Sport mobility potentially differs from the mobility based on manual and menial labor that is the more common path for such groups because successful professional athletes are regarded as heroes both by urban elites in their transplanted homes and by their compatriots back in their home neighborhoods, villages, and countries. At the same time, the hope of migrating into a successful career is often thwarted by the same structural conditions that thwart ordinary migrants’ mobility.
Different sports are associated with different social values that reflect the race, gender, social class, national, and global structures of power that underpin them. Until the past few decades, sports acquired their social value through a process of distinction in which gender, class, racial, and other differences were exaggerated by strategies of inclusion and exclusion. These differences were most closely guarded in sports organized by exclusive clubs, but they were also defended by other types of organizations such as schools and professional leagues. In the West, where most global sports originated, this produced a system of contrasting relationships between sport meanings: for example, golf, tennis, figure skating, and equestrian sports signified elite social status, while soccer, boxing, and—at least at the elite levels—basketball, baseball, and American football were identified with athletes from poorer socioeconomic backgrounds. In this way, sports produced an embodied social value in the form of the bodies of individual athletes, and until the last decades of the 20th century, this value was largely traded in the realm of symbolic capital and not economic capital—with the exception of a comparatively small number of athletes in professionalized sports. Furthermore, the embodied values of sports varied greatly between localities, nations, and world regions, shaped by the class structure, history, and culture of the body in a given locale.
However, at the end of the 20th century, the embodied values of many—if not most—individual sports became increasingly unmoored from their local, regional, ethnic, or national values and more tightly embroiled in global sport systems that have become increasingly commodified. Team sports, such as soccer, baseball, basketball, rugby, cricket, and ice hockey, and individual sports, such as tennis, golf, track and field, gymnastics, figure skating, and boxing, saw a large increase in the transnational mobility of athletes and coaches. These developments in the sports world reflected broader changes in the global political economy: revenues from television broadcasting rights fees skyrocketed as television networks were privatized and proliferated; corporate sponsorship and advertising expanded along with the new television platforms; increasingly multinational sources of capital (such as corporations and billionaire team owners) were infused into sports; and elite athletes’ salaries, sponsorships, and transfer fees increased vertiginously in the most popular sports and seeped downward in the system. Clubs and teams began searching for talent further and further afield, bringing over players from the developing world. In US college sports (an anomaly on the world scene), the training of children toward the goal of gaining athletic scholarships became a growing industry that has even extended into China. In the Global South, at the same time, neoliberal development policies resulted in the reorganization or, in some cases, destruction of local agriculture and other forms of local production, as well as the social and economic relations that had been attached to them. Young men, who were particularly affected, now had to migrate to find employment and thus achieve the ideal of productive adult masculinity. These two factors produced a remarkable increase in the number of athletes from developing countries seeking employment as professionals in the industrialized world.
For ever greater numbers of athletes, then, the embodied value of the body was no longer limited to symbolic or social capital but was all about economic capital.
The commodification of the sporting body and the transnationalization of the structures that determine its value provide novel and instructive insight into the changing nature of the global political economy since the end of the 20th century.
The anthropology of Islam has for a long time been concerned with questions of rules, orthodoxy, ritual practice, and piety. The idea that Muslim life can be studied through food practice is, therefore, a welcome reprieve from an over-determined association of Muslims and Islam with prayer, austerity, and un-freedom. Bringing Muslim culinary practices into view affords a lens onto the intersections between Islamic discourse, ritual practice, political economy, and changing notions of health, food, and the body in the contemporary world.
Food offers a unique opportunity to explore these overlapping developments, since what we eat is always subject both to past traditions, memory, and family histories as well as contemporary availability, affordability, and desirability of particular items. New ideas about medicine and the body can inform new notions of what counts as good food. Muslims, always in the world, are no different. However, the specific ways in which particular Muslims choose to include, avoid, or desire certain products may offer insights into the local political and economic expressions of Islam.
Halal, meaning permissible, is the name given to meat that is allowed for Muslim consumption. Islamic legal prescriptions evolved from the basic Quranic stipulations towards a complex regional, geographic, and sectarian taxonomy of animals. In practice, however, halal is assured, not through complicated legal discussions, but rather through consumption and trade within Muslim networks. Supply by a fellow Muslim constitutes halal. In the absence of obvious signs or evidence of dubious activity, halal must be assumed. Within Muslim networks of trade and consumption, the unintentional transgression of halal does not accrue sin. The practice of halal has thus been based on a communally charged notion of trust that has always been ripe for the articulation of regional and sectarian differences.
In recent decades, neoliberal developments have transformed the terrain of global food consumption, trade, and supply. Muslims increasingly consume through networks of non-Muslim producers, manufacturers, and suppliers. Advancements in food production technology mean that animal enzymes may end up in seemingly harmless everyday non-meat items. A new regime of halal certification has been established in a bid to standardize and regulate the supply of halal foods, cosmetics, and even tourist services. The new terrain of molecular halal, which relies on DNA testing and on production and supply chain management, has been central to the ubiquity of halal as a label of assurance as well as a marketing tool. Many Muslims, particularly in the developed world, have become aware of the product ingredient listing of their favorite chocolate products and may even search for a certification label on bottled water. However, this development has not been hegemonic. Even in the face of new material and discursive arrangements, Muslims continue to draw on an older ethical basis for practice as they seek to trade, compete, and consume in the contemporary capitalist economy.
In many contexts, the explicit investigation and concern about halal among Muslims is subdued. Different interpretations of Islamic law produce different authoritative notions of what counts as halal. A famous hadith commands companions ignorant of a meat’s provenance to recite the name of God before consumption. A Quranic verse declares the food of the people of the book (ahl-al-kitab, which refers to the Christians and Jews) as halal. Although the certification industry produces arguments to negate these sources, many Muslims continue to draw on them for practice.
Importantly, these instances of thinking beyond halal afford an opportunity to consider how food features in broader Muslim life. Festivals of sacrifice and fasting are focused on the preparation, distribution, and sharing of food. The famous Muslim notion of hospitality is emphasized around festivals and ritual events in different parts of the world, as Muslims articulate sharing and feeding each other as a way of extending God’s grace (barakat). To eat is to remain entangled in relations of reciprocity, friendship, and community. Food cooked in the home is considered of higher value as it carries the well wishes of the host. Indeed, instances of gifting are also opportunities for competition as households outdo each other in the lavishness of preparation and the amounts distributed.
Finally, the theme of ingestion is carried over into medicine. In India, the Unani (lit: Greek) system of medicine links food substances to prophetic sources and complex medical theory. In Africa and Asia, the words of the Quran are handwritten in ink onto wooden boards. The ink-water, once washed away, is consumed as a cure for physical and spiritual ailments. And around the world, newborn babies are offered a taste of honey as part of the ritual name-giving ceremony. In each case, authoritative notions from a discursive tradition of past text and practice are articulated and contested in locally specific ways.
Augustin F. C. Holl
The “Three Age System” devised in the middle of the 19th century framed the general pattern of universal technological evolution. It all started with the use of stone tools in the very long “Stone Age.” The much shorter “Bronze Age” followed, capped by the even shorter “Iron Age.” This evolutionary taxonomy was crafted in Scandinavia, based on evidence from Denmark and, by extension, Europe. Patterns of long-term technological evolution recorded in Africa are at variance with this Stone-Bronze-Iron Age sequence: Africa has no Bronze Age.
The advent of copper and iron metallurgy is one of the most fascinating debates taking place in African archaeology at the beginning of the 21st century. The debate on the origins of African metallurgies has a long history with multiple implications. It is anchored on 19th-century evolutionism and touches on the patterns and pace of technological evolution worldwide. It has also impacted the history of discourses on human progress. As such, it has strong sociopolitical implications. It was used to support the assumption of “African backwardness,” an assumption according to which all important material and institutional inventions and innovations took place elsewhere—in the Near East precisely—and spread from there to Africa through demic or stimulus diffusion.
Does such a scheme capture global human technological history or is it a specific case of local areal development? That is the core of the current debate on the origins of African metallurgy.
A speculative phase, without any input of field data, took place in the 1950s–1960s. It was represented by the interesting exchanges between R. Mauny and H. Lhote. The former was a proponent of metallurgy diffusion and the latter argued for local inventions. For Mauny, metallurgy is such a complex process, requiring sophisticated mastery of elaborate pyrotechnology, that its independent invention anywhere else is totally ruled out. For Lhote, the diversity of African metallurgical practices and traditions is an indication of its local roots. Despite this debate, the dominant view asserted that iron metallurgy was invented in the Anatolian Hittite Empire in the middle of the 2nd millennium BCE (1600–1500 BCE).
Sustained archaeological research was carried out in different parts of the continent from the early 1980s on. Evidence of copper and iron metallurgies was documented in different parts of the continent, in West, Central, and East Africa. Early copper metallurgies were recorded in the Akjoujt region of Mauritania and the Eghazzer basin in Niger. Surprisingly early iron smelting installations were found in the Eghazzer basin (Niger), the middle Senegal valley (Senegal), the Mouhoun Bend (Burkina Faso), the Nsukka region and Taruga (Nigeria), the Great Lakes region in East Africa, and the Djohong (Cameroon) and Ndio (Central African Republic) areas. It is, however, the discoveries from the northern margins of the equatorial rainforest in North-Central Africa, in the northeastern part of the Adamawa Plateau, that radically falsify the “iron technology diffusion” hypothesis. Iron production activities are documented to have taken place as early as 3000–2500 BCE.
Philip Carl Salzman
Pastoralists depend for their livelihood on raising livestock on natural pasture. Livestock may be selected for meat, milk, wool, traction, carriage, or riding, or a combination of these. Pastoralists rarely rely solely on their livestock; they may also engage in hunting, fishing, cultivation, commerce, predatory raiding, or extortion. Some pastoral peoples are nomadic and others are sedentary, while yet others are partially mobile. Economically, some pastoralists are subsistence oriented, while others are market oriented, with others combining the two. Politically, some pastoralists are independent or quasi-independent tribes, while others, largely under the control of states, are peasants, and yet others are citizens engaged in commercial production in modern states.
All pastoralists have to address a common set of issues. The first issue is gaining and taking possession of livestock, including good breeding stock. Ownership of livestock may involve individual, group, or distributed rights. The second concern is managing the livestock through husbandry and herding. Husbandry refers to the selection of animals for breeding and maintenance, while herding involves ensuring that the livestock gains access to adequate pasture and water. Pasture access can be gained through territorial ownership and control, purchase, rent, or patronage. Security must be provided for the livestock through active human oversight or restriction by means of fences or other barriers. Manpower is provided by kin relations, exchange of labor, barter, monetary payment, or some combination.
Prominent pastoral peoples are sheep, goat, and camel herders in the arid band running from North Africa through the Middle East and northwest India; the cattle and small stock herders of Africa south of the Sahara; reindeer herders of sub-Arctic northern Eurasia; the camelid herders of the Andes; and the ranchers of North and South America.