
Article

Paul D. Miller

Afghanistan has twice been thrust front and center of US national security concerns in the past half-century: first, during the Soviet-Afghan War, when Afghanistan served as a proxy for American efforts to combat Soviet influence; and second, as the frontline state and host for America’s global response to al-Qaida’s terrorist attacks of 2001. In both instances, American involvement swung from intensive investment and engagement to withdrawal and neglect. In both cases, American involvement reflected US concerns more than Afghan realities. And both episodes resulted in short-term successes for American security with long-term consequences for Afghanistan and its people. The signing of a strategic partnership agreement between the two countries in 2012 and a bilateral security agreement in 2013 created the possibility of a steadier and more forward-looking relationship—albeit one that the American and Afghan people may be less inclined to pursue as America’s longest war continues to grind on.

Article

In the 20th century, US policymakers often attempted to solve domestic agricultural oversupply problems by extending food aid to foreign recipients. In some instances, the United States donated food in times of natural disasters. In other instances, the United States offered commodities to induce foreign governments to support US foreign policy aims or to spur agricultural modernization. These efforts coalesced during the 1950s with the enactment of Public Law 480, commonly known as the Food for Peace program, which provided for a formal, bureaucratic mechanism for the disbursement of commodities. Throughout the second half of the 20th century, successive presidential administrations continued to deploy commodities to advance their often disparate foreign policy objectives.

Article

During the Holocene, the present geological epoch, an increasing portion of humans began to manipulate the reproduction of plants and animals in a series of environmental practices known as agriculture. No other ecological relationship sustains as many humans as farming; no other has transformed the landscape to the same extent. The domestication of plants by American Indians followed the end of the last glacial maximum (the Ice Age). About eight thousand years ago, the first domesticated maize and squash arrived from central Mexico, spreading to every region and as far north as the subarctic boreal forest. The incursion of Europeans into North America set off widespread deforestation, soil depletion, and the spread of settlement, followed by the introduction of industrial machines and chemicals. A series of institutions sponsored publicly funded research into fertilizers and insecticides. By the late 19th century, writers and activists criticized the technological transformation of farming as destructive to the environment and rural society. During the 20th century, wind erosion contributed to the depopulation of much of the Great Plains. Vast projects in environmental engineering transformed deserts into highly productive regions of intensive fruit and vegetable production. Throughout much of the 19th and 20th centuries, access to land remained limited to whites, with American Indians, African Americans, Latinas/os, Chinese, and peoples of other ethnicities attempting to gain farms or hold on to the land they owned. Two broad periods describe the history of agriculture and the environment in that portion of North America that became the United States. In the first, the environment dominated, forcing humans to adapt as thousands of years of extreme climate variability came to an end. In the second, institutional and technological change became more significant, though the environment remained a constant factor against which American agriculture took shape. A related historical pattern within this shift was the capitalist transformation of the United States. For thousands of years, households sustained themselves and exchanged some of what they produced for money. But during the 19th century, for a majority of American farmers, the production of commodities became the entire purpose of agriculture, transforming environments to reflect commercial opportunity.

Article

Kathleen A. Brosnan and Jacob Blackwell

Throughout history, food needs bonded humans to nature. The transition to agriculture constituted a slow but revolutionary ecological transformation. After 1500 CE, agricultural goods, as well as pests that undermined them, dominated the exchange of species among four continents. In the United States, increasingly commercial efforts simplified ecosystems. Improved technologies and market mechanisms facilitated surpluses in the 19th century that fueled industrialization and urbanization. In the 20th century, industrial agriculture involved expensive machinery and chemical pesticides and fertilizers in pursuit of higher outputs and profits, while consumers’ relations with their food sources and nature became attenuated.

Article

Spanning countries across the globe, the antinuclear movement was the combined effort of millions of people to challenge the superpowers’ reliance on nuclear weapons during the Cold War. Encompassing an array of tactics, from radical dissent to public protest to opposition within the government, this movement succeeded in constraining the arms race and helping to make the use of nuclear weapons politically unacceptable. Antinuclear activists were critical to the establishment of arms control treaties, although they failed to achieve the abolition of nuclear weapons, as anticommunists, national security officials, and proponents of nuclear deterrence within the United States and Soviet Union actively opposed the movement. Opposition to nuclear weapons evolved in tandem with the Cold War and the arms race, leading to a rapid decline in antinuclear activism after the Cold War ended.

Article

From its inception as a nation in 1789, the United States has engaged in an environmental diplomacy that has included attempts to gain control of resources, as well as formal diplomatic efforts to regulate the use of resources shared with other nations and peoples. American environmental diplomacy has sought to gain control of natural resources, to conserve those resources for the future, and to protect environmental amenities from destruction. As an acquirer of natural resources, the United States has focused on arable land as well as on ocean fisheries, although around 1900, the focus on ocean fisheries turned into a desire to conserve marine resources from unregulated harvesting. The main 20th-century U.S. goal was to extend beyond its borders its Progressive-era desire to utilize resources efficiently, meaning the greatest good for the greatest number for the longest time. For most of the 20th century, the United States was the leader in promoting global environmental protection through the best science, especially emphasizing wildlife. Near the end of the century, U.S. government science policy was increasingly out of step with global environmental thinking, and the United States often found itself on the outside. Most notably, the attempts to address climate change moved ahead with almost every country in the world except the United States. While a few monographs focus squarely on environmental diplomacy, it is safe to say that historians have not come close to tapping the potential of the intersection of the environmental and diplomatic history of the United States.

Article

Richard N. L. Andrews

Between 1964 and 2017, the United States adopted the concept of environmental policy as a new focus for a broad range of previously disparate policy issues affecting human interactions with the natural environment. These policies ranged from environmental health, pollution, and toxic exposure to management of ecosystems, resources, and use of the public lands, environmental aspects of urbanization, agricultural practices, and energy use, and negotiation of international agreements to address global environmental problems. In doing so, it nationalized many responsibilities that had previously been considered primarily state or local matters. It changed the United States’ approach to federalism by authorizing new powers for the federal government to set national minimum environmental standards and regulatory frameworks with the states mandated to participate in their implementation and compliance. Finally, it explicitly formalized administrative procedures for federal environmental decision-making with stricter requirements for scientific and economic justification rather than merely administrative discretion. In addition, it greatly increased public access to information and opportunities for input, as well as for judicial review, thus allowing citizen advocates for environmental protection and appreciative uses equal legitimacy with commodity producers to voice their preferences for use of public environmental resources. These policies initially reflected widespread public demand and broad bipartisan support. Over several decades, however, they became flashpoints, first, between business interests and environmental advocacy groups and, subsequently, between increasingly ideological and partisan agendas concerning the role of the federal government. Beginning in the 1980s, the long-standing Progressive ideal of the “public interest” was increasingly supplanted by a narrative of “government overreach,” and the 1990s witnessed campaigns to delegitimize the underlying evidence justifying environmental policies by labeling it “junk science” or a “hoax.” From the 1980s forward, the stated priorities of environmental policy vacillated repeatedly between presidential administrations and Congresses supporting continuation and expansion of environmental protection and preservation policies versus those seeking to weaken or even reverse protections in favor of private-property rights and more damaging uses of resources. Yet despite these apparent shifts, the basic environmental laws and policies enacted during the 1970s remained largely in place: political gridlock, in effect, maintained the status quo, with the addition of a very few innovations such as “cap and trade” policies. One reason was that environmental policies retained considerable latent public support: in electoral campaigns, they were often overshadowed by economic and other issues, but they still aroused widespread support in their defense when threatened. Another reason was that decisions by the courts also continued to reaffirm many existing policies and to reject attempts to dismantle them. With the election of Donald Trump in 2016, along with conservative majorities in both houses of Congress, US environmental policy came under the most hostile and wide-ranging attack since its origins. 
More than almost any other issue, the incoming president targeted environmental policy for rhetorical attacks and budget cuts, and sought to eradicate the executive policies of his predecessor, weaken or rescind protective regulations, and undermine the regulatory and even the scientific capacity of the federal environmental agencies. In the early 21st century, it is as yet unclear how much of his agenda will actually be accomplished, or whether, as in past attempts, much of it will ultimately be blocked by Congress, the courts, public backlash, and business and state government interests seeking stable policy expectations rather than disruptive deregulation.

Article

On the eve of World War II many Americans were reluctant to see the United States embark on overseas involvements. Yet the Japanese attack on the U.S. Pacific fleet at Pearl Harbor on December 7, 1941, seemingly united the nation in determination to achieve total victory in Asia and Europe. Underutilized industrial plants expanded to full capacity producing war materials for the United States and its allies. Unemployment was sucked up by the armed services and war work. Many Americans’ standard of living improved, and the United States became the wealthiest nation in world history. Over time, this proud record became magnified into the “Good War” myth that has distorted America’s very real achievement. As the era of total victories receded and the United States went from leading creditor to debtor nation, the 1940s appeared as a golden age when everything worked better, people were united, and the United States saved the world for democracy (an exaggeration that ignored the huge contributions of America’s allies, including the British Empire, the Soviet Union, and China). In fact, during World War II the United States experienced marked class, sex and gender, and racial tensions. Groups such as gays made some social progress, but the poor, especially many African Americans, were left behind. After being welcomed into the work force, women were pressured to go home when veterans returned looking for jobs in late 1945–1946, losing many of the gains they had made during the conflict. Wartime prosperity stunted the development of a welfare state; universal medical care and social security were cast as unnecessary. Combat had been a horrific experience, leaving many casualties with major physical or emotional wounds that took years to heal. Like all major global events, World War II was complex and nuanced, and it requires careful interpretation.

Article

The first forty years of cinema in the United States, from the development and commercialization of modern motion picture technology in the mid-1890s to the full blossoming of sound-era Hollywood during the early 1930s, represents one of the most consequential periods in the history of the medium. It was a time of tremendous artistic and economic transformation, including but not limited to the storied transition from silent motion pictures to “the talkies” in the late 1920s. Though the nomenclature of the silent era implies a relatively unified period in film history, the years before the transition to sound saw a succession of important changes in film artistry and its means of production, and film historians generally regard the epoch as divided into at least three separate and largely distinct temporalities. During the period of early cinema, which lasted about a decade from the medium’s emergence in the mid-1890s through the middle years of the new century’s first decade, motion pictures existed primarily as a novelty amusement presented in vaudeville theatres and carnival fairgrounds. Film historians Tom Gunning and André Gaudreault have famously defined the aesthetic of this period as a “cinema of attractions,” in which the technology of recording and reproducing the world, along with the new ways in which it could frame, orient, and manipulate time and space, marked the primary concerns of the medium’s artists and spectators. A transitional period followed from around 1907 to the later 1910s when changes in the distribution model for motion pictures enabled the development of purpose-built exhibition halls and led to a marked increase in demand for the entertainment. On a formal and artistic level, the period saw a rise in the prominence of the story film and widespread experimentation with new techniques of cinematography and editing, many of which would become foundational to later cinematic style. The era also witnessed the introduction and growing prominence of feature-length filmmaking over narrative shorts. The production side was marked by intensifying competition between the original American motion picture studios based in and around New York City, several of which attempted to cement their influence by forming an oligopolistic trust, and a number of upstart “independent” West Coast studios located around Los Angeles. Both the artistic and production trends of the transitional period came to a head during the classical era that followed, when the visual experimentation of the previous years consolidated into the “classical style” favored by the major studios, and the competition between East Coast and West Coast studios resolved definitively in favor of the latter. This was the era of Hollywood’s ascendance over domestic filmmaking in the United States and its growing influence over worldwide film markets, due in part to the decimation of the European film industry during World War I. After nearly a decade of dominance, the Hollywood studio system was so refined that the advent of marketable synchronized sound technology around 1927 produced relatively few upheavals among the coterie of top studios. Rather, the American film industry managed to reorient itself around the production of talking motion pictures so swiftly that silent film production in the United States had effectively ceased at any appreciable scale by 1929. 
Artistically, the early years of “the talkies” proved challenging, as filmmakers struggled with the imperfections of early recording technology and the limitations they imposed on filmmaking practice. But filmgoing remained popular in the United States even during the depths of the Great Depression, and by the early 1930s a combination of improved technology and artistic adaptation led to such a marked increase in quality that many film historians regard the period to be the beginning of Hollywood’s Golden Era. With a new voluntary production code put in place to respond to criticism of immorality in Hollywood fare, the American film industry was poised by the early 1930s to solidify its prominent position in American cultural life.

Article

Joshua Gleich

Over the past seventy years, the American film industry has transformed from mass-producing movies to producing a limited number of massive blockbuster movies on a global scale. Hollywood film studios have moved from independent companies to divisions of media conglomerates. Theatrical attendance for American audiences has plummeted since the mid-1940s; nonetheless, American films have never been more profitable. In 1945, American films could only be viewed in theaters; now they are available in myriad forms of home viewing. Throughout, Hollywood has continued to dominate global cinema, although film and now video production reaches Americans in many other forms, from home videos to educational films. Amid declining attendance, the Supreme Court in 1948 forced the major studios to sell off their theaters. Hollywood studios instead focused their power on distribution, limiting the supply of films and focusing on expensive productions to sell on an individual basis to theaters. Growing production costs and changing audiences caused wild fluctuations in profits, leading to an industry-wide recession in the late 1960s. The studios emerged under new corporate ownership and honed their blockbuster strategy, releasing “high concept” films widely on the heels of television marketing campaigns. New technologies such as cable and VCRs offered new windows for Hollywood movies beyond theatrical release, reducing the risks of blockbuster production. Deregulation through the 1980s and 1990s allowed for the “Big Six” media conglomerates to join film, theaters, networks, publishing, and other related media outlets under one corporate umbrella. This has expanded the scale and stability of Hollywood revenue while reducing the number and diversity of Hollywood films, as conglomerates focus on film franchises that can thrive on various digital media. Technological change has also lowered the cost of non-Hollywood films and thus encouraged a range of alternative forms of filmmaking, distribution, and exhibition.

Article

The first half of the 20th century saw extraordinary changes in the ways Americans produced, procured, cooked, and ate food. Exploding food production easily outstripped population growth in this era as intensive plant and animal breeding, the booming use of synthetic fertilizers and pesticides, and technological advances in farm equipment all resulted in dramatically greater yields on American farms. At the same time, a rapidly growing transportation network of refrigerated ships, railroads, and trucks hugely expanded the reach of different food crops and increased the variety of foods consumers across the country could buy, even as food imports from other countries soared. Meanwhile, new technologies, such as mechanical refrigeration, reliable industrial canning, and, by the end of the era, frozen foods, subtly encouraged Americans to eat less locally and seasonally than ever before. Yet as American food became more abundant and more affordable, diminishing want and suffering, it also contributed to new problems, especially rising body weights and mounting rates of cardiac disease. American taste preferences themselves changed throughout the era as more people came to expect stronger flavors, grew accustomed to the taste of industrially processed foods, and sampled so-called “foreign” foods, which played an enormous role in defining 20th-century American cuisine. Food marketing exploded, and food companies invested ever greater sums in print and radio advertising and eye-catching packaging. At home, a range of appliances made cooking easier, and modern grocery stores and increasing car ownership made it possible for Americans to food shop less frequently. Home economics provided Americans, especially girls and women, with newly scientific and managerial approaches to cooking and home management, and Americans as a whole increasingly approached food through the lens of science. Virtually all areas related to food saw fundamental shifts in the first half of the 20th century, from agriculture to industrial processing, from nutrition science to weight-loss culture, from marketing to transportation, and from kitchen technology to cuisine. Not everything about food changed in this era, but the rapid pace of change likely magnified the sense of transformation for the many Americans who experienced it.

Article

American Indian activism after 1945 was as much a part of the larger, global decolonization movement rooted in centuries of imperialism as it was a direct response to the ethos of civic nationalism and integration that had gained momentum in the United States following World War II. This ethos manifested itself in the disastrous federal policies of termination and relocation, which sought to end federal services to recognized Indian tribes and encourage Native people to leave reservations for cities. In response, tribal leaders from throughout Indian Country formed the National Congress of American Indians (NCAI) in 1944 to litigate and lobby for the collective well-being of Native peoples. The NCAI was the first intertribal organization to embrace the concepts of sovereignty, treaty rights, and cultural preservation—principles that continue to guide Native activists today. As American Indian activism grew increasingly militant in the late 1960s and 1970s, civil disobedience, demonstrations, and takeovers became the preferred tactics of “Red Power” organizations such as the National Indian Youth Council (NIYC), the Indians of All Tribes, and the American Indian Movement (AIM). At the same time, others established more focused efforts that employed less confrontational methods. For example, the Native American Rights Fund (NARF) served as a legal apparatus that represented Native nations, using the courts to protect treaty rights and expand sovereignty; the Council of Energy Resource Tribes (CERT) sought to secure greater returns on the mineral wealth found on tribal lands; and the American Indian Higher Education Consortium (AIHEC) brought Native educators together to work for greater self-determination and culturally rooted curricula in Indian schools. While the more militant of these organizations and efforts have withered, those that have exploited established channels have grown and flourished. Such efforts will no doubt continue into the foreseeable future so long as the state of Native nations remains uncertain.

Article

Early 20th century American labor and working-class history is a subfield of American social history that focuses attention on the complex lives of working people in a rapidly changing global political and economic system. Once focused closely on institutional dynamics in the workplace and electoral politics, labor history has expanded and refined its approach to include questions about the families, communities, identities, and cultures workers have developed over time. With a critical eye on the limits of liberal capitalism and democracy for workers’ welfare, labor historians explore individual and collective struggles against exclusion from opportunity, as well as accommodation to political and economic contexts defined by rapid and volatile growth and deep inequality. Particularly important are the ways that workers both defined and were defined by differences of race, gender, ethnicity, class, and place. Individual workers and organized groups of working Americans both transformed and were transformed by the main struggles of the industrial era, including conflicts over the place of former slaves and their descendants in the United States, mass immigration and migrations, technological change, new management and business models, the development of a consumer economy, the rise of a more active federal government, and the evolution of popular culture. The period between 1896 and 1945 saw a crucial transition in the labor and working-class history of the United States. At its outset, Americans were working many more hours a day than the eight for which they had fought hard in the late 19th century. On average, Americans labored fifty-four to sixty-three hours per week in dangerous working conditions (approximately 35,000 workers died in accidents annually at the turn of the century). By 1920, half of all Americans lived in growing urban neighborhoods, and for many of them chronic unemployment, poverty, and deep social divides had become a regular part of life. Workers had little power in either the Democratic or Republican party. They faced a legal system that gave them no rights at work but the right to quit, judges who took the side of employers in the labor market by issuing thousands of injunctions against even nonviolent workers’ organizing, and vigilantes and police forces that did not hesitate to repress dissent violently. The ranks of organized labor were shrinking in the years before the economy began to recover in 1897. Dreams of a more democratic alternative to wage labor and corporate-dominated capitalism had been all but destroyed. Workers struggled to find their place in an emerging consumer-oriented culture that assumed everyone ought to strive for the often unattainable, and not necessarily desirable, marks of middle-class respectability. Yet American labor emerged from World War II with the main sectors of the industrial economy organized, with greater earning potential than any previous generation of American workers, and with unprecedented power as an organized interest group that could appeal to the federal government to promote its welfare. Though American workers as a whole had made no grand challenge to the nation’s basic corporate-centered political economy in the preceding four and one-half decades, they entered the postwar world with a greater level of power, and a bigger share in the proceeds of a booming economy, than anyone could have imagined in 1896. 
The labor and working-class history of the United States between 1900 and 1945, then, is the story of how working-class individuals, families, and communities—members of an extremely diverse American working class—managed to carve out positions of political, economic, and cultural influence, even as they remained divided among themselves, dependent upon corporate power, and increasingly invested in an individualistic, competitive, acquisitive culture.

Article

The story of mass culture from 1900 to 1945 is the story of its growth and increasing centrality to American life. Sparked by the development of such new media as radios, phonographs, and cinema that required less literacy and formal education, and the commodification of leisure pursuits, mass culture extended its purview to nearly the entire nation by the end of the Second World War. In the process, it became one way in which immigrant and second-generation Americans could learn about the United States and stake a claim to participation in civic and social life. Mass culture characteristically consisted of artifacts that stressed pleasure, sensation, and glamor rather than, as had previously been the case, eternal and ethereal beauty, moral propriety, and personal transcendence. It had the power to determine acceptable values and beliefs and define qualities and characteristics of social groups. The constant and graphic stimulation that mass culture provided led many custodians of culture to worry about its effects and about a breakdown in social morality that would surely follow. As a result, they formed regulatory agencies and watchdogs to monitor the mass culture available on the market. Other critics charged the regime of mass culture with inducing homogenization of belief and practice and contributing to passive acceptance of the status quo. The spread of mass culture did not terminate regional, class, or racial cultures; indeed, mass culture artifacts often borrowed from them. Nor did marginalized groups accept stereotypical portrayals; rather, they worked to expand the possibilities of prevailing ones and to provide alternatives.

Article

American activists who challenged South African apartheid during the Cold War era extended their opposition to racial discrimination in the United States into world politics. US antiapartheid organizations worked in solidarity with forces struggling against the racist regime in South Africa and played a significant role in the global antiapartheid movement. More than four decades of organizing preceded the legislative showdown of 1986, when a bipartisan coalition in Congress overrode President Ronald Reagan’s veto to enact economic sanctions against the apartheid regime in South Africa. Adoption of sanctions by the United States, along with transnational solidarity with the resistance to apartheid by South Africans, helped prompt the apartheid regime to relinquish power and allow the democratic elections that brought Nelson Mandela and the African National Congress to power in 1994. Drawing on the tactics, strategies, and moral authority of the civil rights movement, antiapartheid campaigners mobilized public opinion while increasing African American influence in the formulation of US foreign policy. Long-lasting organizations such as the American Committee on Africa and TransAfrica called for boycotts and divestment while lobbying for economic sanctions. Utilizing tactics such as rallies, demonstrations, and nonviolent civil disobedience actions, antiapartheid activists made their voices heard on college campuses, in corporate boardrooms, in municipal and state governments, and in the halls of Congress. Cultural expressions of criticism and resistance served to reinforce public sentiment against apartheid. Novels, plays, movies, and music provided a way for Americans to connect to the struggles of those suffering under apartheid. By extending the moral logic of the movement for African American civil rights, American antiapartheid activists created a multicultural coalition that brought about institutional and governmental divestment from apartheid, prompted Congress to impose economic sanctions on South Africa, and increased the influence of African Americans regarding issues of race and American foreign policy.

Article

Radio debuted as a wireless alternative to telegraphy in the late 19th century. At its inception, wireless technology could only transmit signals and was incapable of broadcasting actual voices. During the 1920s, however, it transformed into a medium primarily identified as one used for entertainment and informational broadcasting. The commercialization of American broadcasting, which included the establishment of national networks and reliance on advertising to generate revenue, became the so-called American system of broadcasting. This transformation demonstrates how technology is shaped by the dynamic forces of the society in which it is embedded. Broadcasting’s aural attributes also engaged listeners in a way that distinguished it from other forms of mass media. Cognitive processes triggered by the disembodied voices and sounds emanating from radio’s loudspeakers illustrate how listeners, grounded in particular social, cultural, economic, and political contexts, made sense of and understood the content with which they were engaged. Through the 1940s, difficulties in expanding the international radio presence of the United States further highlight the significance of surrounding contexts in shaping the technology and in promoting (or discouraging) listener engagement with programming content.

Article

As places of dense habitation, cities have always required coordination and planning. City planning has involved the design and construction of large-scale infrastructure projects to provide basic necessities such as a water supply and drainage. By the 1850s, immigration and industrialization were fueling the rise of big cities, creating immense, collective problems of epidemics, slums, pollution, gridlock, and crime. From the 1850s to the 1900s, both local governments and utility companies responded to this explosive physical and demographic growth by constructing a “networked city” of modern technologies such as gaslight, telephones, and electricity. Building the urban environment also became a wellspring of innovation in science, medicine, and administration. In 1909–1910, a revolutionary idea—comprehensive city planning—opened a new era of professionalization and institutionalization in the planning departments of city halls and universities. Over the next thirty-five years, however, wars and depression limited their influence. The period from 1945 to 1965, in contrast, represents the golden age of formal planning. During this unprecedented period of peace and prosperity, academically trained experts played central roles in the modernization of the inner cities and the sprawl of the suburbs. But the planners’ clean-sweep approach to urban renewal and the massive destruction caused by highway construction provoked a revolt of the grassroots. Beginning in the Watts district of Los Angeles in 1965, mass uprisings escalated over the next three years into a national crisis of social disorder, racial and ethnic inequality, and environmental injustice. The postwar consensus of theory and practice was shattered, replaced by a fragmented profession ranging from defenders of top-down systems of computer-generated simulations to proponents of advocacy planning from the bottom up. Since the late 1980s, the ascendancy of public-private partnerships in building the urban environment has favored the planners promoting systems approaches, who promise a future of high-tech “smart cities” under their complete control.

Article

The American War for Independence lasted eight years. It was one of the longest and bloodiest wars in America’s history, and yet it was not such a protracted conflict merely because the might of the British armed forces was brought to bear on the hapless colonials. The many divisions among Americans themselves over whether to fight, what to fight for, and who would do the fighting often had tragic and violent consequences. The Revolutionary War was by any measure the first American civil war. Yet national narratives of the Revolution and even much of the scholarship on the era focus more on simple stories of a contest between the Patriots and the British. Loyalists and other opponents of the Patriots are routinely left out of these narratives, or given short shrift. So, too, are the tens of thousands of ordinary colonists—perhaps a majority of the population—who were disaffected or alienated from either side or who tried to tack between the two main antagonists to make the best of a bad situation. Historians now estimate that as many as three-fifths of the colonial population were neither active Loyalists nor Patriots. When we take the war seriously and begin to think about narratives that capture the experience of the many, rather than the few, an illuminating picture emerges. The remarkably wide scope of the activities of the disaffected during the war—ranging from nonpayment of taxes to draft dodging and even to armed resistance to protect their neutrality—has to be integrated with older stories of militant Patriots and timid Loyalists. Only then can we understand the profound consequences of disaffection—particularly in creating divisions within the states, increasing levels of violence, prolonging the war, and changing the nature of the political settlements in each state. Indeed, the very divisions among diverse Americans that made the War for Independence so long, bitter, and bloody also explain much of the Revolutionary energy of the period. Though it is not as seamless as traditional narratives of the Revolution would suggest, a more complicated story also helps better explain the many problems the new states and eventually the new nation would face. In making this argument, we may finally suggest ways we can overcome what John Shy long ago noted as the tendency of scholars to separate the “destructive” War for Independence from the “constructive” political Revolution.

Article

On January 5, 2014—the fiftieth anniversary of President Lyndon Johnson’s launch of the War on Poverty—the New York Times asked a panel of opinion leaders a simple question: “Does the U.S. Need Another War on Poverty?” While the answers varied, all the invited debaters accepted the martial premise of the question—that a war on poverty had been fought and that eliminating poverty was, without a doubt, a “fight,” or a “battle.” Yet the debate over the manner—martial or not—by which the federal government and public policy have dealt with the issue of poverty in the United States is still very much an open-ended one. The evolution and development of the postwar American welfare state is a story not only of a number of “wars,” or individual political initiatives, against poverty, but also of the growth of institutions within and outside government that seek to address, alleviate, and eliminate poverty and its concomitant social ills. It is a complex and at times messy story, interwoven with the wider historical trajectory of this period: civil rights, the rise and fall of a “Cold War consensus,” the emergence of a counterculture, the Vietnam War, the credibility gap, the rise of conservatism, the end of “welfare,” and the emergence of compassionate conservatism. Mirroring the broader organization of the American political system, with a relatively weak center of power and delegated authority and decision-making in fifty states, the welfare model has developed and grown over decades. Policies viewed in one era as unmitigated failures have instead over time evolved and become part of the fabric of the welfare state.

Article

The foreign relations of the Jacksonian age reflected Andrew Jackson’s own sense of the American “nation” as long victimized by non-white enemies and weak politicians. His goal as president from 1829 to 1837 was to restore white Americans’ “sovereignty,” to empower them against other nations both within and beyond US territory. Three priorities emerged from this conviction. First, Jackson was determined to deport the roughly 50,000 Creeks, Cherokees, Choctaws, Chickasaws, and Seminoles living in southern states and territories. He saw them as hostile nations who threatened American safety and checked American prosperity. Far from a domestic issue, Indian Removal was an imperial project that set the stage for later expansion over continental and oceanic frontiers. Second and somewhat paradoxically, Jackson sought better relations with Great Britain. These were necessary because the British Empire was both the main threat to US expansion and the biggest market for slave-grown exports from former Indian lands. Anglo-American détente changed investment patterns and economic development throughout the Western Hemisphere, encouraging American leaders to appease London even when patriotic passions argued otherwise. Third, Jackson wanted to open markets and secure property rights around the globe, by treaty if possible but by force when necessary. He called for a larger navy, pressed countries from France to Mexico for outstanding debts, and embraced retaliatory strikes on “savages” and “pirates” as far away as Sumatra. Indeed, the Jacksonian age brought a new American presence in the Pacific. By the mid-1840s the United States was the dominant power in the Hawaiian Islands and a growing force in China. The Mexican War that followed made the Union a two-ocean colossus—and pushed its regional tensions to the breaking point.