Kathleen A. Brosnan and Jacob Blackwell
Throughout history, the need for food has bonded humans to nature. The transition to agriculture constituted a slow but revolutionary ecological transformation. After 1500
Patricio N. Abinales
An enduring resilience characterizes the Philippine–American relationship for several reasons. For one, theirs was an unusual colonial relationship wherein the United States took control of the Philippines from the Spanish and then shared power with an emergent Filipino elite, introduced suffrage, implemented public education, and promised eventual national independence. A shared experience fighting the Japanese in World War II and defeating a postwar communist rebellion further cemented the “special relationship” between the two countries. The United States took advantage of this partnership to compel the Philippines to sign economic and military treaties that favored American businesses and the U.S. military, respectively. Filipino leaders not only accepted the realities of this strategic game and exploited every opening to assert national interests but also benefitted from American largesse. Under the dictatorship of President Ferdinand Marcos, this mutual cadging was at its most brazen. As a result, the military alliance suffered when the Philippines terminated the agreement, and the United States considerably reduced its support to the country. But the estrangement did not last long, and both countries rekindled the “special relationship” in response to the U.S. “Global War on Terror” and, of late, Chinese military aggression in the West Philippine Sea.
Spanning countries across the globe, the antinuclear movement was the combined effort of millions of people to challenge the superpowers’ reliance on nuclear weapons during the Cold War. Encompassing an array of tactics, from radical dissent to public protest to opposition within the government, this movement succeeded in constraining the arms race and helping to make the use of nuclear weapons politically unacceptable. Antinuclear activists were critical to the establishment of arms control treaties, although they failed to achieve the abolition of nuclear weapons, as anticommunists, national security officials, and proponents of nuclear deterrence within the United States and Soviet Union actively opposed the movement. Opposition to nuclear weapons evolved in tandem with the Cold War and the arms race, leading to a rapid decline in antinuclear activism after the Cold War ended.
From its inception as a nation in 1789, the United States has engaged in an environmental diplomacy that has included attempts to gain control of resources, as well as formal diplomatic efforts to regulate the use of resources shared with other nations and peoples. American environmental diplomacy has sought to gain control of natural resources, to conserve those resources for the future, and to protect environmental amenities from destruction. As an acquirer of natural resources, the United States has focused on arable land as well as on ocean fisheries, although around 1900, the focus on ocean fisheries turned into a desire to conserve marine resources from unregulated harvesting.
The main 20th-century U.S. goal was to extend beyond its borders its Progressive-era desire to utilize resources efficiently, meaning the greatest good for the greatest number for the longest time. For most of the 20th century, the United States was the leader in promoting global environmental protection through the best science, especially emphasizing wildlife. Near the end of the century, U.S. government science policy was increasingly out of step with global environmental thinking, and the United States often found itself on the outside. Most notably, the attempts to address climate change moved ahead with almost every country in the world except the United States.
While a few monographs focus squarely on environmental diplomacy, it is safe to say that historians have not come close to tapping the potential of the intersection of the environmental and diplomatic history of the United States.
Richard N. L. Andrews
Between 1964 and 2017, the United States adopted the concept of environmental policy as a new focus for a broad range of previously disparate policy issues affecting human interactions with the natural environment. These policies ranged from environmental health, pollution, and toxic exposure to management of ecosystems, resources, and use of the public lands, environmental aspects of urbanization, agricultural practices, and energy use, and negotiation of international agreements to address global environmental problems. In doing so, it nationalized many responsibilities that had previously been considered primarily state or local matters. It changed the United States’ approach to federalism by authorizing new powers for the federal government to set national minimum environmental standards and regulatory frameworks with the states mandated to participate in their implementation and compliance. Finally, it explicitly formalized administrative procedures for federal environmental decision-making with stricter requirements for scientific and economic justification rather than merely administrative discretion. In addition, it greatly increased public access to information and opportunities for input, as well as for judicial review, thus allowing citizen advocates for environmental protection and appreciative uses equal legitimacy with commodity producers to voice their preferences for use of public environmental resources.
These policies initially reflected widespread public demand and broad bipartisan support. Over several decades, however, they became flashpoints, first, between business interests and environmental advocacy groups and, subsequently, between increasingly ideological and partisan agendas concerning the role of the federal government. Beginning in the 1980s, the long-standing Progressive ideal of the “public interest” was increasingly supplanted by a narrative of “government overreach,” and the 1990s witnessed campaigns to delegitimize the underlying evidence justifying environmental policies by labeling it “junk science” or a “hoax.”
From the 1980s forward, the stated priorities of environmental policy vacillated repeatedly between presidential administrations and Congresses supporting continuation and expansion of environmental protection and preservation policies versus those seeking to weaken or even reverse protections in favor of private-property rights and more damaging uses of resources. Yet despite these apparent shifts, the basic environmental laws and policies enacted during the 1970s remained largely in place: political gridlock, in effect, maintained the status quo, with the addition of a very few innovations such as “cap and trade” policies. One reason was that environmental policies retained considerable latent public support: in electoral campaigns, they were often overshadowed by economic and other issues, but they still aroused widespread support in their defense when threatened. Another reason was that decisions by the courts also continued to reaffirm many existing policies and to reject attempts to dismantle them.
With the election of Donald Trump in 2016, along with conservative majorities in both houses of Congress, US environmental policy came under the most hostile and wide-ranging attack since its origins. More than almost any other issue, the incoming president targeted environmental policy for rhetorical attacks and budget cuts, and sought to eradicate the executive policies of his predecessor, weaken or rescind protective regulations, and undermine the regulatory and even the scientific capacity of the federal environmental agencies. In the early 21st century, it is as yet unclear how much of his agenda will actually be accomplished, or whether, as in past attempts, much of it will ultimately be blocked by Congress, the courts, public backlash, and business and state government interests seeking stable policy expectations rather than disruptive deregulation.
Michael C. C. Adams
On the eve of World War II many Americans were reluctant to see the United States embark on overseas involvements. Yet the Japanese attack on the U.S. Pacific fleet at Pearl Harbor on December 7, 1941, seemingly united the nation in determination to achieve total victory in Asia and Europe. Underutilized industrial plants expanded to full capacity producing war materials for the United States and its allies. Unemployment was sucked up by the armed services and war work. Many Americans’ standard of living improved, and the United States became the wealthiest nation in world history.
Over time, this proud record became magnified into the “Good War” myth that has distorted America’s very real achievement. As the era of total victories receded and the United States went from leading creditor to debtor nation, the 1940s appeared as a golden age when everything worked better, people were united, and the United States saved the world for democracy (an exaggeration that ignored the huge contributions of America’s allies, including the British Empire, the Soviet Union, and China). In fact, during World War II the United States experienced marked class, sex and gender, and racial tensions. Groups such as gays made some social progress, but the poor, especially many African Americans, were left behind. After being welcomed into the work force, women were pressured to go home when veterans returned looking for jobs in late 1945–1946, losing many of the gains they had made during the conflict. Wartime prosperity stunted the development of a welfare state; universal medical care and social security were cast as unnecessary. Combat had been a horrific experience, leaving many casualties with major physical or emotional wounds that took years to heal. Like all major global events, World War II was complex and nuanced, and it requires careful interpretation.
The first forty years of cinema in the United States, from the development and commercialization of modern motion picture technology in the mid-1890s to the full blossoming of sound-era Hollywood during the early 1930s, represents one of the most consequential periods in the history of the medium. It was a time of tremendous artistic and economic transformation, including but not limited to the storied transition from silent motion pictures to “the talkies” in the late 1920s.
Though the nomenclature of the silent era implies a relatively unified period in film history, the years before the transition to sound saw a succession of important changes in film artistry and its means of production, and film historians generally regard the epoch as divided into at least three separate and largely distinct temporalities. During the period of early cinema, which lasted about a decade from the medium’s emergence in the mid-1890s through the middle years of the new century’s first decade, motion pictures existed primarily as a novelty amusement presented in vaudeville theatres and carnival fairgrounds. Film historians Tom Gunning and André Gaudreault have famously defined the aesthetic of this period as a “cinema of attractions,” in which the technology of recording and reproducing the world, along with the new ways in which it could frame, orient, and manipulate time and space, marked the primary concerns of the medium’s artists and spectators.
A transitional period followed from around 1907 to the later 1910s when changes in the distribution model for motion pictures enabled the development of purpose-built exhibition halls and led to a marked increase in demand for the entertainment. On a formal and artistic level, the period saw a rise in the prominence of the story film and widespread experimentation with new techniques of cinematography and editing, many of which would become foundational to later cinematic style. The era also witnessed the introduction and growing prominence of feature-length filmmaking over narrative shorts. The production side was marked by intensifying competition between the original American motion picture studios based in and around New York City, several of which attempted to cement their influence by forming an oligopolistic trust, and a number of upstart “independent” West Coast studios located around Los Angeles.
Both the artistic and production trends of the transitional period came to a head during the classical era that followed, when the visual experimentation of the previous years consolidated into the “classical style” favored by the major studios, and the competition between East Coast and West Coast studios resolved definitively in favor of the latter. This was the era of Hollywood’s ascendance over domestic filmmaking in the United States and its growing influence over worldwide film markets, due in part to the decimation of the European film industry during World War I. After nearly a decade of dominance, the Hollywood studio system was so refined that the advent of marketable synchronized sound technology around 1927 produced relatively few upheavals among the coterie of top studios. Rather, the American film industry managed to reorient itself around the production of talking motion pictures so swiftly that silent film production in the United States had effectively ceased at any appreciable scale by 1929.
Artistically, the early years of “the talkies” proved challenging, as filmmakers struggled with the imperfections of early recording technology and the limitations they imposed on filmmaking practice. But filmgoing remained popular in the United States even during the depths of the Great Depression, and by the early 1930s a combination of improved technology and artistic adaptation led to such a marked increase in quality that many film historians regard the period to be the beginning of Hollywood’s Golden Era. With a new voluntary production code put in place to respond to criticism of immorality in Hollywood fare, the American film industry was poised by the early 1930s to solidify its prominent position in American cultural life.
Over the past seventy years, the American film industry has transformed from mass-producing movies to producing a limited number of massive blockbuster movies on a global scale. Hollywood film studios have moved from independent companies to divisions of media conglomerates. Theatrical attendance for American audiences has plummeted since the mid-1940s; nonetheless, American films have never been more profitable. In 1945, American films could only be viewed in theaters; now they are available in myriad forms of home viewing. Throughout, Hollywood has continued to dominate global cinema, although film and now video production reaches Americans in many other forms, from home videos to educational films.
Amid declining attendance, the Supreme Court in 1948 forced the major studios to sell off their theaters. Hollywood studios instead focused their power on distribution, limiting the supply of films and focusing on expensive productions to sell on an individual basis to theaters. Growing production costs and changing audiences caused wild fluctuations in profits, leading to an industry-wide recession in the late 1960s. The studios emerged under new corporate ownership and honed their blockbuster strategy, releasing “high concept” films widely on the heels of television marketing campaigns. New technologies such as cable and VCRs offered new windows for Hollywood movies beyond theatrical release, reducing the risks of blockbuster production. Deregulation through the 1980s and 1990s allowed for the “Big Six” media conglomerates to join film, theaters, networks, publishing, and other related media outlets under one corporate umbrella. This has expanded the scale and stability of Hollywood revenue while reducing the number and diversity of Hollywood films, as conglomerates focus on film franchises that can thrive on various digital media. Technological change has also lowered the cost of non-Hollywood films and thus encouraged a range of alternative forms of filmmaking, distribution, and exhibition.
Helen Zoe Veit
The first half of the 20th century saw extraordinary changes in the ways Americans produced, procured, cooked, and ate food. Exploding food production easily outstripped population growth in this era as intensive plant and animal breeding, the booming use of synthetic fertilizers and pesticides, and technological advances in farm equipment all resulted in dramatically greater yields on American farms. At the same time, a rapidly growing transportation network of refrigerated ships, railroads, and trucks hugely expanded the reach of different food crops and increased the variety of foods consumers across the country could buy, even as food imports from other countries soared. Meanwhile, new technologies, such as mechanical refrigeration, reliable industrial canning, and, by the end of the era, frozen foods, subtly encouraged Americans to eat less locally and seasonally than ever before. Yet as American food became more abundant and more affordable, diminishing want and suffering, it also contributed to new problems, especially rising body weights and mounting rates of cardiac disease.
American taste preferences themselves changed throughout the era as more people came to expect stronger flavors, grew accustomed to the taste of industrially processed foods, and sampled so-called “foreign” foods, which played an enormous role in defining 20th-century American cuisine. Food marketing exploded, and food companies invested ever greater sums in print and radio advertising and eye-catching packaging. At home, a range of appliances made cooking easier, and modern grocery stores and increasing car ownership made it possible for Americans to shop for food less frequently. Home economics provided Americans, especially girls and women, with newly scientific and managerial approaches to cooking and home management, and Americans as a whole increasingly approached food through the lens of science. Virtually all areas related to food saw fundamental shifts in the first half of the 20th century, from agriculture to industrial processing, from nutrition science to weight-loss culture, from marketing to transportation, and from kitchen technology to cuisine. Not everything about food changed in this era, but the rapid pace of change probably exaggerated the transformations for the many Americans who experienced them.
Foreign economic policy involves the mediation and management of economic flows across borders. Over two and a half centuries, the context for U.S. foreign economic policy has transformed. Once a fledgling republic on the periphery of the world economy, the United States has become the world’s largest economy, the arbiter of international economic order, and a predominant influence on the global economy. Throughout this transformation, the making of foreign economic policy has entailed delicate tradeoffs between diverse interests—political and material, foreign and domestic, sectional and sectoral, and so on. Ideas and beliefs have also shaped U.S. foreign economic policy—from Enlightenment-era convictions about the pacifying effects of international commerce to late 20th-century convictions about the efficacy of free markets.
The Soviet Union’s successful launch of the first artificial satellite Sputnik 1 on October 4, 1957, captured global attention and achieved the initial victory in what would soon become known as the space race. This impressive technological feat and its broader implications for Soviet missile capability rattled the confidence of the American public and challenged the credibility of U.S. leadership abroad. With the U.S.S.R.’s launch of Sputnik, and then later the first human spaceflight in 1961, U.S. policymakers feared that the public and political leaders around the world would view communism as a viable and even more dynamic alternative to capitalism, tilting the global balance of power away from the United States and towards the Soviet Union.
Reactions to Sputnik confirmed what members of the U.S. National Security Council had predicted: the image of scientific and technological superiority had very real, far-reaching geopolitical consequences. By signaling Soviet technological and military prowess, Sputnik solidified the link between space exploration and national prestige, setting a course for nationally funded space exploration for years to come. For over a decade, both the Soviet Union and the United States funneled significant financial and personnel resources into achieving impressive firsts in space, as part of a larger effort to win alliances in the Cold War contest for global influence.
From a U.S. vantage point, the space race culminated in the first Moon landing in July 1969. In 1961, President John F. Kennedy proposed Project Apollo, a lunar exploration program, as a tactic for restoring U.S. prestige in the wake of Soviet cosmonaut Yuri Gagarin’s spaceflight and the failure of the Bay of Pigs invasion. To achieve Kennedy’s goal of sending a man to the Moon and returning him safely back to Earth by the end of the decade, the United States mobilized a workforce in the hundreds of thousands. Project Apollo became the most expensive government-funded civilian engineering program in U.S. history, at one point stretching to more than 4 percent of the federal budget. The United States’ substantial investment in winning the space race reveals the significant status of soft power in American foreign policy strategy during the Cold War.
American Indian activism after 1945 was as much a part of the larger, global decolonization movement rooted in centuries of imperialism as it was a direct response to the ethos of civic nationalism and integration that had gained momentum in the United States following World War II. This ethos manifested itself in the disastrous federal policies of termination and relocation, which sought to end federal services to recognized Indian tribes and encourage Native people to leave reservations for cities. In response, tribal leaders from throughout Indian Country formed the National Congress of American Indians (NCAI) in 1944 to litigate and lobby for the collective well-being of Native peoples. The NCAI was the first intertribal organization to embrace the concepts of sovereignty, treaty rights, and cultural preservation—principles that continue to guide Native activists today. As American Indian activism grew increasingly militant in the late 1960s and 1970s, civil disobedience, demonstrations, and takeovers became the preferred tactics of “Red Power” organizations such as the National Indian Youth Council (NIYC), the Indians of All Tribes, and the American Indian Movement (AIM). At the same time, others established more focused efforts that employed less confrontational methods. For example, the Native American Rights Fund (NARF) served as a legal apparatus that represented Native nations, using the courts to protect treaty rights and expand sovereignty; the Council of Energy Resource Tribes (CERT) sought to secure greater returns on the mineral wealth found on tribal lands; and the American Indian Higher Education Consortium (AIHEC) brought Native educators together to work for greater self-determination and culturally rooted curricula in Indian schools. While the more militant of these organizations and efforts have withered, those that have exploited established channels have grown and flourished. 
Such efforts will no doubt continue for the foreseeable future so long as the state of Native nations remains uncertain.
Early 20th century American labor and working-class history is a subfield of American social history that focuses attention on the complex lives of working people in a rapidly changing global political and economic system. Once focused closely on institutional dynamics in the workplace and electoral politics, labor history has expanded and refined its approach to include questions about the families, communities, identities, and cultures workers have developed over time. With a critical eye on the limits of liberal capitalism and democracy for workers’ welfare, labor historians explore individual and collective struggles against exclusion from opportunity, as well as accommodation to political and economic contexts defined by rapid and volatile growth and deep inequality.
Particularly important are the ways that workers both defined and were defined by differences of race, gender, ethnicity, class, and place. Individual workers and organized groups of working Americans both transformed and were transformed by the main struggles of the industrial era, including conflicts over the place of former slaves and their descendants in the United States, mass immigration and migrations, technological change, new management and business models, the development of a consumer economy, the rise of a more active federal government, and the evolution of popular culture.
The period between 1896 and 1945 saw a crucial transition in the labor and working-class history of the United States. At its outset, Americans were working many more hours a day than the eight for which they had fought hard in the late 19th century. On average, Americans labored fifty-four to sixty-three hours per week in dangerous working conditions (approximately 35,000 workers died in accidents annually at the turn of the century). By 1920, half of all Americans lived in growing urban neighborhoods, and for many of them chronic unemployment, poverty, and deep social divides had become a regular part of life. Workers had little power in either the Democratic or Republican party. They faced a legal system that gave them no rights at work but the right to quit, judges who took the side of employers in the labor market by issuing thousands of injunctions against even nonviolent workers’ organizing, and vigilantes and police forces that did not hesitate to repress dissent violently. The ranks of organized labor were shrinking in the years before the economy began to recover in 1897. Dreams of a more democratic alternative to wage labor and corporate-dominated capitalism had been all but destroyed. Workers struggled to find their place in an emerging consumer-oriented culture that assumed everyone ought to strive for the often unattainable, and not necessarily desirable, marks of middle-class respectability.
Yet American labor emerged from World War II with the main sectors of the industrial economy organized, with greater earning potential than any previous generation of American workers, and with unprecedented power as an organized interest group that could appeal to the federal government to promote its welfare. Though American workers as a whole had made no grand challenge to the nation’s basic corporate-centered political economy in the preceding four and one-half decades, they entered the postwar world with a greater level of power, and a bigger share in the proceeds of a booming economy, than anyone could have imagined in 1896. The labor and working-class history of the United States between 1900 and 1945, then, is the story of how working-class individuals, families, and communities—members of an extremely diverse American working class—managed to carve out positions of political, economic, and cultural influence, even as they remained divided among themselves, dependent upon corporate power, and increasingly invested in an individualistic, competitive, acquisitive culture.
The story of mass culture from 1900 to 1945 is the story of its growth and increasing centrality to American life. Sparked by the development of such new media as radio, the phonograph, and cinema, which required less literacy and formal education, and by the commodification of leisure pursuits, mass culture extended its purview to nearly the entire nation by the end of the Second World War. In the process, it became one way in which immigrant and second-generation Americans could learn about the United States and stake a claim to participation in civic and social life. Mass culture characteristically consisted of artifacts that stressed pleasure, sensation, and glamor rather than, as had previously been the case, eternal and ethereal beauty, moral propriety, and personal transcendence. It had the power to determine acceptable values and beliefs and to define the qualities and characteristics of social groups. The constant and graphic stimulation that mass culture provided led many custodians of culture to worry about a breakdown in social morality that would surely follow. As a result, they formed regulatory agencies and watchdogs to monitor the mass culture available on the market. Other critics charged the regime of mass culture with inducing homogenization of belief and practice and contributing to passive acceptance of the status quo. The spread of mass culture did not terminate regional, class, or racial cultures; indeed, mass culture artifacts often borrowed from them. Nor did marginalized groups accept stereotypical portrayals; rather, they worked to expand the possibilities of prevailing ones and to provide alternatives.
Military assistance programs have been crucial instruments of American foreign policy since World War II, valued by policymakers for combating internal subversion in the “free world,” deterring aggression, and protecting overseas interests. The 1958 Draper Committee, consisting of eight members of the Senate Foreign Relations Committee, concluded that economic and military assistance were interchangeable; as the committee put it, without internal security and the “feeling of confidence engendered by adequate military forces, there is little hope for economic progress.” Less explicitly, military assistance was also designed to uphold the U.S. global system of military bases established after World War II, ensure access to raw materials, and help recruit intelligence assets while keeping a light American footprint. Police and military aid was often invited and welcomed by government elites in so-called free world nations for enhancing domestic security or enabling the swift repression of political opponents. It sometimes coincided with an influx of economic aid, as under the Marshall Plan and the Alliance for Progress. In cases like Vietnam, the programs contributed to stark human rights abuses owing to political circumstances and the prioritization of national security over civil liberties.
Gregory A. Daddis
For nearly a decade, American combat soldiers fought in South Vietnam to help sustain an independent, noncommunist nation in Southeast Asia. After U.S. troops departed in 1973, the collapse of South Vietnam in 1975 prompted a lasting search to explain the United States’ first lost war. Historians of the conflict and participants alike have since critiqued the ways in which civilian policymakers and uniformed leaders applied—some argued misapplied—military power that led to such an undesirable political outcome. While some claimed U.S. politicians failed to commit their nation’s full military might to a limited war, others contended that most officers fundamentally misunderstood the nature of the war they were fighting. Still others argued “winning” was essentially impossible given the true nature of a struggle over Vietnamese national identity in the postcolonial era. On their own, none of these arguments fully satisfy. Contemporary policymakers clearly understood the difficulties of waging a war in Southeast Asia against an enemy committed to national liberation. Yet the faith of these Americans in their power to resolve deep-seated local and regional sociopolitical problems eclipsed the possibility there might be limits to that power. By asking military strategists to simultaneously fight a war and build a nation, senior U.S. policymakers had asked too much of those crafting military strategy to deliver on overly ambitious political objectives. In the end, the Vietnam War exposed the limits of what American military power could achieve in the Cold War era.
David L. Hostetter
American activists who challenged South African apartheid during the Cold War era extended their opposition to racial discrimination in the United States into world politics. US antiapartheid organizations worked in solidarity with forces struggling against the racist regime in South Africa and played a significant role in the global antiapartheid movement. More than four decades of organizing preceded the legislative showdown of 1986, when a bipartisan coalition in Congress overrode President Ronald Reagan’s veto to enact economic sanctions against the apartheid regime in South Africa. Adoption of sanctions by the United States, along with transnational solidarity with the resistance to apartheid by South Africans, helped prompt the apartheid regime to relinquish power and allow the democratic elections that brought Nelson Mandela and the African National Congress to power in 1994.
Drawing on the tactics, strategies, and moral authority of the civil rights movement, antiapartheid campaigners mobilized public opinion while increasing African American influence in the formulation of US foreign policy. Long-lasting organizations such as the American Committee on Africa and TransAfrica called for boycotts and divestment while lobbying for economic sanctions. Utilizing tactics such as rallies, demonstrations, and nonviolent civil disobedience, antiapartheid activists made their voices heard on college campuses, in corporate boardrooms, in municipal and state governments, and in the halls of Congress. Cultural expressions of criticism and resistance served to reinforce public sentiment against apartheid. Novels, plays, movies, and music provided a way for Americans to connect to the struggles of those suffering under apartheid.
By extending the moral logic of the movement for African American civil rights, American antiapartheid activists created a multicultural coalition that brought about institutional and governmental divestment from apartheid, prompted Congress to impose economic sanctions on South Africa, and increased the influence of African Americans regarding issues of race and American foreign policy.
Michael A. Krysko
Radio debuted as a wireless alternative to telegraphy in the late 19th century. At its inception, wireless technology could only transmit signals and was incapable of broadcasting actual voices. During the 1920s, however, it transformed into a medium primarily identified with entertainment and informational broadcasting. The commercialization of American broadcasting, which included the establishment of national networks and reliance on advertising to generate revenue, became the so-called American system of broadcasting. This transformation demonstrates how technology is shaped by the dynamic forces of the society in which it is embedded. Broadcasting’s aural attributes also engaged listeners in a way that distinguished it from other forms of mass media. Cognitive processes triggered by the disembodied voices and sounds emanating from radio’s loudspeakers illustrate how listeners, grounded in particular social, cultural, economic, and political contexts, made sense of and understood the content with which they were engaged. Through the 1940s, difficulties in expanding the international radio presence of the United States further highlighted the significance of surrounding contexts in shaping the technology and in promoting (or discouraging) listener engagement with programming content.
As places of dense habitation, cities have always required coordination and planning. City planning has involved the design and construction of large-scale infrastructure projects to provide basic necessities such as a water supply and drainage. By the 1850s, immigration and industrialization were fueling the rise of big cities, creating immense, collective problems of epidemics, slums, pollution, gridlock, and crime. From the 1850s to the 1900s, both local governments and utility companies responded to this explosive physical and demographic growth by constructing a “networked city” of modern technologies such as gaslight, telephones, and electricity. Building the urban environment also became a wellspring of innovation in science, medicine, and administration. In 1909–1910, a revolutionary idea—comprehensive city planning—opened a new era of professionalization and institutionalization in the planning departments of city halls and universities. Over the next thirty-five years, however, wars and depression limited their influence.
The period from 1945 to 1965, in contrast, represents the golden age of formal planning. During this unprecedented era of peace and prosperity, academically trained experts played central roles in the modernization of the inner cities and the sprawl of the suburbs. But the planners’ clean-sweep approach to urban renewal and the massive destruction caused by highway construction provoked a grassroots revolt. Beginning in the Watts district of Los Angeles in 1965, mass uprisings escalated over the next three years into a national crisis of social disorder, racial and ethnic inequality, and environmental injustice. The postwar consensus of theory and practice was shattered, replaced by a fragmented profession ranging from defenders of top-down systems of computer-generated simulations to proponents of advocacy planning from the bottom up. Since the late 1980s, the ascendancy of public-private partnerships in building the urban environment has favored the planners promoting systems approaches, who promise a future of high-tech “smart cities” under their complete control.
Michael A. McDonnell
The American War for Independence lasted eight years. It was one of the longest and bloodiest wars in America’s history, and yet it was not such a protracted conflict merely because the might of the British armed forces was brought to bear on the hapless colonials. The many divisions among Americans themselves over whether to fight, what to fight for, and who would do the fighting often had tragic and violent consequences. The Revolutionary War was by any measure the first American civil war. Yet national narratives of the Revolution and even much of the scholarship on the era focus more on simple stories of a contest between the Patriots and the British. Loyalists and other opponents of the Patriots are routinely left out of these narratives, or given short shrift. So, too, are the tens of thousands of ordinary colonists—perhaps a majority of the population—who were disaffected or alienated from either side or who tried to tack between the two main antagonists to make the best of a bad situation. Historians now estimate that as many as three-fifths of the colonial population were neither active Loyalists nor Patriots.
When we take the war seriously and begin to think about narratives that capture the experience of the many, rather than the few, an illuminating picture emerges. The remarkably wide scope of the activities of the disaffected during the war—ranging from nonpayment of taxes to draft dodging and even to armed resistance to protect their neutrality—has to be integrated with older stories of militant Patriots and timid Loyalists. Only then can we understand the profound consequences of disaffection—particularly in creating divisions within the states, increasing levels of violence, prolonging the war, and changing the nature of the political settlements in each state. Indeed, the very divisions among diverse Americans that made the War for Independence so long, bitter, and bloody also explain much of the Revolutionary energy of the period. Though it is not as seamless as traditional narratives of the Revolution would suggest, this more complicated story also helps better explain the many problems the new states, and eventually the new nation, would face. In making this argument, we may finally suggest ways to overcome what John Shy long ago noted as the tendency of scholars to separate the “destructive” War for Independence from the “constructive” political Revolution.