Chia Youyee Vang
In geopolitical terms, the Asian sub-region Southeast Asia consists of ten countries that are organized under the Association of Southeast Asian Nations (ASEAN). Current member nations include Brunei Darussalam, Kingdom of Cambodia, Republic of Indonesia, Lao People’s Democratic Republic (Laos), Malaysia, Republic of the Union of Myanmar (formerly Burma), Republic of the Philippines, Singapore, Kingdom of Thailand, and Socialist Republic of Vietnam. The term Southeast Asian Americans has been shaped largely by the flow of refugees from the American War in Vietnam; however, Americans with origins in Southeast Asia have much more diverse migration and settlement experiences that are intricately tied to the complex histories of colonialism, imperialism, and war from the late 19th through the end of the 20th century. A commonality across Southeast Asian American groups today is that their immigration history resulted primarily from the political and military involvement of the United States in the region, aimed at building the United States as a global power. From Filipinos during the Spanish-American War in 1898 to Vietnamese, Cambodian, Lao, and Hmong refugees from the American War in Vietnam, military interventions generated migration flows that, once begun, became difficult to stop. Complicating this history is its role in supporting the international humanitarian apparatus by creating the possibility for displaced people to seek refuge in the United States. Additionally, the relationships between the United States and Malaysia, Indonesia, and Singapore differ from its relationships with the Southeast Asian countries involved in the Vietnam War. Consequently, today’s Southeast Asian Americans are heterogeneous, with varying levels of acculturation to U.S. society.
The Spanish-American War is best understood as a series of linked conflicts. Those conflicts punctuated Madrid’s decline to a third-rank European state and marked the United States’ transition from a regional to an imperial power. The central conflict was a brief conventional war fought in the Caribbean and the Pacific between Madrid and Washington. Those hostilities were preceded and followed by protracted and costly guerrilla wars in Cuba and the Philippines. The Spanish-American War was the consequence of the protracted stalemate in the Spanish-Cuban War. The economic and humanitarian distress that accompanied the fighting made it increasingly difficult for the United States to remain neutral, until a series of Spanish missteps and bad fortune in early 1898 hastened the American entry into the war. The US Navy quickly moved to eliminate or blockade the strongest Spanish squadrons in the Philippines and Cuba; Spain’s inability to contest American control of the sea in either theater was decisive and permitted successful American attacks on outnumbered Spanish garrisons in Santiago de Cuba, Puerto Rico, and Manila. The transfer of the Philippines, along with Cuba, Puerto Rico, and Guam, to the United States in the Treaty of Paris confirmed American imperialist appetites in the eyes of the Filipino nationalists, led by Emilio Aguinaldo, and contributed to tensions between the Filipino and American armies in and around Manila. Fighting broke out in February 1899, but the Filipino conventional forces were soon driven back from Manila and were utterly defeated by the end of the year. The Filipino forces that evaded capture re-emerged as guerrillas in early 1900, and for the next two and a half years the United States waged an increasingly severe anti-guerrilla war against Filipino irregulars. Despite Aguinaldo’s capture in early 1901, fighting continued in a handful of provinces until the spring of 1902, when the last organized resistance to American governance ended in Samar and Batangas provinces.
During the 1890s, the word segregation became the preferred term for the practice of coercing different groups of people, especially those designated by race, to live in separate and unequal urban residential neighborhoods. In the southern states of the United States, segregationists imported the word—originally used in the British colonies of Asia—to describe Jim Crow laws, and, in 1910, whites in Baltimore passed a “segregation ordinance” mandating separate black and white urban neighborhoods. Copy-cat legislation sprang up in cities across the South and the Midwest. But in 1917, a multiracial team of lawyers from the fledgling National Association for the Advancement of Colored People (NAACP) mounted a successful legal challenge to these ordinances in the U.S. Supreme Court—even as urban segregation laws were adopted in other places in the world, most notably in South Africa. The collapse of the movement for legislated racial segregation in the United States occurred just as African Americans began migrating in large numbers into cities in all regions of the United States, resulting in waves of anti-black mob violence. Segregationists were forced to rely on nonstatutory or formally nonracial techniques. In Chicago, an alliance of urban reformers and real estate professionals invented alternatives to explicitly racist segregation laws. The practices they promoted nationwide created one of the most successful forms of urban racial segregation in world history, rivaling and finally outliving South African apartheid. Understanding how this system came into being and how it persists today requires understanding both how the Chicago segregationists were connected to counterparts elsewhere in the world and how they adapted practices of city-splitting to suit the peculiarities of racial politics in the United States.
Peter C. Baldwin
Today the term nightlife typically refers to social activities in urban commercial spaces—particularly drinking, dancing, dining, and listening to live musical performances. This was not always so. Cities in the 18th and early 19th centuries knew relatively limited nightlife, most of it occurring in drinking places for men. Theater attracted mixed-gender audiences but was sometimes seen as disreputable in both its content and the character of the audience. Theater owners worked to shed this negative reputation starting in the mid-19th century, while nightlife continued to be tainted by the profusion of saloons, brothels, and gambling halls. Gradual improvements in street lighting and police protection encouraged people to go out at night, as did growing incomes and decreasing hours of labor. Nightlife attracted more women in the decades around 1900 as it expanded and diversified. Dance halls, vaudeville houses, movie theaters, restaurants, and cabarets thrived in the electrified “bright lights” districts of central cities. Commercial entertainment contracted again in the 1950s and 1960s as Americans spent more of their evening leisure hours watching television and began to regard urban public spaces with suspicion. Still, nightlife is viewed as an important component of urban economic life and is actively promoted by many municipal governments.
Over the first half of the 20th century, Rabbi Stephen S. Wise (1874–1949) devoted himself to solving the most controversial social and political problems of his day: corruption in municipal politics, abuse of industrial workers, women’s second-class citizenship, nativism and racism, and global war. He considered his activities an effort to define “Americanism” and apply its principles toward humanity’s improvement. On the one hand, Wise joined a long tradition of American Christian liberals committed to seeing their fellow citizens as their equals and to grounding this egalitarianism in their religious beliefs. On the other hand, he was in the vanguard of the Jewish Reform, or what he referred to as the Liberal Judaism movement, with its commitment to apply Jewish moral teachings to improve the world. His life’s work demonstrated that the two—liberal democracy and Liberal Judaism—went hand in hand. And while concerned with equality and justice, Wise’s Americanism had a democratic elitist character. His advocacy to engage the public on the meaning of citizenship and the role of the state relied on his own Jewish, male, and economically privileged perspective as well as those of an elite circle of political and business leaders, intellectual trendsetters, social scientists, philanthropists, labor leaders, and university faculty. In doing so, Wise drew upon Jewish liberal teachings, transformed America’s liberal tradition, and helped to remake America’s national understanding of itself.
Conceptions of what constitutes a street gang or a youth gang have varied since the seminal sociological studies on these entities in the 1920s. Organizations of teenage youths and young adults in their twenties, congregating in public spaces and acting collectively, were fixtures of everyday life in American cities throughout the 20th century. While few studies historicize gangs in their own right, historians in a range of subfields cast gangs as key actors in critical dimensions of the American urban experience: the formation and defense of ethno-racial identities and communities; the creation and maintenance of segregated metropolitan spaces; the shaping of gender norms and forms of sociability in working-class districts; the structuring of contentious political mobilization challenging police practices and municipal policies; the evolution of underground and informal economies and organized crime activities; and the epidemic of gun violence that spread through minority communities in many major cities at the end of the 20th and beginning of the 21st centuries.
Although groups of white youths patrolling the streets of working-class neighborhoods and engaging in acts of defensive localism were commonplace in the urban Northeast, Mid-Atlantic, and Midwest states by the mid-19th century, street gangs exploded onto the urban landscape in the early 20th century as a consequence of massive demographic changes related to the wave of immigration from Europe, Asia, and Latin America and the migration of African Americans from the South. As immigrants and migrants moved into urban working-class neighborhoods and industrial workplaces, street gangs proliferated at the boundaries of ethno-racially defined communities, shaping the context within which immigrant and second-generation youths negotiated Americanization and learned the meanings of race and ethnicity. Although social workers in some cities noted the appearance of some female gangs by the 1930s, the milieu of youth gangs during this era was male dominated, and codes of honor and masculinity were often at stake in increasingly violent clashes over territory and resources like parks and beaches.
The interplay of race, ethnicity, and masculinity continued to shape the world of gangs in the 1940s and 1950s, when white male gangs claiming to defend the whiteness of their communities used terror tactics to reinforce the boundaries of ghettos and barrios in many cities. Such aggressions spurred the formation of fighting gangs in black and Latino neighborhoods, where youths entered into at times deadly combat against their aggressors but also fought for honor, respect, and status with rivals within their communities. In the 1960s and 1970s, with civil rights struggles and ideologies of racial empowerment circulating through minority neighborhoods, some of these same gangs, often with the support of community organizers affiliated with political organizations like the Black Panther Party, turned toward defending the rights of their communities and participating in contentious politics. However, such projects were cut short by the fierce repression of gangs in minority communities by local police forces, working at times in collaboration with the Federal Bureau of Investigation. By the mid-1970s, following the withdrawal of the Black Panthers and other mediating organizations from cities like Chicago and Los Angeles, so-called “super-gangs” claiming the allegiance of thousands of youths began federating into opposing camps—“People” against “Folks” in Chicago, “Crips” against “Bloods” in LA—to wage war for control of emerging drug markets. In the 1980s and 1990s, with minority communities dealing with high unemployment, cutbacks in social services, failing schools, hyperincarceration, drug trafficking, gun violence, and toxic relations with increasingly militarized police forces waging local “wars” against drugs and gangs, gangs proliferated in cities throughout the urban Sun Belt. Their prominence within popular and political discourse nationwide made them symbols of the urban crisis and of the cultural deficiencies that some believed had caused it.
Stephen H. Norwood
Strikebreakers have been drawn from many parts of the American population, most notably the permanently and seasonally unemployed and underemployed. Excluded from a vast range of occupations and shunned by many trade unions, African Americans constituted another potential pool of strikebreakers, especially during the early decades of the 20th century. During the first quarter of the 20th century, college students enthusiastically volunteered for strikebreaking, both because of their generally pro-business outlook and a desire to test their manhood in violent clashes.
A wide array of private and government forces has suppressed strikes. Beginning in the late 19th century, private detective agencies supplied guards who protected company property against strikers, sometimes assaulting them. During the early 20th century, several firms emerged that supplied strikebreakers and guards at companies’ request, drawing on what amounted to private armies of thousands of men. The largest of these operated nationally.
On many occasions the state itself intervened to break strikes. Like some strikebreaking firms, state militiamen deployed advanced weaponry against strikers and their sympathizers, including machine guns. Presidents Hayes and Cleveland called out federal troops to break the 1877 and 1894 interregional railroad strikes. In 1905, Pennsylvania established an elite mounted force, modeled on the British constabulary patrols in Ireland, to suppress coal miners’ strikes.
Corporations directly intervened to break strikes, building weapons arsenals, including large supplies of tear gas, that they distributed to police forces. They initiated “back to work” movements to destroy strikers’ morale and used their considerable influence with the media to propagandize in the press and on the radio. Corporations, of course, discharged strikers, often permanently.
In the highly bureaucratized society of the late 20th and early 21st centuries, which stigmatized public displays of anger, management turned to new “union avoidance” firms to break strikes. These firms emphasized legal and psychological methods rather than violence. They advised employers on how to blur the line between management and labor, defame union leaders and activists, and sow discord among strikers.
From the 1890s to World War I, progressive reformers in the United States called upon their local, state, and federal governments to revitalize American democracy and address the most harmful social consequences of industrialization. The emergence of an increasingly powerful administrative state, which intervened on behalf of the public welfare in the economy and society, generated significant levels of conflict. Some of the opposition came from conservative business interests, who denounced state labor laws and other market regulations as meddlesome interferences with liberty of contract. But the historical record of the Progressive Era also reveals a broad undercurrent of resistance from ordinary Americans, who fought for personal liberty against the growth of police power in such areas as public health administration and the regulation of radical speech. Their struggles in the streets, statehouses, and courtrooms of the United States in the early 20th century shaped the legal culture of the period and revealed the contested meaning of individual liberty in a new social age.
Ann Durkin Keating
Since the beginning of the 19th century, outlying areas of American cities have been home to a variety of settlements and enterprises with close links to urban centers. Beginning in the early 19th century, the increasing scale of business and industrial enterprises separated workplaces from residences. This allowed some urban dwellers to live at a distance from their place of employment and commute to work. Others lived in the shadow of factories located at some distance from the city center. Still others provided food or raw materials for urban residents and businesses. The availability of employment led to further suburban growth. Changing intracity transportation, including railroads, interurbans, streetcars, and cable cars, enabled people and businesses to locate beyond the limits of a walking city.
By the late 19th century, metropolitan areas across the United States included outlying farm centers, industrial towns, residential rail (or streetcar) suburbs, and recreational/institutional centers. With suburbs generally located along rail or ferry lines into the early 20th century, the physical development of metropolitan areas often resembled a hub and spokes. However, across metropolitan regions, suburbs had a great range of function and diversity of populations. With the advent of automobile commutation and the growing use of trucks to haul freight, suburban development took place between railroad lines, filling in the earlier hub-and-spoke pattern and creating a more continuous built-up area.
Although suburban settlements were integrally connected to their neighbors and within a metropolitan economy and society, independent suburban governments emerged to serve these outlying settlements and keep them separate. Developers often took the lead in providing differential services (and regulations). Suburban governments emerged as hybrid forms, serving relatively homogeneous populations by providing only some urban functions. Well before 1945, suburbs were home to a wide range of work and residents.
Becky Nicolaides and Andrew Wiese
Mass migration to suburban areas was a defining feature of American life after 1945. Before World War II, just 13% of Americans lived in suburbs. By 2010, however, suburbia was home to more than half of the U.S. population. The nation’s economy, politics, and society suburbanized in important ways. Suburbia shaped habits of car dependency and commuting, patterns of spending and saving, and experiences with issues as diverse as race and taxes, energy and nature, privacy and community. The owner-occupied, single-family home, surrounded by a yard and set in a neighborhood outside the urban core, came to define everyday experience for most American households, and in the world of popular culture and the imagination, suburbia was the setting for the American dream. The nation’s suburbs were an equally critical economic landscape, home to vital high-tech industries, retailing, “logistics,” and office employment. In addition, American politics rested on a suburban majority, and over several decades, suburbia incubated political movements across the partisan spectrum, from grass-roots conservatism to centrist meritocratic individualism, environmentalism, feminism, and social justice. In short, suburbia was a key setting for postwar American life.
Even as suburbia grew in magnitude and influence, it also grew more diverse, coming to reflect a much broader cross-section of America itself. This encompassing shift marked two key chronological stages in suburban history since 1945: the expansive, racialized, mass suburbanization of the postwar years (1945–1970) and an era of intensive social diversification and metropolitan complexity (since 1970). In the first period, suburbia witnessed the expansion of segregated white privilege, bolstered by government policies and exclusionary practices, and reinforced by grassroots political movements. By the second period, suburbia came to house a broader cross section of Americans, who brought with them a wide range of outlooks, lifeways, values, and politics. Suburbia became home to large numbers of immigrants, ethnic groups, African Americans, the poor, the elderly, and diverse family types. In the face of stubborn exclusionism by affluent suburbs, inequality persisted across metropolitan areas and manifested anew in proliferating poorer, distressed suburbs. Reform efforts sought to alleviate metro-wide inequality and promote sustainable development, using coordinated regional approaches. In recent years, the twin discourses of suburban crisis and suburban rejuvenation captured the continued complexity of America’s suburbs.
Since the turn of the 20th century, teachers have tried to find a balance between bettering their own career prospects as workers and educating their students as public servants. To reach a workable combination, teachers have drawn both on the methods of union movements—the militant, labor-conscious approach favored by the American Federation of Teachers (AFT)—and on those of professional organizations, the tradition from which the National Education Association (NEA) arose. Because teachers lacked the federally guaranteed labor rights that private-sector workers enjoyed after Congress passed the National Labor Relations Act in 1935, teachers’ fortunes—in terms of collective bargaining rights, control over classroom conditions, pay, and benefits—often remained tied to the broader public-sector labor movement and to state rather than federal law.
Opponents of teacher unionization consistently charged that as public servants paid by tax revenues, teachers and other public employees should not be allowed to form unions. Further, because women constituted the vast majority of teachers and union organizing often represented a “manly” domain, the opposition’s approach worked quite well, successfully preventing teachers from gaining widespread union recognition. But by the late 1960s and early 1970s, thanks to an improved economic climate and invigoration from the women’s movement, civil rights struggles, and the New Left, both AFT and NEA teacher unionism surged forward, infused with a powerful militancy devoted to strikes and other political action, and appeared poised to capture federal collective bargaining rights. Their newfound assertiveness proved ill-timed, however.
After the economic problems of the mid-1970s, opponents of teacher unions once again seized the opportunity to portray teacher unions and other public-sector unions as greedy and privileged interest groups functioning at the public’s expense. President Ronald Reagan accentuated this point when he fired all of the more than 10,000 striking air traffic controllers during the 1981 Professional Air Traffic Controllers Organization (PATCO) strike. Facing such opposition, teacher unions—and public-sector unions in general—shifted their efforts away from strikes and toward endorsing political candidates and lobbying governments to pass favorable legislation.
Despite these constraints, public-sector unions enjoyed a large degree of success from the 1990s through the early 2000s, even as private-sector union membership plunged to less than 10 percent of the workforce. After the Great Recession of 2008, however, austerity politics targeted teachers and other public-sector workers and renewed political confrontations surrounding the legitimacy of teacher unions.
During the latter half of the 20th century, the International Brotherhood of Teamsters (IBT) represented the labor organization most readily recognized by the American public. Rooted in the vital sectors of transportation, warehousing, and distribution, the Teamsters became one of the largest unions in the United States, wielded considerable economic leverage, and used this leverage to improve the lives of low-wage workers across a broad swath of occupations and industries. The union’s reputation for militancy and toughness reached its apotheosis in the controversial career of Jimmy Hoffa, the Teamsters’ most prominent post-World War II leader. Under the leadership of Hoffa and his immediate predecessor Dave Beck, the union (with some notable exceptions) embraced a business ethos, often engaged in collusive and corrupt practices, and came to symbolize the labor movement’s squandered potential as a transformational social force. Fear of Hoffa and his associations with underworld figures provoked an intense backlash, resulting in the IBT’s 1957 expulsion from the AFL-CIO, concerted legal and legislative action aimed at curbing Teamster influence, and a lingering public perception that the union was a hopelessly corrupt and malign force.
Hoffa’s unsolved disappearance in 1975 cemented the Teamsters’ image as a suspect institution, and analysts of the IBT have often offered either superficial or sensational accounts of the organization’s history and operations. With the deregulation of the trucking industry in the 1980s, the IBT suffered serious losses in market share and membership that eclipsed many of the union’s crowning collective bargaining achievements. A series of lackluster, corrupt leaders who followed Hoffa as union president proved unable to counter these developments, triggering the rise of an aggressive internal reform movement (Teamsters for a Democratic Union), federal intervention and monitoring, and the election of a reform slate in 1991 that assumed leadership of the union. However, since the union’s victory in an epic strike against United Parcel Service in 1997, the Teamsters have struggled to regain their ability to assert working-class power, especially within the private sector transportation industry, where they once exercised nearly unchallenged hegemony.
Timothy James LeCain
Technology and environmental history are both relatively young disciplines among Americanists, and during their early years they developed as distinctly different and even antithetical fields, at least in topical terms. Historians of technology initially focused on human-made and presumably “unnatural” technologies, whereas environmental historians focused on nonhuman and presumably “natural” environments. However, in more recent decades, both disciplines have moved beyond this oppositional framing. Historians of technology increasingly came to view anthropogenic artifacts such as cities, domesticated animals, and machines as extensions of the natural world rather than its antithesis. Even the British and American Industrial Revolutions constituted not a distancing of humans from nature, as some scholars have suggested, but rather a deepening entanglement with the material environment. At the same time, many environmental historians were moving beyond the field’s initial emphasis on the ideal of an American and often Western “wilderness” to embrace a concept of the environment as including humans and productive work. Nonetheless, many environmental historians continued to emphasize the independent agency of the nonhuman environment of organisms and things. This insistence that not everything could be reduced to human culture remained the field’s most distinctive feature.
Since the turn of the millennium, the two fields have increasingly come together in a variety of synthetic approaches, including Actor Network Theory, envirotechnical analysis, and neomaterialist theory. As the influence of the cultural turn has waned, the environmental historians’ emphasis on the independent agency of the nonhuman has come to the fore, gaining wider influence as it is applied to the dynamic “nature” or “wildness” that some scholars argue exists within both the technological and natural environments. The foundational distinctions between the history of technology and environmental history may now be giving way to more materially rooted attempts to understand how a dynamic hybrid environment helps to create human history in all of its dimensions—cultural, social, and biological.
Michael A. Krysko
Technology is ubiquitous in the history of US foreign relations. Throughout US history, technology has played an essential role in how a wide array of Americans have traveled to and from, learned about, understood, recorded and conveyed information about, and attempted to influence, benefit from, and exert power over other lands and peoples. The challenge for the historian is not to find where technology intersects with the history of US foreign relations, but how to place a focus on technology without falling prey to deterministic assumptions about the inevitability of the global power and influence—or lack thereof—the United States has exerted through the technology it has wielded.
“Foreign relations” and “technology” are, in fact, two terms with extraordinarily broad connotations. “Foreign relations” is not synonymous with “diplomacy,” but encompasses all aspects and arenas of American engagement with the world. “Technology” is itself “an unusually slippery term,” notes prominent technology historian David Nye, and can refer to simple tools, more complex machines, and even more complicated and expansive systems on which the functionality of many other innovations depends. Furthermore, processes of technological innovation, proliferation, and patterns of use are shaped by a dizzying array of influences embedded within the larger surrounding context, including but by no means limited to politics, economics, laws, culture, international exchanges, and environment. While some of the variables that have shaped how the United States has deployed its technological capacities were indeed distinctly American, others arose outside the United States and lay beyond any American ability to control. A technology-focused rendering of US foreign relations and global ascendancy is not, therefore, a narrative of uninterrupted progress and achievement, but an accounting of both successes and failures that illuminate how surrounding contexts and decisions have variably shaped, encouraged, and limited the technology and power Americans have wielded.
Described as a “chief among chiefs” by the British, and by his arch-rival, William Henry Harrison, as “one of those uncommon geniuses which spring up occasionally to produce revolutions and overturn the established order of things,” Tecumseh impressed all who knew him. Lauded for his oratory, military and diplomatic skills, and, ultimately, his humanity, Tecumseh presided over the greatest Indian resistance movement that had ever been assembled in the eastern half of North America. His genius lay in his ability to fully articulate religious, racial, and cultural ideals borne out of his people’s existence on fault lines between competing empires and Indian confederacies. Known as “southerners” by their Algonquian relatives, the Shawnees had a history of migrating between worlds. Tecumseh and his brother, Tenskwatawa, converted this inheritance into a widespread social movement in the first decade and a half of the 19th century, when more than a thousand warriors, from many different tribes, heeded their call to halt American expansion along the border of what is now Ohio and Indiana. Tecumseh articulated a vision of intertribal, pan-Indian unity based on revitalization and reform, and his ambitions very nearly rewrote early American history.
H. Paul Thompson Jr.
The temperance and prohibition movement—a social reform movement that pursued many approaches to limit or prohibit the use and/or sale of alcoholic beverages—is arguably the longest-running reform movement in US history, extending from the 1780s through the repeal of national prohibition in 1933. During this 150-year period the movement experienced many ideological, organizational, and methodological changes. Probably the most widely embraced antebellum reform, the movement was explicitly evangelical in many of its earliest assumptions and much of its earliest literature, but over time it assumed an increasingly secular image while retaining strong ties to organized religion. During the movement’s first fifty years, its definition of temperance evolved successively from avoiding drunkenness, to abstaining from all distilled beverages, to abstaining from all intoxicating beverages (i.e., “teetotalism”). During these years, reformers sought merely to persuade others of their views—what was called “moral suasion.” But by the 1840s many reformers began seeking the coercive power of local and state governments to prohibit the “liquor traffic.” These efforts were called “legal suasion,” and in the early 20th century, when local and state laws were deemed insufficient, movement leaders turned to the federal government. Throughout its history, movement leaders produced an extensive and well-preserved serial and monographic literature to chronicle their efforts, which makes the movement relatively easy to study.
No fewer than five national temperance organizations rose and fell across the movement’s history, aided by many other organizations that also promoted the message with great effect. Grassroots reformers organized innumerable state and local temperance societies and fraternal lodges committed to abstinence. Temperance reformers, hailing from nearly every conceivable demographic, networked through a series of national and international temperance conventions, and at any given time were pursuing a diverse and often conflicting array of priorities and methodologies.
Finally, during the Progressive Era, reformers focused their hatred for alcohol almost exclusively on saloons and the liquor traffic. Through groundbreaking lobbying efforts and a fortuitous convergence of social and political forces, reformers witnessed the ratification in January 1919 of the Eighteenth Amendment, which established national prohibition. Despite such a long history of reform, the success seemed sudden and caught many in the movement off guard. The rise of liquor-related violence, a transformation in federal-state relations, increasingly organized and outspoken opposition, the Great Depression, and a realignment of political party coalitions all culminated in the sweeping repudiation of prohibition and its Republican supporters in the 1932 presidential election. On December 5, 1933, the Twenty-first Amendment to the Constitution repealed the Eighteenth Amendment, returning liquor regulation to the states, which have since maintained a wide variety of ever-changing laws controlling the sale of alcoholic beverages. But national prohibition permanently altered the federal government’s role in law enforcement, and its legacy remains.
Brian J. McCammack
Urban areas have been the main source of pollution for centuries. The United States is no exception to this more general rule. Pollution of air, water, and soil only multiplied as cities grew in size and complexity; people generated ever more domestic waste and industry continually generated new unwanted byproducts. Periods of pollution intensification—most notably those spurts that came with late 19th-century urban industrialization and the rapid technological innovation and consumer culture of the post-World War II era—spurred social movements and scientific research on the problem, mostly as it pertained to adverse impacts on human health. Technological innovations aimed to eliminate unwanted wastes and more stringent regulations followed. Those technological and political solutions largely failed to keep pace with the increasing volume and diversity of pollutants industrial capitalism introduced into the environment, however, and rarely stopped pollution at its root cause. Instead, they often merely moved pollutants from one “sink”—a repository of pollution—to another (from water to land, for instance) and/or from one place to another (to a city downstream, for instance, or from one urban neighborhood to another).
This “end of pipe” approach remained overwhelmingly predominant even as most pollution mitigation policies became nationalized in the 1970s. Prior to that, municipalities and states were primarily responsible for addressing air, water, and land pollution. During the post-World War II period, policy—driven by ecological science—began to exhibit an understanding of urban pollution’s detrimental effects beyond human health. More broadly, evolving scientific understanding of human health and ecosystemic impacts of pollution, new technology, and changing social relations within growing metropolitan areas shifted the public perception of pollution’s harmful impacts. Scientific understanding of how urban and suburban residents risked ill health when exposed to polluted water, air, and soil grew, as did the social understanding of who was most vulnerable to these hazards. From the nation’s founding, the cumulative impact of both urban exposure to pollutants and attempts to curb that exposure has been unequal along lines of race and ethnicity, class, and gender. Despite those consistent inequalities, the 21st-century American city looks little like the 18th-century American city, whether in terms of population size, geographical footprint, demographics, economic activity, or the policies that governed them: all of these factors influenced the very definitions of ideas such as pollution and the urban.
Ross A. Kennedy
World War I profoundly affected the United States. It led to an expansion of America’s permanent military establishment, a foreign policy focused on reforming world politics, and American preeminence in international finance. In domestic affairs, America’s involvement in the war exacerbated class, racial, and ethnic conflict. It also heightened both the ethos of voluntarism in progressive ideology and the progressive desire to step up state intervention in the economy and society. These dual impulses had a coercive thrust that sometimes advanced progressive goals of a more equal, democratic society and sometimes repressed any perceived threat to a unified war effort. Ultimately the combination of progressive and repressive coercion undermined support for the Democratic Party, shifting the nation’s politics in a conservative direction as it entered the 1920s.
In the decade after 1965, radicals responded to the alienating features of America’s technocratic society by developing alternative cultures that emphasized authenticity, individualism, and community. The counterculture emerged from a handful of 1950s bohemian enclaves, most notably the Beat subcultures in the Bay Area and Greenwich Village. But new influences shaped an eclectic and decentralized counterculture after 1965, first in San Francisco’s Haight-Ashbury district, then in urban areas and college towns, and, by the 1970s, on communes and in myriad counter-institutions. The psychedelic drug cultures around Timothy Leary and Ken Kesey gave rise to a mystical bent in some branches of the counterculture and influenced counterculture style in countless ways: acid rock redefined popular music; tie-dye, long hair, repurposed clothes, and hip argot established a new style; and sexual mores loosened. Yet the counterculture’s reactionary elements were strong. In many counterculture communities, gender roles mirrored those of mainstream society, and aggressive male sexuality inhibited feminist spins on the sexual revolution. Entrepreneurs and corporate America refashioned the counterculture aesthetic into a marketable commodity, ignoring the counterculture’s incisive critique of capitalism. Yet the counterculture became the basis of authentic “right livelihoods” for others. Meanwhile, the politics of the counterculture defied ready categorization. The popular imagination often conflates hippies with radical peace activists. But New Leftists frequently excoriated the counterculture for rejecting political engagement in favor of hedonistic escapism or libertarian individualism. Both views miss the most important political aspects of the counterculture, which centered on the embodiment of a decentralized anarchist bent, expressed in the formation of counter-institutions like underground newspapers, urban and rural communes, head shops, and food co-ops. As the counterculture faded after 1975, its legacies became apparent in the redefinition of the American family, the advent of the personal computer, an increasing ecological and culinary consciousness, and the marijuana legalization movement.
During the Cold War, the United States and the Soviet Union each sought to portray their way of organizing society—liberal democracy or Communism, respectively—as materially and morally superior. In their bids for global leadership, each sponsored “front” groups that defended their priorities and values to audiences around the world. These campaigns frequently enrolled artists and intellectuals, whose lives, works, and prestige could be built up, torn down, exploited, or enhanced through their participation in these groups. Alongside overt diplomatic efforts, the United States funded a number of organizations secretly through the Central Intelligence Agency (CIA). These efforts are often described as belonging to the “Cultural Cold War,” although the programs in fact supported overlapping networks that did anti-Communist work among labor unions, students, and others in addition to artists and intellectuals. The major CIA-sponsored group of intellectuals was the Congress for Cultural Freedom, established in 1950, and the “freedom” in its name was the major concept deployed by United States–aligned propagandists to emphasize their differences from totalitarianism. The Cultural Cold War, as a program of psychological warfare conducted by the US government, grew out of the intersecting experiences of the left in the 1930s and the security apparatus of the United States at the dawn of the Cold War. The covert nature of the programs allowed them to evade scrutiny from the US Congress, and therefore to engage in activities that might otherwise have been stopped: working with people with radical political biographies or who still identified as “socialists,” or sponsoring avant-garde art, such as abstract expressionist painting. The programs spanned the globe, and grew in scope and ambition until their exposure in 1967. Subsequently, the United States has developed other mechanisms, such as the National Endowment for Democracy, to promote organizations within civil society that support its interests.