As places of dense habitation, cities have always required coordination and planning. City planning has involved the design and construction of large-scale infrastructure projects to provide basic necessities such as a water supply and drainage. By the 1850s, immigration and industrialization were fueling the rise of big cities, creating immense, collective problems of epidemics, slums, pollution, gridlock, and crime. From the 1850s to the 1900s, both local governments and utility companies responded to this explosive physical and demographic growth by constructing a “networked city” of modern technologies such as gaslight, telephones, and electricity. Building the urban environment also became a wellspring of innovation in science, medicine, and administration. In 1909–1910, a revolutionary idea—comprehensive city planning—opened a new era of professionalization and institutionalization in the planning departments of city halls and universities. Over the next thirty-five years, however, wars and depression limited their influence.
The period from 1945 to 1965, in contrast, represents the golden age of formal planning. During this unprecedented period of peace and prosperity, academically trained experts played central roles in the modernization of the inner cities and the sprawl of the suburbs. But the planners’ clean-sweep approach to urban renewal and the massive destruction caused by highway construction provoked a revolt of the grassroots. Beginning in the Watts district of Los Angeles in 1965, mass uprisings escalated over the next three years into a national crisis of social disorder, racial and ethnic inequality, and environmental injustice. The postwar consensus of theory and practice was shattered, replaced by a fragmented profession ranging from defenders of top-down systems of computer-generated simulations to proponents of advocacy planning from the bottom up. Since the late 1980s, the ascendency of public-private partnerships in building the urban environment has favored the planners promoting systems approaches, who promise a future of high-tech “smart cities” under their complete control.
The relationship between the car and the city remains complex and involves numerous private and public forces, innovations in technology, global economic fluctuations, and shifting cultural attitudes that only rarely consider the efficiency of the automobile as a long-term solution to urban transit. The advantages of privacy, speed, ease of access, and personal enjoyment that led many to first embrace the automobile were soon shared and accentuated by transit planners as the surest means to realize the long-held ideals of urban beautification, efficiency, and accessible suburbanization. The remarkable gains in productivity provided by industrial capitalism brought these dreams within reach and individual car ownership became the norm for most American families by the middle of the 20th century. Ironically, the success in creating such a “car country” produced the conditions that again congested traffic, raised questions about the quality of urban (and now suburban) living, and further distanced the nation from alternative transit options. The “hidden costs” of postwar automotive dependency in the United States became more apparent in the late 1960s, leading to federal legislation compelling manufacturers and transit professionals to address the long-standing inefficiencies of the car. This most recent phase coincides with a broader reappraisal of life in the city and a growing recognition of the material limits to mass automobility.
Robert R. Gioielli
By the late 19th century, American cities like Chicago and New York were marvels of the industrializing world. The shock urbanization of the previous quarter century, however, brought on a host of environmental problems. Skies were acrid with coal smoke, and streams ran fetid with raw sewage. Disease outbreaks were common, and parks and green space were rare. Particular groups of urban residents responded to these hazards with a series of activist movements, from the 1890s until the end of the 20th century, to reform public and private policies and practices. Those environmental burdens were never felt equally, with the working class, poor, immigrants, and minorities bearing an overwhelming share of the city’s toxic load. By the 1930s, many of the Progressive era reform efforts were finally bearing fruit. Air pollution was regulated, access to clean water improved, and even America’s smallest cities built robust networks of urban parks. But despite this invigoration of the public sphere, after World War II, for many the solution to the challenges of a dense modern city was a private choice: suburbanization. Rather than continue to work to reform and reimagine the city, they chose to leave it, retreating to the verdant (and pollution-free) greenfields at the city’s edge. These moves, encouraged and subsidized by local and federal policies, provided healthier environments for the mostly white, middle-class suburbanites, but created a new set of environmental problems for the poor, working-class, and minority residents they left behind. Drained of resources and capital, cities struggled to maintain aging infrastructure and regulate remaining industry, and then exacerbated these problems with destructive urban renewal and highway construction projects.
These remaining urban residents responded with a dynamic series of activist movements that emerged out of the social and community activism of the 1960s and presaged the contemporary environmental justice movement.
Changing foodways, the consumption and production of food, access to food, and debates over food shaped the nature of American cities in the 20th century. As American cities transformed from centers of industrialization at the start of the 20th century to post-industrial societies at its end, food cultures in urban America shifted in response to the ever-changing urban environment. Cities remained centers of food culture, diversity, and food reform despite these shifts.
Growing populations and waves of immigration changed the nature of food cultures throughout the United States in the 20th century. These changes were significant, all contributing to an evolving sense of American food culture. For urban denizens, however, food choice and availability were dictated and shaped by a variety of powerful social factors, including class, race, ethnicity, gender, and laboring status. While cities possessed an abundance of food and a variety of locations in which to consume it, fresh food often remained difficult for the urban poor to obtain as the 20th century ended.
As markets expanded from 1900 to 1950, regional geography became a less important factor in determining what types of foods were available. In the second half of the 20th century, even global geography became less important to food choices. Citrus fruit from the West Coast was readily available in northeastern markets near the start of the century, and off-season fruits and vegetables from South America filled shelves in grocery stores by the end of the 20th century. Urban Americans became further disconnected from their food sources, but this dislocation spurred counter-movements that embraced ideas of local, seasonal foods and a rethinking of the city’s relationship with its food sources.
While American gambling has a historical association with the lawlessness of the frontier and with the wasteful leisure practices of Southern planters, it was in large cities where American gambling first flourished as a form of mass leisure, and as a commercial enterprise of significant scale. In the urban areas of the Mid-Atlantic, the Northeast, and the upper Midwest, for the better part of two centuries the gambling economy was deeply intertwined with municipal politics and governance, the practices of betting were a prominent feature of social life, and controversies over the presence of gambling, both legal and illegal, were at the center of public debate. In New York and Chicago in particular, but also in Cleveland, Pittsburgh, Detroit, Baltimore, and Philadelphia, gambling channeled money to municipal police forces and sustained machine politics. In the eyes of reformers, gambling corrupted governance and corroded social and economic interactions. Big city gambling has changed over time, often in a manner reflecting important historical processes and transformations in economics, politics, and demographics. Yet irrespective of such change, from the onset of Northern urbanization during the 19th century, through much of the 20th century, gambling held steady as a central feature of city life and politics. From the poolrooms where recently arrived Irish New Yorkers bet on horseracing after the Civil War, to the corner stores where black and Puerto Rican New Yorkers bet on the numbers game in the 1960s, the gambling activity that covered the urban landscape produced argument and controversy, particularly with respect to drawing the line between crime and leisure, and over the question of where and to what ends the money of the gambling public should be directed.
Gentrification is one of the most controversial issues in American cities today. But it also remains one of the least understood. Few agree on how to define it or whether it is a boon or a curse for cities. Gentrification has changed over time and has a history dating back to the early 20th century. Historically, gentrification has had a smaller demographic impact on American cities than suburbanization or immigration. But since the late 1970s, gentrification has dramatically reshaped cities like Seattle, San Francisco, and Boston. Furthermore, districts such as the French Quarter in New Orleans, New York City’s Greenwich Village, and Georgetown in Washington, DC have had an outsized influence on the political, cultural, and architectural history of cities. Gentrification thus must be examined alongside suburbanization as one of the major historical trends shaping the 20th-century American metropolis.
The Immigration Act of 1924 was in large part the result of a deep political and cultural divide in America between heavily immigrant cities and far less diverse small towns and rural areas. The 1924 legislation, together with growing residential segregation, midcentury federal urban policy, and postwar suburbanization, undermined scores of ethnic enclaves in American cities between 1925 and the 1960s. The deportation of Mexicans and their American children during the Great Depression, the incarceration of West Coast Japanese Americans during World War II, and the wartime and postwar shift of so many jobs to suburban and Sunbelt areas also reshaped many US cities in these years. The Immigration Act of 1965, which enabled the immigration of large numbers of people from Asia, Latin America, and, eventually, Africa, helped to revitalize many depressed urban areas and inner-ring suburbs. In cities and suburbs across the country, the response to the new immigration since 1965 has ranged from welcoming to hostile. The national debate over immigration in the early 21st century reflects both familiar and newer cultural, linguistic, religious, racial, and regional rifts. However, urban areas with a history of immigrant incorporation remain the most politically supportive of such people, just as they were a century ago.
Mass transit has been part of the urban scene in the United States since the early 19th century. Regular steam ferry service began in New York City in the early 1810s and horse-drawn omnibuses plied city streets starting in the late 1820s. Expanding networks of horse railways emerged by the mid-19th century. The electric streetcar became the dominant mass transit vehicle a half century later. During this era, mass transit had a significant impact on American urban development. Mass transit’s importance in the lives of most Americans started to decline with the growth of automobile ownership in the 1920s, except for a temporary rise in transit ridership during World War II. In the 1960s, congressional subsidies began to reinvigorate mass transit and heavy-rail systems opened in several cities, followed by light rail systems in several others in the next decades. Today concerns about environmental sustainability and urban revitalization have stimulated renewed interest in the benefits of mass transit.
By serving travelers and commerce, roads and streets unite people and foster economic growth. But as they develop, roads and streets also disrupt old patterns, upset balances of power, and isolate some as they serve others. The consequent disagreements leave historical records documenting social struggles that might otherwise be overlooked. For long-distance travel in America before the middle of the 20th century, roads were generally poor alternatives, resorted to when superior means of travel, such as river and coastal vessels, canal boats, or railroads, were unavailable. Most roads were unpaved, unmarked, and vulnerable to the effects of weather. Before the railroads, turnpikes and plank roads, though rare, could be much better for travelers willing to pay the toll. Even in towns, unpaved streets were common until the late 19th century, and persisted into the 20th. In the late 19th century, rapid urban growth, rural free delivery of the mails, and finally the proliferation of electric railways and bicycling contributed to growing pressure for better roads and streets. After 1910, the spread of the automobile accelerated the trend, but only with great controversy, especially in cities. Partly in response to the controversy, advocates of the automobile organized to promote state and county motor highways funded substantially by gasoline taxes; such roads were intended primarily for motor vehicles. In the 1950s, massive federal funds accelerated the trend; by then, motor vehicles were the primary transportation mode for both long and short distances. The consequences have been controversial, and alternatives have been attracting growing interest.
Housing in America has long stood as a symbol of the nation’s political values and a measure of its economic health. In the 18th century, a farmhouse represented Thomas Jefferson’s ideal of a nation of independent property owners; in the mid-20th century, the suburban house was seen as an emblem of an expanding middle class. Alongside those well-known symbols were a host of other housing forms—tenements, slave quarters, row houses, French apartments, loft condos, and public housing towers—that revealed much about American social order and the material conditions of life for many people.
Since the 19th century, housing markets have been fundamental forces driving the nation’s economy and a major focus of government policies. Home construction has provided jobs for skilled and unskilled laborers. Land speculation, housing development, and the home mortgage industry have generated billions of dollars in investment capital, while ups and downs in housing markets have been considered signals of major changes in the economy. Since the New Deal of the 1930s, the federal government has buttressed the home construction industry and offered economic incentives for home buyers, giving the United States the highest home ownership rate in the world. The housing market crash of 2008 slashed property values and sparked a rapid increase in home foreclosures, especially in places like Southern California and the suburbs of the Northeast, where housing prices had ballooned over the previous two decades. The real estate crisis led to government efforts to prop up the mortgage banking industry and to assist struggling homeowners. The crisis led, as well, to a drop in rates of home ownership, an increase in rental housing, and a growth in homelessness.
Home ownership remains a goal for many Americans and an ideal long associated with the American dream. The owner-occupied home—whether single-family or multifamily dwelling—is typically the largest investment made by an American family. Through much of the 18th and 19th centuries, housing designs varied from region to region. In the mid-20th century, mass production techniques and national building codes tended to standardize design, especially in new suburban housing. In the 18th century, the family home was a site of waged and unwaged work; it was the center of a farm, plantation, or craftsman’s workshop. Two and a half centuries later, a house was a consumer good: its size, location, and decor marked the family’s status and wealth.
American cities expanded during the late 19th century, as industrial growth was fueled by the arrival of millions of immigrants and migrants. Poverty rates escalated, overwhelming existing networks of private charities. Progressive reformers created relief organizations and raised public awareness of urban poverty. The devastating effects of the Great Depression inspired greater focus on poverty from state and federal agencies. The Social Security Act, the greatest legacy of the New Deal, would provide a safety net for millions of Americans. During the postwar era of general prosperity, federal housing policies often reinforced and deepened racial and socioeconomic inequality and segregation. The 1960s War on Poverty created vital aid programs that expanded access to food, housing, and health care. These programs also prompted a rising tide of conservative backlash against perceived excesses. Fueled by such critical sentiments, the Reagan administration implemented dramatic cuts to assistance programs. Later, the Clinton administration further reformed welfare by tying aid to labor requirements. Throughout the 20th century, the urban homeless struggled to survive in hostile environments. Skid row areas housed the homeless for decades, providing shelter, food, and social interaction within districts that were rarely visited by the middle and upper classes. The loss of such spaces to urban renewal and gentrification in many cities left many of the homeless unsheltered and dislocated.
Ansley T. Erickson
“Urban infrastructure” calls to mind railways, highways, and sewer systems. Yet the school buildings—red brick, limestone, or concrete, low-slung, turreted, or glass-fronted—that hold and seek to shape the city’s children are ubiquitous forms of infrastructure as well. Schools occupy one of the largest line items in a municipal budget, and as many as a fifth of a city’s residents spend the majority of their waking hours in school classrooms, hallways, and gymnasiums. In the 19th and 20th centuries urban educational infrastructure grew, supported by developing consensus for publicly funded and publicly governed schools (if rarely fully accessible to all members of the public). Even before state commitment to other forms of social welfare, from pensions to public health, and other infrastructure, from transit to fire protection, schooling was a government function.
This commitment to public education ultimately was national, but schools in cities had their own story. Schooling in the United States is chiefly a local affair: Constitutional responsibility for education lies with the states; power is then further decentralized as states entrust decisions about school function and funding to school districts. School districts can be as small as a single town or a part of a city. Such localism is one reason that it is possible to speak about schools in U.S. cities as having a particular history, determined as much by the specificities of urban life as by national questions of citizenship, economy, religion, and culture.
While city schools have been distinct, they have also been nationally influential. Urban scale both allowed for and demanded the most extensive educational system-building. Urban growth and diversity galvanized innovation, via exploration in teaching methods, curriculum, and understanding of children and communities. And it generated intense conflict. Throughout U.S. history, urban residents from myriad social, political, religious, and economic positions have struggled to define how schools would operate, for whom, and who would decide.
During the 19th and 20th centuries, U.S. residents struggled over the purposes, funding, and governance of schools in cities shaped by capitalism, nativism, and white supremacy. They built a commitment to schooling as a public function of their cities, with many compromises and exclusions. In the 21st century, old struggles re-emerged in new form, perhaps raising the question of whether schools will continue as public, urban infrastructure.
Conceptions of what constitutes a street gang or a youth gang have varied since the seminal sociological studies on these entities in the 1920s. Organizations of teenage youths and young adults in their twenties, congregating in public spaces and acting collectively, were fixtures of everyday life in American cities throughout the 20th century. While few studies historicize gangs in their own right, historians in a range of subfields cast gangs as key actors in critical dimensions of the American urban experience: the formation and defense of ethno-racial identities and communities; the creation and maintenance of segregated metropolitan spaces; the shaping of gender norms and forms of sociability in working-class districts; the structuring of contentious political mobilization challenging police practices and municipal policies; the evolution of underground and informal economies and organized crime activities; and the epidemic of gun violence that spread through minority communities in many major cities at the end of the 20th and beginning of the 21st centuries.
Although groups of white youths patrolling the streets of working-class neighborhoods and engaging in acts of defensive localism were commonplace in the urban Northeast, Mid-Atlantic, and Midwest states by the mid-19th century, street gangs exploded onto the urban landscape in the early 20th century as a consequence of massive demographic changes related to the wave of immigration from Europe, Asia, and Latin America and the migration of African Americans from the South. As immigrants and migrants moved into urban working-class neighborhoods and industrial workplaces, street gangs proliferated at the boundaries of ethno-racially defined communities, shaping the context within which immigrant and second-generation youths negotiated Americanization and learned the meanings of race and ethnicity. Although social workers in some cities noted the appearance of some female gangs by the 1930s, the milieu of youth gangs during this era was male dominated, and codes of honor and masculinity were often at stake in increasingly violent clashes over territory and resources like parks and beaches.
The interplay of race, ethnicity, and masculinity continued to shape the world of gangs in the 1940s and 1950s, when white male gangs claiming to defend the whiteness of their communities used terror tactics to reinforce the boundaries of ghettos and barrios in many cities. Such aggressions spurred the formation of fighting gangs in black and Latino neighborhoods, where youths entered into at times deadly combat against their aggressors but also fought for honor, respect, and status with rivals within their communities. In the 1960s and 1970s, with civil rights struggles and ideologies of racial empowerment circulating through minority neighborhoods, some of these same gangs, often with the support of community organizers affiliated with political organizations like the Black Panther Party, turned toward defending the rights of their communities and participating in contentious politics. However, such projects were cut short by the fierce repression of gangs in minority communities by local police forces, working at times in collaboration with the Federal Bureau of Investigation. By the mid-1970s, following the withdrawal of the Black Panthers and other mediating organizations from cities like Chicago and Los Angeles, so-called “super-gangs” claiming the allegiance of thousands of youths began federating into opposing camps—“People” against “Folks” in Chicago, “Crips” against “Bloods” in LA—to wage war for control of emerging drug markets. In the 1980s and 1990s, with minority communities dealing with high unemployment, cutbacks in social services, failing schools, hyperincarceration, drug trafficking, gun violence, and toxic relations with increasingly militarized police forces waging local “wars” against drugs and gangs, gangs proliferated in cities throughout the urban Sun Belt. Their prominence within popular and political discourse nationwide made them symbols of the urban crisis and of the cultural deficiencies that some believed had caused it.
Rioting in the United States since 1800 has adhered to three basic traditions: regulating communal morality, defending community from outside threats, and protesting government abuse of power. Typically, crowds have had the shared interests of class, group affiliation, geography, or a common enemy. Since American popular disorder has frequently served as communal policing, the state—especially municipal police—has had an important role in facilitating, constraining, or motivating unrest.
Rioting in the United States retained strong legitimacy and popular resonance from 1800 to the 1960s. In the decades after the founding, Americans adapted English traditions of restrained mobbing to more diverse, urban conditions. During the 19th century, however, rioting became more violent and ambitious as Americans—especially white men—asserted their right to use violence to police heterogeneous public space. In the 1840s and 1850s, whites combined the lynch mob with the disorderly crowd to create a lethal and effective instrument of white settler sovereignty both in the western territories and in the states. From the 1860s to the 1930s, white communities across the country, particularly in the South, used racial killings and pogroms to seize political power and establish and enforce Jim Crow segregation. Between the 1910s and the 1970s, African Americans and Latinos, increasingly living in cities, rioted to defend their communities against civilian and police violence. The frequency of rioting declined after the urban rebellions of the 1960s, partly due to the militarization of local police. Yet the continued use of aggressive police tactics against racial minorities has contributed to a surge in rioting in US cities in the early 21st century.
Zoning is a legal tool employed by local governments to regulate land development. It determines the use, intensity, and form of development in localities through enforcement of the zoning ordinance, which consists of a text and an accompanying map that divides the locality into zones. Zoning is an exercise of the police powers by local governments, typically authorized through state statutes. Components of what became part of the zoning process emerged piecemeal in U.S. cities during the 19th century in response to development activities deemed injurious to the health, safety, and welfare of the community. American zoning was influenced by and drew upon models already in place in German cities early in the 20th century. Following the First National Conference on City Planning and the Problems of Congestion, held in Washington, DC in 1909, the zoning movement spread throughout the United States. The first attempt to apply a version of the German zoning model to a U.S. city was in New York City in 1916. In the landmark U.S. Supreme Court case, Village of Euclid v. Ambler Realty Co. (1926), zoning was ruled as a constitutional exercise of the police power, a precedent-setting decision that defined the parameters of land use regulation for the remainder of the 20th century.
Zoning was explicitly intended to sanction regulation of real property use to serve the public interest, but frequently, it was used to facilitate social and economic segregation. This was most often accomplished by controlling the size and type of housing, by dictating where high-density housing (for lower-income residents) could be placed in relation to commercial and industrial uses, and in some cases through explicit use of racial zoning categories for zones. The U.S. Supreme Court ruled, in Buchanan v. Warley (1917), that a racial zoning plan of the city of Louisville, Kentucky violated the due process clause of the 14th Amendment. The decision, however, did not directly address the discriminatory aspects of the law. As a result, efforts to fashion legally acceptable racial zoning schemes persisted late into the 1920s. These were succeeded by the use of restrictive covenants to prohibit black (and other minority) occupancy in certain white neighborhoods (until declared unconstitutional in the late 1940s). More widespread was the use of highly differentiated residential zoning schemes and real estate steering that embedded racial and ethnic segregation into the residential fabric of American communities.
The Standard State Zoning Enabling Act (SSZEA) of 1924 facilitated the spread of zoning. Disseminated by the U.S. Department of Commerce, the SSZEA created a relatively uniform zoning process in U.S. cities, although depending upon their size and functions, there were definite differences in the complexity and scope of zoning schemes. Localities followed the basic form prescribed by the SSZEA largely to minimize the chance of the zoning ordinance being struck down by the courts. Nonetheless, from the 1920s through the 1970s, thousands of court cases tested aspects of zoning, but only a few reached the federal courts, and typically, zoning advocates prevailed.
In the 1950s and 1960s, critics of zoning charged that the fragmented city was an unintended consequence. This critique was a response to concerns that zoning created artificial separations among the various types of development in cities, and that this undermined their vitality. Zoning nevertheless remained a cornerstone of U.S. urban and suburban land regulation, and new techniques such as planned unit developments, overlay zones, and form-based codes introduced needed flexibility to reintegrate urban functions previously separated by conventional zoning approaches.