During the 1890s, the word segregation became the preferred term for the practice of coercing different groups of people, especially those designated by race, to live in separate and unequal urban residential neighborhoods. In the southern states of the United States, segregationists imported the word—originally used in the British colonies of Asia—to describe Jim Crow laws, and, in 1910, whites in Baltimore passed a “segregation ordinance” mandating separate black and white urban neighborhoods. Copycat legislation sprang up in cities across the South and the Midwest. But in 1917, a multiracial team of lawyers from the fledgling National Association for the Advancement of Colored People (NAACP) mounted a successful legal challenge to these ordinances in the U.S. Supreme Court—even as urban segregation laws were adopted elsewhere in the world, most notably in South Africa. The movement for legislated racial segregation in the United States collapsed just as African Americans began migrating in large numbers to cities in all regions of the country, resulting in waves of anti-black mob violence. Segregationists were forced to rely on nonstatutory or formally nonracial techniques. In Chicago, an alliance of urban reformers and real estate professionals invented alternatives to explicitly racist segregation laws. The practices they promoted nationwide created one of the most successful forms of urban racial segregation in world history, rivaling and finally outliving South African apartheid. Understanding how this system came into being and how it persists today requires understanding both how the Chicago segregationists were connected to counterparts elsewhere in the world and how they adapted practices of city-splitting to suit the peculiarities of racial politics in the United States.
Jessica Ellen Sewell
From 1800 to 2000, cities grew enormously and saw an expansion of public spaces to serve the varied needs of a diverse population living in ever more cramped and urban circumstances. While a wide range of commercial semipublic spaces became common in the late nineteenth century, parks and streets were the best examples of truly public spaces with full freedom of access. Changes in the design and management of streets, sidewalks, squares, parks, and plazas during this period reflect changing ideas about the purpose of public space and how it should be used. Streets shifted from being used for a wide range of activities, including vending, playing games, and storing goods, to becoming increasingly specialized spaces of movement, designed and managed by the early twentieth century for automobile traffic. Sidewalks, which in the early nineteenth century were paid for and liberally used by adjacent businesses, were similarly specialized as spaces of pedestrian movement. However, the tradition of using streets and sidewalks as spaces of public celebration and public speech remained strong throughout the period. During parades and protests, streets and sidewalks were temporarily remade as spaces for the performance of the public, and the daily activities of circulation and commerce were set aside. In 1800, the main open public spaces in cities were public squares or commons, often used for militia training and public celebration. In the second half of the nineteenth century, these were augmented by large picturesque parks. Designed as an antidote to urbanity, these parks served the public as places for leisure, redefining public space as a polite leisure amenity rather than a place for people to congregate as a public. The addition of playgrounds, recreational spaces, and public plazas in the twentieth century served both the physical and mental health of the public.
In the late 20th century, responding to neoliberal ideas and urban fiscal crises, the ownership and management of public parks and plazas was increasingly privatized, further challenging public accessibility.
Many Asian American neighborhoods faced displacement after World War II because of urban renewal or redevelopment under the 1949 Housing Act. In the name of blight removal and slum clearance, this Act allowed local elites to procure federal money to seize land designated as blighted, clear it of its structures, and sell this land to private developers—in the process displacing thousands of residents, small businesses, and community institutions. San Francisco’s Fillmore District, a multiracial neighborhood that housed the city’s largest Japanese American and African American communities, experienced this postwar redevelopment. Like many Asian American neighborhoods that shared space with other communities of color, the Fillmore formed at the intersection of class inequality and racism, and it was this intersection of structural factors that led to substandard urban conditions. Rather than recognize the root causes of urban decline, San Francisco urban and regional elites argued that the Fillmore was among the city’s most blighted neighborhoods and advocated for the neighborhood’s destruction in the name of the public good. They also targeted the Fillmore because their postwar plans for remaking the city’s political economy envisioned it as (1) a space to house white-collar workers in the postwar economy and (2) an Asian-themed space for tourism that connected the city symbolically and economically to Japan, an important U.S. postwar ally. Over four decades, these elite-directed plans displaced more than 20,000 residents in two phases, severely damaging the community. The Fillmore’s redevelopment, then, provides a window into other cases of redevelopment and aids further investigations of the connection between Asian Americans and urban crisis. It also sheds light on the deeper history of displacement in the Asian American experience and contextualizes contemporary gentrification in Asian American neighborhoods.