Between 1820 and 1924, nearly thirty-six million immigrants entered the United States. Prior to the Civil War, the vast majority of immigrants were northern and western Europeans, though the West Coast received Chinese immigration from the late 1840s onward. In mid-century, the United States received an unprecedented influx of Irish and German immigrants, many of whom were Catholic and poor. At the turn of the 20th century, the major senders of immigrants shifted to southern and eastern Europe, and Asians and Mexicans made up a growing portion of newcomers. Throughout the long 19th century, urban settlement remained a popular option for immigrants, and they contributed to the social, cultural, political, economic, and physical growth of the cities in which they resided. Foreign-born workers also provided much-needed labor for America’s industrial development. At the same time, intense nativism emerged in cities in opposition to the presence of foreigners, who appeared to be unfit for American society, threats to Americans’ jobs, or sources of urban problems such as poverty. Anti-immigrant sentiment resulted in state and federal laws designed to prevent the immigration of foreigners deemed undesirable, such as the poor, southern and eastern Europeans, and Asians. Cities constituted an integral part of the 19th-century American immigration experience.
Jeffrey F. Taffet
In the first half of the 20th century, and more actively in the post–World War II period, the United States government used economic aid programs to advance its foreign policy interests. US policymakers generally believed that support for economic development in poorer countries would help create global stability, which would limit military threats and strengthen the global capitalist system. Aid was offered on a country-by-country basis to guide political development; its implementation reflected views about how humanity had advanced in richer countries and how it could and should similarly advance in poorer regions. Humanitarianism did play a role in driving US aid spending, but it was consistently secondary to political considerations. Overall, while funding varied over time, amounts spent were always substantial. Between 1946 and 2015, the United States offered almost $757 billion in economic assistance to countries around the world—$1.6 trillion in inflation-adjusted 2015 dollars. Assessing the impact of this spending is difficult; there has long been disagreement among scholars and politicians about how much economic growth, if any, resulted from aid spending and similar disputes about its utility in advancing US interests. Nevertheless, for most political leaders, even without solid evidence of successes, aid often seemed to be the best option for constructively engaging poorer countries and trying to create the kind of world in which the United States could be secure and prosperous.
Ronald Reagan’s foreign policy legacy remains hotly contested, and as new archival sources come to light, those debates are more likely to intensify than to recede into the background. In dealings with the Soviet Union, the Reagan administration set the superpowers on a course for the (largely) peaceful end of the Cold War. Reagan began his outreach to Soviet leaders almost immediately after taking office and enjoyed some success, even if public fears of Reagan as a “button-pusher” remained the dominant theme of the period. Mikhail Gorbachev’s election to the post of General Secretary proved the turning point. Reagan, now confident in US strength, and Gorbachev, keen to reduce the financial burden of the arms race, ushered in a new, cooperative phase of the Cold War. Elsewhere, in particular Latin America, the administration’s focus on fighting communism led it to support human rights–abusing regimes even as it lambasted Moscow’s transgressions in that regard. Even so, over the course of the 1980s, the United States began pushing for democratization around the world, even where Reagan and his advisors had initially resisted it for fear of a communist takeover. In part this was a result of public pressure, but the White House also recognized and came to support the rising tide of democratization. When Reagan left office, a great many countries that had been authoritarian were no longer, often at least in part because of US policy. US–Soviet relations had improved to such an extent that Reagan’s successor, Vice President George H. W. Bush, worried that the United States had gone too far in working with Gorbachev and been hoodwinked.
Betsy A. Beasley
American cities have been transnational in nature since the first urban spaces emerged during the colonial period. Yet the specific shape of the relationship between American cities and the rest of the world has changed dramatically in the intervening years. In the mid-20th century, the increasing integration of the American economy into the global economy began to reshape US cities. In the Northeast and Midwest, the once robust manufacturing centers and factories that had sustained their residents—and their tax bases—left, first for the South and West, and then for cities and towns outside the United States, as capital grew more mobile and businesses sought lower wages and tax incentives elsewhere. That same global capital, combined with federal subsidies, created boomtowns in the once-rural South and West. Nationwide, city boosters began to pursue alternatives to heavy industry, once understood to be the undisputed guarantor of a healthy urban economy. Increasingly, US cities organized themselves around the service economy, both in high-end, white-collar sectors like finance, consulting, and education, and in low-end pink-collar and no-collar sectors like food service, hospitality, and health care. A new legal infrastructure related to immigration made US cities more racially, ethnically, and linguistically diverse than ever before.
At the same time, some US cities were agents of economic globalization themselves. Dubbed “global cities” by celebrants and critics of the new economy alike, these cities achieved power and prestige in the late 20th century not only because they had survived the ruptures of globalization but because they helped to determine its shape. By the end of the 20th century, cities that were not routinely listed among the “global city” elite jockeyed to claim “world-class” status, investing in high-end art, entertainment, technology, education, and health care amenities to attract and retain the high-income white-collar workers understood to be the last hope for cities hollowed out by deindustrialization and global competition. Today, the extreme differences between “global cities” and the rest of US cities, and the extreme socioeconomic stratification seen in cities of all stripes, are a key concern of urbanists.
The foreign relations of the Jacksonian age reflected Andrew Jackson’s own sense of the American “nation” as long victimized by non-white enemies and weak politicians. His goal as president from 1829 to 1837 was to restore white Americans’ “sovereignty,” to empower them against other nations both within and beyond US territory. Three priorities emerged from this conviction.
First, Jackson was determined to deport the roughly 50,000 Creeks, Cherokees, Choctaws, Chickasaws, and Seminoles living in southern states and territories. He saw them as hostile nations who threatened American safety and checked American prosperity. Far from a domestic issue, Indian Removal was an imperial project that set the stage for later expansion over continental and oceanic frontiers.
Second and somewhat paradoxically, Jackson sought better relations with Great Britain. These were necessary because the British Empire was both the main threat to US expansion and the biggest market for slave-grown exports from former Indian lands. Anglo-American détente changed investment patterns and economic development throughout the Western Hemisphere, encouraging American leaders to appease London even when patriotic passions argued otherwise.
Third, Jackson wanted to open markets and secure property rights around the globe, by treaty if possible but by force when necessary. He called for a larger navy, pressed countries from France to Mexico for outstanding debts, and embraced retaliatory strikes on “savages” and “pirates” as far away as Sumatra. Indeed, the Jacksonian age brought a new American presence in the Pacific. By the mid-1840s the United States was the dominant power in the Hawaiian Islands and a growing force in China. The Mexican War that followed made the Union a two-ocean colossus—and pushed its regional tensions to the breaking point.
Jason C. Parker
The decolonization of the European overseas empires had its intellectual roots early in the modern era, but its culmination occurred during the Cold War that loomed large in post-1945 international history. This culmination thus coincided with the American rise to superpower status and presented the United States with a dilemma. While philosophically sympathetic to the aspirations of anticolonial nationalist movements abroad, the United States’ vastly greater postwar global security burdens made it averse to the instability that decolonization might bring and that communists might exploit. This fear, and the need to share those burdens with European allies who were themselves still colonial landlords, led Washington to proceed cautiously. The three “waves” of the decolonization process—medium-sized in the late 1940s, large in the half-decade around 1960, and small in the mid-1970s—prompted the American use of a variety of tools and techniques to influence how it unfolded.
Prior to independence, this influence was usually channeled through the metropolitan authority then winding down. After independence, Washington continued and often expanded the use of these tools, in most cases on a bilateral basis. In some theaters, such as Korea, Vietnam, and the Congo, through the use of certain of these tools, notably covert espionage or overt military operations, Cold War dynamics enveloped, intensified, and repossessed local decolonization struggles. In most theaters, other tools, such as traditional or public diplomacy or economic or technical development aid, kept the Cold War in the background as a local transition unfolded. In all cases, the overriding American imperative was to minimize instability and neutralize actors on the ground who could invite communist gains.
R. Joseph Parrott
The United States never sought to build an empire in Africa in the 19th and 20th centuries, as did European nations from Britain to Portugal. However, economic, ideological, and cultural affinities gradually encouraged the development of relations with the southern third of the continent (the modern Anglophone nations of South Africa, Zimbabwe, Zambia, Namibia, the former Portuguese colonies of Mozambique and Angola, and a number of smaller states). With official ties limited for decades, missionaries and business concerns built a small but influential American presence, mostly in the growing European settler states. This state of affairs made the United States an important trading partner during the 20th century, but it also reinforced the idea of a white Christian civilizing mission as justification for the domination of black peoples. The United States served as a comparison point for the construction of legal systems of racial segregation in southern Africa, even as it became more politically involved in the region as part of its ideological competition with the Soviet Union.
As Europe’s empires dissolved after World War II, official ties to white settler states such as South Africa, Angola, and Rhodesia (modern Zimbabwe) brought the United States into conflict with mounting demands for decolonization, self-determination, and racial equality—both international and domestic. Southern Africa illustrated the gap between a Cold War strategy predicated on Euro-American preponderance and national traditions of liberty and democracy, eliciting protests from civil and human rights groups that culminated in the successful anti-apartheid movement of the 1980s. Though still a region of low priority at the beginning of the 21st century, American involvement in southern Africa evolved to emphasize the pursuit of social and economic improvement through democracy promotion, emergency relief, and health aid—albeit with mixed results. The history of U.S. relations with southern Africa therefore illustrates the transformation of trans-Atlantic racial ideologies and politics over the last 150 years, first in the construction of white supremacist governance and later in the eventual rejection of this model.
James Graham Wilson
The Cold War may have ended on the evening of November 9, 1989, when East German border guards opened up checkpoints and allowed their fellow citizens to stream into West Berlin; it certainly was over by January 28, 1992, when U.S. president George H. W. Bush delivered his annual State of the Union Address one month after President Mikhail Gorbachev had announced his resignation and the end of the Soviet Union. After the Berlin Wall came down, Bush and Gorbachev spoke of the Cold War in the past tense in person and on the telephone. The reunification of Germany and the U.S. military campaign in the Persian Gulf confirmed that reality. In January 1991, polls indicated that, for the first time, a majority of Americans believed that the Cold War was over. However, the poll results obscured the substantial foreign and domestic crises, challenges, and opportunities created by the end of the Cold War that occupied President Bush and his national-security team between November 1989 and Bush’s defeat in the 1992 presidential election and the inauguration of William Jefferson Clinton as America’s first post–Cold War president in January 1993.
The U.S. relationship with Southeast Asia has always reflected the state of U.S. interactions with the three major powers that surround the region: Japan, China, and, to a lesser extent, India. Initially, Americans looked at Southeast Asia as an avenue to the rich markets that China and India seemed to offer, while also finding trading opportunities in the region itself. Later, American missionaries sought to save Southeast Asian souls, while U.S. officials often viewed Southeast Asia as a region that could tip the overall balance of power in East Asia if its enormous resources fell under the control of a hostile power.
American interest expanded enormously with the annexation of the Philippines in 1899, an outgrowth of the Spanish-American War. That acquisition resulted in nearly a half century of American colonial rule, while American investors increased their involvement in exploiting the region’s raw materials, notably tin, rubber, and petroleum, and missionaries expanded into areas previously closed to them.
American occupation of the Philippines heightened tensions with Japan, which sought the resources of Southeast Asia, particularly in French Indochina, Malaya, and the Dutch East Indies (today’s Indonesia). Eventually, clashing ambitions and perceptions brought the United States into World War II. Peeling those territories away from Japan during the war was a key American objective. Americans resisted the Japanese in the Philippines and in Burma, but after Japan quickly subdued Southeast Asia, there was little contact in the region until the reconquest began in 1944. American forces participated in the liberation of Burma and also fought in the Dutch Indies and the Philippines before the war ended in 1945.
After the war, the United States had to face the independence struggles in several Southeast Asian countries, even as the Grand Alliance fell apart and the Cold War emerged, which for the next several decades overshadowed almost everything. American efforts to prevent communist expansion in the region inhibited American support for decolonization and led to war in Vietnam and Laos and covert interventions elsewhere.
Since the end of the Cold War in 1991, relations with most of Southeast Asia have generally been normal, except with Burma/Myanmar, where a brutal military junta ruled. The opposition, led by the charismatic Aung San Suu Kyi, found support in the United States. More recently, American concerns with China’s new assertiveness, particularly in the South China Sea, have resulted in even closer U.S. relations with Southeast Asian countries.
Oil played a central role in shaping US policy toward Iraq over the course of the 20th century. The United States first became involved in Iraq in the 1920s as part of an effort to secure a role for American companies in Iraq’s emerging oil industry. As a result of State Department efforts, American companies gained a 23.75 percent ownership share of the Iraq Petroleum Company in 1928. In the 1940s, US interest in the country increased as a result of the Cold War with the Soviet Union. To defend against a perceived Soviet threat to Middle East oil, the US supported British efforts to “secure” the region. After nationalist officers overthrew Iraq’s British-supported Hashemite monarchy in 1958 and established friendly relations with the Soviet Union, the United States cultivated an alliance with the Iraqi Baath Party as an alternative to the Soviet-backed regime. The effort to cultivate an alliance with the Baath foundered as a result of the Baath’s perceived support for Arab claims against Israel. The breakdown of US-Baath relations led the Baath to forge an alliance with the Soviet Union. With Soviet support, the Baath nationalized the Iraq Petroleum Company in 1972. Rather than resulting in a “supply cutoff,” Soviet economic and technical assistance allowed for a rapid expansion of the Iraqi oil industry and an increase in Iraqi oil flowing to world markets. As Iraq experienced a dramatic oil boom in the 1970s, the United States looked to the country as a lucrative market for US export goods and adopted a policy of accommodation with regard to the Baath. This policy of accommodation gave rise to close strategic and military cooperation throughout the 1980s as Iraq waged war against Iran. When Iraq invaded Kuwait and seized control of its oil fields in 1990, the United States shifted to a policy of Iraqi containment.
The United States organized an international coalition that quickly ejected Iraqi forces from Kuwait, but chose not to pursue regime change for fear of destabilizing the country and wider region. Throughout the 1990s, the United States adhered to a policy of Iraqi containment but came under increasing pressure to overthrow the Baath and dismantle its control over the Iraqi oil industry. In 2003, the United States seized upon the 9/11 terrorist attacks as an opportunity to implement this policy of regime change and oil reprivatization.