International law is the set of rules, formally agreed by treaty or understood as customary, by which nation-states interact with each other in a form of international society. Across the history of U.S. foreign relations, international law has provided both an animating vision, or ideology, for various American projects of world order, and a practical tool for the advancement of U.S. power and interests. As the American role in the world changed from the late 18th century onward, so too did the role of international law in U.S. foreign policy. Initially, international law was a source of authority to which the weak American government could appeal on questions of independence, sovereignty, and neutrality. As U.S. power grew in the 19th and early 20th centuries, international law became variously a liberal project for the advancement of peace, a civilizational discourse for justifying violence and dispossession, and a bureaucratic and commercial tool for the expansion of empire. With the advent of formal inter-governmental organizations in the 20th century, the traditional American focus on neutrality faded, to be replaced by an emphasis on collective security. But as the process of decolonization diluted the strength of the United States and its allies in the parliamentary chambers of the world’s international organizations, Washington increasingly advanced its own interpretations of international law, and opted out of a number of international legal regimes. At the same time, Americans increasingly came to see international law as a vehicle to advance the human rights of individuals over the sovereign rights of states.
Economic nationalism tended to dominate U.S. foreign trade policy throughout the long 19th century, from the end of the American Revolution to the beginning of World War I, owing to a pervasive American sense of economic and geopolitical insecurity and American fear of hostile powers, especially the British but also the French and Spanish and even the Barbary States. Following the U.S. Civil War, leading U.S. protectionist politicians sought to counter European trade policies and to create a U.S.-dominated customs union in the Western Hemisphere. American proponents of trade liberalization increasingly found themselves outnumbered in the halls of Congress, as the “American System” of economic nationalism grew in popularity alongside the perceived need for foreign markets. Protectionist advocates in the United States viewed the American System as a panacea that promised not only to provide the federal government with revenue but also to insulate American infant industries artificially from undue foreign-market competition through high protective tariffs and subsidies, and to retaliate against real and perceived threats to U.S. trade.
Throughout this period, the United States itself underwent a great struggle over foreign trade policy. By the late 19th century, the era’s boom-and-bust global economic system led to a growing perception that the United States needed more access to foreign markets as an outlet for the country’s surplus goods and capital. But whether the United States should obtain foreign market access through free trade or through protectionism became the subject of a great debate over the proper course of U.S. foreign trade policy. By the time the United States acquired a colonial empire from the Spanish in 1898, this debate over U.S. foreign trade policy had effectively merged into debates over the course of U.S. imperial expansion. The country’s more expansionist-minded economic nationalists came out on top. The overwhelming 1896 victory of William McKinley—the Republican party’s “Napoleon of Protection”—marked the beginning of substantial expansion of U.S. foreign trade through a mixture of protectionism and imperialism in the years leading up to World War I.
Kathryn C. Statler
U.S.-French relations are long-standing, complex, and primarily cooperative in nature. Various crises have punctuated long periods of stability in the alliance, but after each conflict the Franco-American friendship emerged stronger than ever. Official U.S.-French relations began during the early stages of the American Revolution, when Louis XVI’s regime came to America’s aid by providing money, arms, and military advisers. French assistance, best symbolized by the Marquis de Lafayette, was essential to the revolution’s success. The subsequent French Revolution and Napoleon Bonaparte’s rise to power also benefited the United States, as Napoleon’s woes in Europe and the Caribbean forced him to sell the entire Louisiana territory to the United States in 1803. Franco-American economic and cultural contacts increased throughout the 19th century, as trade between the two countries prospered and as Americans flocked to France to study art, architecture, music, and medicine. The French gift of the Statue of Liberty in the late 19th century solidified Franco-American bonds, which became even more secure during World War I. Indeed, during the war, the United States provided France with trade, loans, military assistance, and millions of soldiers, viewing such aid as repayment for French help during the American Revolution. World War II once again saw the United States fighting in France to liberate the country from Nazi control. The Cold War complicated the Franco-American relationship in new ways as American power waxed and French power waned. Washington and Paris clashed over military conflict in Vietnam, the Suez Crisis, and European security (the North Atlantic Treaty Organization or NATO, in particular) during the 1950s and 1960s. Ultimately, after French President Charles de Gaulle’s retirement, the Franco-American alliance stabilized by the mid-1970s and has flourished ever since, despite brief moments of crisis, such as the 2003 Second Gulf War in Iraq.
Thomas P. Cavanna
In its most general sense, grand strategy can be defined as the overarching vision that shapes a state’s foreign policy and approach to national security. Like any strategy, it requires the coherent articulation of the state’s ends and means, which necessitates prioritizing vital interests, identifying key threats and opportunities, and (within certain limits) adapting to circumstances. What makes it truly “grand” is that it encompasses both wartime and peacetime, harnesses immediate realities to long-term objectives, and requires the coordination of all instruments of power (military, economic, etc.). Although American leaders have practiced grand strategic thinking since the early days of the Republic, the concept of grand strategy itself only started to emerge during World War I due to the expansion and diversification of the state’s resources and prerogatives, the advent of industrial warfare, and the growing role of populations in domestic politics and international conflicts. Moreover, it was only during World War II that it detached itself from military strategy and gained real currency among decision-makers. The contours, desirability, and very feasibility of grand strategy have inspired lively debates. However, many scholars and leaders consider it a worthy (albeit complex) endeavor that can reduce the risk of resource-squandering, signal intentions to both allies and enemies, facilitate adjustments to international upheavals, and establish a baseline for accountability. America’s grand strategy evolved from relative isolationism to full-blown liberal internationalism after 1945. Yet its conceptualization and implementation are inherently contentious processes because of political/bureaucratic infighting and recurrent dilemmas such as the uncertain geographic delimitation of US interests, the clash of ideals and Realpolitik, and the tension between unilateralism and multilateralism. 
The end of the Cold War, the 9/11 attacks, China’s rise, and other challenges have further compounded those lines of fracture.
U.S. imperialism took a variety of forms in the early 20th century, ranging from colonies in Puerto Rico and the Philippines to protectorates in Cuba, Panama, and other countries in Latin America, and open door policies such as that in China. Formal colonies would be ruled with U.S.-appointed colonial governors and supported by U.S. troops. Protectorates and open door policies promoted business expansion overseas through American oversight of foreign governments and, in the case of threats to economic and strategic interests, the deployment of U.S. marines. In all of these imperial forms, U.S. empire-building both reflected and shaped complex social, cultural, and political histories with ramifications for both foreign nations and America itself.
David A. Nichols
From 1783 to 1830, American Indian policy reflected the new American nation-state’s desire to establish its own legitimacy and authority, by controlling Native American peoples and establishing orderly and prosperous white settlements in the continental interior. The Federalists focused on securing against Native American claims and attacks several protected enclaves of white settlement (Ohio, Kentucky, Tennessee), established—often violently—during the Revolutionary War. They used treaties to draw a legal boundary between these enclaves and Indian communities, and annuities and military force to keep Indians on their side of the line. The Jeffersonian Republicans adopted a more expansive plan of development, coupled with the promotion of Native American dependency. Treaty commissioners persuaded chiefs to cede road easements and riverfront acreage that the government used to link and develop dispersed white settlements. Meanwhile, the War Department built trading factories whose cheap merchandise would lure Indians into commercial dependency, and agents offered Indian families agricultural equipment and training, hoping that Native American farmers would no longer need “extensive forests” to support themselves. These pressures helped engender nativist movements in the Old Northwest and southeast, and Indian men from both regions fought the United States in the War of 1812, reinforcing frontier settlers’ view that Indians were a security threat. After this war’s end, the United States adopted a strategy of containment, pressuring Indian leaders to cede most of their peoples’ lands, confining Indians to enclaves, financing vocational schooling for Indian children, and encouraging Native peoples voluntarily to move west of the Mississippi. This policy, however, proved too respectful of Indian autonomy for the frontier settlers and politicians steadily gaining influence in the national government. 
After these settlers elected one of their own, Andrew Jackson, to the presidency, American Indian policy would enter a much more coercive and violent phase, as white Americans redefined the nation-state as a domain of white supremacy ethnically cleansed of indigenous peoples.
The US relationship with the Republic of Indonesia has gone through three distinct phases. From 1945 until 1966, Indonesia’s politics and foreign policy were driven by the imperatives of decolonization and nation building, dominated by its founding President Sukarno and cleaved by bitter rivalry between secular political forces, regional movements, Islamic parties and organizations, the Indonesian Communist Party (PKI), and the armed forces. In the aftermath of the September 30th Movement, an alleged coup attempt by the PKI, the armed forces under the leadership of General Suharto launched a campaign of mass murder in which hundreds of thousands of alleged Communists were killed and Sukarno was ousted. Suharto would rule Indonesia for the next thirty-two years (1966 to 1998). With the Cold War inside Indonesia effectively over and a staunchly anti-Communist and pro-US regime in power, US-Indonesian relations entered a long period of what one might call authoritarian development, in which US officials focused on political stability, supported the military’s heavy involvement in politics, encouraged pro-Western investment and development policies, and sought to downplay growing criticism of Suharto’s abysmal record on human rights, democracy, corruption, and the environment. The end of the Cold War reduced the strategic imperative of backing authoritarian rule in Indonesia, and over the course of the 1990s domestic opposition to Suharto steadily built among moderate Islamic forces, human rights and women’s activists, environmental campaigners, and a burgeoning pro-democracy movement. The Asian financial crisis, which began in the summer of 1997, accelerated the forces undermining Suharto’s rule, forcing his resignation in May 1998 and inaugurating a third phase of formally democratic politics, which continues into the 21st century.
Since 1998 US policy has focused on regional economic and security cooperation, counterterrorism, trade relations, and countering the growing regional power of China.
Oil played a central role in shaping US policy toward Iraq over the course of the 20th century. The United States first became involved in Iraq in the 1920s as part of an effort to secure a role for American companies in Iraq’s emerging oil industry. As a result of State Department efforts, American companies gained a 23.75 percent ownership share of the Iraq Petroleum Company in 1928. In the 1940s, US interest in the country increased as a result of the Cold War with the Soviet Union. To defend against a perceived Soviet threat to Middle East oil, the US supported British efforts to “secure” the region. After nationalist officers overthrew Iraq’s British-supported Hashemite monarchy in 1958 and established friendly relations with the Soviet Union, the United States cultivated an alliance with the Iraqi Baath Party as an alternative to the Soviet-backed regime. The effort to cultivate an alliance with the Baath foundered as a result of the Baath’s perceived support for Arab claims against Israel. The breakdown of US-Baath relations led the Baath to forge an alliance with the Soviet Union. With Soviet support, the Baath nationalized the Iraq Petroleum Company in 1972. Rather than resulting in a “supply cutoff,” Soviet economic and technical assistance allowed for a rapid expansion of the Iraqi oil industry and an increase in Iraqi oil flowing to world markets. As Iraq experienced a dramatic oil boom in the 1970s, the United States looked to the country as a lucrative market for US export goods and adopted a policy of accommodation with regard to the Baath. This policy of accommodation gave rise to close strategic and military cooperation throughout the 1980s as Iraq waged war against Iran. When Iraq invaded Kuwait and seized control of its oil fields in 1990, the United States shifted to a policy of Iraqi containment.
The United States organized an international coalition that quickly ejected Iraqi forces from Kuwait, but chose not to pursue regime change for fear of destabilizing the country and wider region. Throughout the 1990s, the United States adhered to a policy of Iraqi containment but came under increasing pressure to overthrow the Baath and dismantle its control over the Iraqi oil industry. In 2003, the United States seized upon the 9/11 terrorist attacks as an opportunity to implement this policy of regime change and oil reprivatization.
Olivia L. Sohns
Moral, political, and strategic factors have contributed to the emergence and durability of the U.S.-Israel alliance. It took decades for American support for Israel to evolve from “a moral stance” to treating Israel as a “strategic asset” to adopting a policy of “strategic cooperation.” The United States supported Israel’s creation in 1948 not only because of the lobbying efforts of American Jews but also due to humanitarian considerations stemming from the Holocaust. Beginning in the 1950s, Israel sought to portray itself as an ally of the United States on grounds that America and Israel were fellow liberal democracies and shared a common Judeo-Christian cultural heritage. By the mid-1960s, Israel was considered a strategic proxy of American power in the Middle East in the Cold War, while the Soviet Union armed the radical Arab nationalist states and endorsed Palestinian “people’s wars of national liberation” against Israel. Over the subsequent decades, Israel repeatedly sought to demonstrate that it was allied with the United States in opposing instability in the region that might threaten U.S. interests. Israel also sought to portray itself as a liberal democracy despite its continued occupation of territories that it conquered in the Arab-Israeli War of 1967. After the terrorist attacks of September 11, 2001, and the rise of regional instability and radicalism in the Middle East following the 2003 U.S. invasion of Iraq and the Arab Spring of 2011, Israel’s expertise in the realms of counterterrorism and homeland security provided a further basis for U.S.-Israel military-strategic cooperation. Although American and Israeli interests are not identical, and there have been disagreements between the two countries regarding the best means to secure comprehensive Arab-Israeli and Israeli-Palestinian peace, the foundations of the relationship are strong enough to overcome crises that would imperil a less robust alliance.
Jennifer M. Miller
Over the past 150 years, the United States and Japan have developed one of the United States’ most significant international relationships, marked by a potent mix of cooperation and rivalry. After a devastating war, these two states built a lasting alliance that stands at the center of US diplomacy, security, and economic policy in the Pacific and beyond. Yet this relationship is not simply the product of economic or strategic calculations. Japan has repeatedly shaped American understandings of empire, hegemony, race, democracy, and globalization, because these two states have often developed in remarkable parallel with one another. From the edges of the international order in the 1850s and 1860s, both entered a period of intense state-building at home and imperial expansion abroad in the late 19th and early 20th centuries. These imperial ambitions violently collided in the 1940s in an epic contest to determine the Pacific geopolitical order. After its victory in World War II, the United States embarked on an unprecedented occupation designed to transform Japan into a stable and internationally cooperative democracy. The two countries also forged a diplomatic and security alliance that offered crucial logistical, political, and economic support to the United States’ Cold War quest to prevent the spread of communism. In the 1970s and 1980s, Japan’s rise as the globe’s second-largest economy caused significant tension in this relationship and forced Americans to confront the changing nature of national power and economic growth in a globalizing world. However, in recent decades, rising tensions in the Asia-Pacific have served to focus this alliance on the construction of a stable trans-Pacific economic and geopolitical order.
Relations between the United States and Mexico have rarely been easy. Ever since the United States invaded its southern neighbor and seized half of its national territory in the 19th century, the two countries have struggled to establish a relationship based on mutual trust and respect. Over the two centuries since Mexico’s independence, the governments and citizens of both countries have played central roles in shaping each other’s political, economic, social, and cultural development. Although this process has involved—even required—a great deal of cooperation, relations between the United States and Mexico have more often been characterized by antagonism, exploitation, and unilateralism. This long history of tensions has contributed to the three greatest challenges that these countries face together today: economic development, immigration, and drug-related violence.
The United States–Mexico War was the first conflict in which the United States fought a foreign nation for the purpose of conquest. It was also the first conflict in which trained soldiers (from West Point) played a large role. The war’s end transformed the United States into a continental nation as it acquired a vast portion of Mexico’s northern territories. In addition to shaping U.S.–Mexico relations into the present, the conflict also led to the forcible incorporation of Mexicans (who became Mexican Americans) as the nation’s first Latinos. Yet, the war has been identified as the nation’s “forgotten war” because few Americans know the causes and consequences of this conflict. Within fifteen years of the war’s end, the conflict had largely faded from American popular memory, owing in part to the outbreak of the U.S. Civil War, though it never disappeared entirely. By contrast, the U.S.–Mexico War is prominently remembered in Mexico as having caused the loss of half of the nation’s territory, and as an event that continues to shape Mexico’s relationship with the United States. Official memories (or national histories) of war affect international relations, and also shape how each nation’s population views citizens of other countries. Not surprisingly, there is a stark difference in the ways that American citizens and Mexican citizens remember and forget the war: Americans refer to the “Mexican American War” or the “U.S.–Mexican War,” for example, while Mexicans identify the conflict as the “War of North American Intervention.”
On April 4, 1949, twelve nations signed the North Atlantic Treaty: the United States, Canada, Iceland, the United Kingdom, Belgium, the Netherlands, Luxembourg, France, Portugal, Italy, Norway, and Denmark. For the United States, the North Atlantic Treaty signaled a major shift in foreign policy. Gone was the traditional aversion to “entangling alliances,” dating back to George Washington’s farewell address. The United States had entered into a collective security arrangement designed to preserve peace in Europe.
With the creation of the North Atlantic Treaty Organization (NATO), the United States took on a clear leadership role on the European continent. Allied defense depended on US military power, most notably the nuclear umbrella. Reliance on the United States unsurprisingly created problems. Doubts about the strength of the transatlantic partnership and rumors of a NATO in shambles were (and are) commonplace, as were anxieties about the West’s strength in comparison to NATO’s Eastern counterpart, the Warsaw Pact. NATO, it turned out, was more than a Cold War institution. After the fall of the Berlin Wall and the collapse of the Soviet Union, the Alliance remained vital to US foreign policy objectives. The only invocation of Article V, the North Atlantic Treaty’s collective defense clause, came in the wake of the September 11, 2001 terrorist attacks. Over the last seven decades, NATO has symbolized both US power and its challenges.
At the dawn of the 20th century, the region that would become the Democratic Republic of Congo fell to the brutal colonialism of Belgium’s King Leopold. Except for a brief moment when anti-imperialists decried the crimes of plantation slavery, the United States paid little attention to Congo before 1960. But after winning its independence from Belgium in June 1960, Congo suddenly became engulfed in a crisis of decolonization and the Cold War, a time when the United States and the Soviet Union competed for resources and influence. The confrontation in Congo was kept limited by a United Nations (UN) peacekeeping force, which ended the secession of the province of Katanga in 1964. At the same time, the CIA (Central Intelligence Agency) intervened to help create a pro-Western government and eliminate the Congo’s first prime minister, Patrice Lumumba. Ironically, the result would be a growing American reliance on the dictatorship of Joseph Mobutu that lasted through the 1980s. In 1997 a rebellion succeeded in toppling Mobutu from power. Since 2001 President Joseph Kabila has ruled Congo. The United States has supported long-term social and economic growth but has kept its distance while watching Kabila fight internal opponents and insurgents in the east. A UN peacekeeping force returned to Congo and helped limit unrest. After serving out two full terms, which ended in 2016, Kabila was slow to call elections amid rising turmoil.
Thomas I. Faith
Chemical and biological weapons represent two distinct types of munitions that share some common policy implications. While chemical weapons and biological weapons are different in terms of their development, manufacture, use, and the methods necessary to defend against them, they are commonly united in matters of policy as “weapons of mass destruction,” along with nuclear and radiological weapons. Both chemical and biological weapons have the potential to cause mass casualties, require some technical expertise to produce, and can be employed effectively by both nation states and non-state actors. U.S. policies in the early 20th century were informed by preexisting taboos against poison weapons and the American Expeditionary Forces’ experiences during World War I. The United States promoted restrictions in the use of chemical and biological weapons through World War II, but increased research and development work at the outset of the Cold War. In response to domestic and international pressures during the Vietnam War, the United States drastically curtailed its chemical and biological weapons programs and began supporting international arms control efforts such as the Biological and Toxin Weapons Convention and the Chemical Weapons Convention. U.S. chemical and biological weapons policies significantly influence U.S. policies in the Middle East and the fight against terrorism.
David P. Fields
The United States and the Kingdom of Joseon (Korea) established formal diplomatic relations after signing a “Treaty of Peace, Commerce, Amity, and Navigation” in 1882. Relations between the two states were not close, and the United States closed its legation in 1905 after Japan established a protectorate over Korea in the wake of the Russo-Japanese War. No formal relations existed for the following forty-four years, but American interest in Korea grew following the 1907 Pyongyang Revival and the rapid growth of Christianity there. Activists in the Korean independence movement kept the issue of Korea alive in the United States, especially during World War I and World War II, and pressured the American government to support the re-emergence of an independent Korea. Their activism, as well as a distrust of the Soviet Union, was among the factors that spurred the United States to suggest the joint occupation of the Korean peninsula in 1945, which subsequently led to the creation of the Republic of Korea (ROK) in the American zone and the Democratic People’s Republic of Korea (DPRK) in the Soviet zone. The United States withdrew from the ROK in 1948 only to return in 1950 to thwart the DPRK’s attempt to reunite the peninsula by force during the Korean War. The war ended in stalemate, with an armistice agreement in 1953. In the same year the United States and the ROK signed a military alliance, and American forces have remained on the peninsula ever since. While the United States has enjoyed close political and security relations with the ROK, formal diplomatic relations have never been established between the United States and the DPRK, and the relationship between the two has been marked by increasing tensions over the latter’s nuclear program since the early 1990s.
The relationship between the United States and Saudi Arabia has shaped the history of both countries. Soon after the Saudi kingdom was founded in 1932, American geologists discovered enormous oil reserves near the Persian Gulf. Oil-driven development transformed Saudi society. Many Americans came to work in Saudi Arabia, while thousands of Saudis studied and traveled in the United States. During the mid-20th century, the American-owned oil company Aramco and the US government worked to strengthen the Saudi regime and empower conservative forces in the kingdom—not only to protect American oil interests, but also to suppress nationalist and leftist movements in Saudi Arabia and elsewhere in the Middle East. The partnership was complicated by disagreement over Israel, triggering an Arab oil embargo against the United States in 1973–1974. During the 1970s, Saudi Arabia became the world’s largest oil exporter, nationalized Aramco, and benefited from surging oil prices. In partnership with the United States, it used its new wealth at home to launch a huge economic development program, and abroad to subsidize political allies like the Afghan mujahideen. The United States led a massive military operation to expel Iraqi forces from Kuwait in 1990–1991, protecting the Saudi regime but angering Saudis who opposed their government’s close relationship with the United States. One result was the rise of Osama bin Laden’s al-Qaeda network and the 9/11 attacks, carried out by a largely Saudi group of hijackers. Despite public opposition on both sides, after 2001 the United States and Saudi Arabia continued their commercial relationship and their political partnership, originally directed against the Soviet Union and Nasser’s Egypt, and later increasingly aimed at Iran.
Amanda C. Demmer
It is a truism in the history of warfare that the victors impose the terms for postwar peace. The Vietnam War, however, stands as an exception to this general rule. There can be no doubt that with its capture of the South Vietnamese capital on April 30, 1975, the Democratic Republic of Vietnam won an unequivocal military victory. Thereafter, the North achieved its longtime goal of reuniting the two halves of Vietnam into a new nation, the Socialist Republic of Vietnam (SRV), governed from Hanoi. These changes, however, did not alter the reality that, despite its military defeat, the United States still wielded a preponderance of power in global geopolitics. This tension between the war’s military outcome and the relatively unchanged asymmetry of power between Washington and Hanoi, combined with the passion the war evoked in both countries, created a postwar situation that was far from straightforward. In fact, for years the relationship between the former adversaries stood in an uneasy state, somewhere between war and peace. Scholars call the process by which US-Vietnam relations went from this nebulous state to more regular bilateral ties “normalization.”
Normalization between the United States and Vietnam was a protracted, highly contentious process. Immediately after the fall of Saigon, the Gerald Ford administration responded in a hostile fashion by extending the economic embargo that the United States had previously imposed on North Vietnam to the entire country, refusing to grant formal diplomatic recognition to the SRV, and vetoing the SRV’s application to the United Nations. Briefly in 1977 it seemed as though Washington and Hanoi might achieve a rapid normalization of relations, but lingering wartime animosity, internal dynamics in each country, regional transformations in Southeast Asia, and the reinvigoration of the Cold War on a global scale scuttled the negotiations.
Between the fall of 1978 and late 1991, the United States refused to have formal normalization talks with Vietnam, citing the Vietnamese occupation of Cambodia and the need to obtain a “full accounting” of missing American servicemen. In these same years, however, US-Vietnamese relations remained far from frozen. Washington and Hanoi met in a series of multilateral and bilateral forums to address the US quest to account for missing American servicemen and an ongoing refugee crisis in Southeast Asia. Although not a linear process, these discussions helped lay the personal and institutional foundations for US-Vietnamese normalization.
Beginning in the late 1980s, internal, regional, and international transformations once again rapidly altered the larger geopolitical context of US-Vietnamese normalization. These changes led to the resumption of formal economic and diplomatic relations in 1994 and 1995, respectively. Despite this tangible progress, however, the normalization process continued. After 1995 the economic, political, humanitarian, and defense aspects of bilateral relations increased cautiously but significantly. By the first decade of the 21st century, US-Vietnamese negotiations in each of these areas had accelerated considerably.
Timothy Andrews Sayle
In March 2003 US and coalition forces invaded Iraq. US forces withdrew in December 2011. Approximately 4,400 US troops were killed and 31,900 wounded during the initial invasion and the subsequent war. Estimates of Iraqi casualties vary widely, ranging from roughly 100,000 to more than half a million. The invasion was launched as part of the US strategic response to the terror attacks of September 11, 2001, and ended the rule of Iraqi President Saddam Hussein. After the collapse of the regime, Iraq experienced significant violence as former regime loyalists launched insurgent attacks against US forces, and al-Qaeda in Iraq (AQI), a group linked to al-Qaeda, also attacked US forces and sought to precipitate sectarian civil war. Simultaneously with the increasing violence, Iraq held a series of elections that resulted in a new Constitution and an elected parliament and government. In 2007, the United States deployed more troops to Iraq to quell the insurgency and sectarian strife. The temporary increase in troops was known as “the Surge.” In November 2008, the US and Iraqi governments agreed that all US troops would withdraw from Iraq by December 2011. In 2014, AQI, now calling itself the Islamic State of Iraq and the Levant (ISIL), attacked and captured large swaths of Iraq, including several large cities. That year, the United States and allied states launched new military operations in Iraq under the name Operation Inherent Resolve. The government of Iraq declared victory over ISIL in 2017.
Little Saigon is the preferred name of Vietnamese refugee communities throughout the world. This article focuses primarily on the largest such community, in Orange County, California. This suburban ethnic enclave is home to the largest concentration of overseas Vietnamese, nearly 200,000, or 10 percent of the Vietnamese American population. Because of its size, location, and demographics, Little Saigon is also home to some of the most influential intellectuals, entertainers, businesspeople, and politicians in the Vietnamese diaspora, many of whom are invested in constructing Little Saigon as a transnational oppositional force to the government of Vietnam. Unlike traditional immigrant ethnic enclaves, Little Saigon is a refugee community whose formation and development emerged in large part from America’s efforts to atone for its epic defeat in Vietnam by at least sparing some of its wartime allies a life under communism. Much of Little Saigon’s cultural politics revolves around this narrative of rescue, although the number of guilt-ridden Americans grows smaller and more conservative, while the loyalists of the pre-1975 Saigon regime struggle to instill in the younger generation of Vietnamese an appreciation of their refugee roots.