Oil played a central role in shaping US policy toward Iraq over the course of the 20th century. The United States first became involved in Iraq in the 1920s as part of an effort to secure a role for American companies in Iraq’s emerging oil industry. As a result of State Department efforts, American companies gained a 23.75 percent ownership share of the Iraq Petroleum Company in 1928. In the 1940s, US interest in the country increased as a result of the Cold War with the Soviet Union. To defend against a perceived Soviet threat to Middle East oil, the US supported British efforts to “secure” the region. After nationalist officers overthrew Iraq’s British-supported Hashemite monarchy in 1958 and established friendly relations with the Soviet Union, the United States cultivated an alliance with the Iraqi Baath Party as an alternative to the Soviet-backed regime. The effort to cultivate an alliance with the Baath foundered as a result of the Baath’s perceived support for Arab claims against Israel. The breakdown of US-Baath relations led the Baath to forge an alliance with the Soviet Union. With Soviet support, the Baath nationalized the Iraq Petroleum Company in 1972. Rather than resulting in a “supply cutoff,” Soviet economic and technical assistance allowed for a rapid expansion of the Iraqi oil industry and an increase in Iraqi oil flowing to world markets. As Iraq experienced a dramatic oil boom in the 1970s, the United States looked to the country as a lucrative market for US export goods and adopted a policy of accommodation with regard to the Baath. This policy of accommodation gave rise to close strategic and military cooperation throughout the 1980s as Iraq waged war against Iran. When Iraq invaded Kuwait and seized control of its oil fields in 1990, the United States shifted to a policy of Iraqi containment. The United States organized an international coalition that quickly ejected Iraqi forces from Kuwait, but chose not to pursue regime change for fear of destabilizing the country and wider region. Throughout the 1990s, the United States adhered to a policy of Iraqi containment but came under increasing pressure to overthrow the Baath and dismantle its control over the Iraqi oil industry. In 2003, the United States seized upon the 9/11 terrorist attacks as an opportunity to implement this policy of regime change and oil reprivatization.
Olivia L. Sohns
Moral, political, and strategic factors have contributed to the emergence and durability of the U.S.-Israel alliance. It took decades for American support for Israel to evolve from “a moral stance” to treating Israel as a “strategic asset” to adopting a policy of “strategic cooperation.” The United States supported Israel’s creation in 1948 not only because of the lobbying efforts of American Jews but also due to humanitarian considerations stemming from the Holocaust. Beginning in the 1950s, Israel sought to portray itself as an ally of the United States on grounds that America and Israel were fellow liberal democracies and shared a common Judeo-Christian cultural heritage. By the mid-1960s, Israel was considered a strategic proxy of American power in the Middle East in the Cold War, while the Soviet Union armed the radical Arab nationalist states and endorsed Palestinian “people’s wars of national liberation” against Israel. Over the subsequent decades, Israel repeatedly sought to demonstrate that it was allied with the United States in opposing instability in the region that might threaten U.S. interests. Israel also sought to portray itself as a liberal democracy despite its continued occupation of territories that it conquered in the Arab-Israeli War of 1967. After the terrorist attacks of September 11, 2001, and the rise of regional instability and radicalism in the Middle East following the 2003 U.S. invasion of Iraq and the Arab Spring of 2011, Israel’s expertise in the realms of counterterrorism and homeland security provided a further basis for U.S.-Israel military-strategic cooperation. Although American and Israeli interests are not identical, and there have been disagreements between the two countries regarding the best means to secure comprehensive Arab-Israeli and Israeli-Palestinian peace, the foundations of the relationship are strong enough to overcome crises that would imperil a less robust alliance.
Jennifer M. Miller
Over the past 150 years, the United States and Japan have developed one of the United States’ most significant international relationships, marked by a potent mix of cooperation and rivalry. After a devastating war, these two states built a lasting alliance that stands at the center of US diplomacy, security, and economic policy in the Pacific and beyond. Yet this relationship is not simply the product of economic or strategic calculations. Japan has repeatedly shaped American understandings of empire, hegemony, race, democracy, and globalization, because these two states have often developed in remarkable parallel with one another. From the edges of the international order in the 1850s and 1860s, both entered a period of intense state-building at home and imperial expansion abroad in the late 19th and early 20th centuries. These imperial ambitions violently collided in the 1940s in an epic contest to determine the Pacific geopolitical order. After its victory in World War II, the United States embarked on an unprecedented occupation designed to transform Japan into a stable and internationally cooperative democracy. The two countries also forged a diplomatic and security alliance that offered crucial logistical, political, and economic support to the United States’ Cold War quest to prevent the spread of communism. In the 1970s and 1980s, Japan’s rise as the globe’s second-largest economy caused significant tension in this relationship and forced Americans to confront the changing nature of national power and economic growth in a globalizing world. However, in recent decades, rising tensions in the Asia-Pacific have served to focus this alliance on the construction of a stable trans-Pacific economic and geopolitical order.
Relations between the United States and Mexico have rarely been easy. Ever since the United States invaded its southern neighbor and seized half of its national territory in the 19th century, the two countries have struggled to establish a relationship based on mutual trust and respect. Over the two centuries since Mexico’s independence, the governments and citizens of both countries have played central roles in shaping each other’s political, economic, social, and cultural development. Although this process has involved—even required—a great deal of cooperation, relations between the United States and Mexico have more often been characterized by antagonism, exploitation, and unilateralism. This long history of tensions has contributed to the three greatest challenges that these countries face together today: economic development, immigration, and drug-related violence.
The United States–Mexico War was the first war in which the United States engaged in a conflict with a foreign nation for the purpose of conquest. It was also the first conflict in which trained soldiers (from West Point) played a large role. The war’s end transformed the United States into a continental nation as it acquired a vast portion of Mexico’s northern territories. In addition to shaping U.S.–Mexico relations into the present, the conflict also led to the forcible incorporation of Mexicans (who became Mexican Americans) as the nation’s first Latinos. Yet, the war has been identified as the nation’s “forgotten war” because few Americans know the causes and consequences of this conflict. Within fifteen years of the war’s end, the conflict faded from popular memory, but it did not disappear, due to the outbreak of the U.S. Civil War. By contrast, the U.S.–Mexico War is prominently remembered in Mexico as having caused the loss of half of the nation’s territory, and as an event that continues to shape Mexico’s relationship with the United States. Official memories (or national histories) of war affect international relations, and also shape how each nation’s population views citizens of other countries. Not surprisingly, there is a stark difference in the ways that American citizens and Mexican citizens remember and forget the war (e.g., Americans refer to the “Mexican American War” or the “U.S.–Mexican War,” while Mexicans identify the conflict as the “War of North American Intervention”).
On April 4, 1949, twelve nations signed the North Atlantic Treaty: the United States, Canada, Iceland, the United Kingdom, Belgium, the Netherlands, Luxembourg, France, Portugal, Italy, Norway, and Denmark. For the United States, the North Atlantic Treaty signaled a major shift in foreign policy. Gone was the traditional aversion to “entangling alliances,” dating back to George Washington’s farewell address. The United States had entered into a collective security arrangement designed to preserve peace in Europe.
With the creation of the North Atlantic Treaty Organization (NATO), the United States took on a clear leadership role on the European continent. Allied defense depended on US military power, most notably the nuclear umbrella. Reliance on the United States unsurprisingly created problems. Doubts about the strength of the transatlantic partnership and rumors of a NATO in shambles were (and are) commonplace, as were anxieties about the West’s strength in comparison to NATO’s Eastern counterpart, the Warsaw Pact. NATO, it turned out, was more than a Cold War institution. After the fall of the Berlin Wall and the collapse of the Soviet Union, the Alliance remained vital to US foreign policy objectives. The only invocation of Article V, the North Atlantic Treaty’s collective defense clause, came in the wake of the September 11, 2001 terrorist attacks. Over the last seven decades, NATO has symbolized both US power and its challenges.
At the dawn of the 20th century, the region that would become the Democratic Republic of Congo fell to the brutal colonialism of Belgium’s King Leopold. Except for a brief moment when anti-imperialists decried the crimes of plantation slavery, the United States paid little attention to Congo before 1960. But after winning its independence from Belgium in June 1960, Congo suddenly became engulfed in a crisis of decolonization and the Cold War, a time when the United States and the Soviet Union competed for resources and influence. The confrontation in Congo was kept limited by a United Nations (UN) peacekeeping force, which ended the secession of the province of Katanga in 1964. At the same time, the CIA (Central Intelligence Agency) intervened to help create a pro-Western government and eliminate the Congo’s first prime minister, Patrice Lumumba. Ironically, the result would be a growing reliance on the dictatorship of Joseph Mobutu throughout the 1980s. In 1997 a rebellion succeeded in toppling Mobutu from power. Since 2001 President Joseph Kabila has ruled Congo. The United States has supported long-term social and economic growth but has kept its distance while watching Kabila fight internal opponents and insurgents in the east. A UN peacekeeping force returned to Congo and helped limit unrest. Despite serving out two full terms that ended in 2016, Kabila was slow to call elections amid rising turmoil.
Thomas I. Faith
Chemical and biological weapons represent two distinct types of munitions that share some common policy implications. While chemical weapons and biological weapons are different in terms of their development, manufacture, use, and the methods necessary to defend against them, they are commonly united in matters of policy as “weapons of mass destruction,” along with nuclear and radiological weapons. Both chemical and biological weapons have the potential to cause mass casualties, require some technical expertise to produce, and can be employed effectively by both nation states and non-state actors. U.S. policies in the early 20th century were informed by preexisting taboos against poison weapons and the American Expeditionary Forces’ experiences during World War I. The United States promoted restrictions on the use of chemical and biological weapons through World War II, but increased research and development work at the outset of the Cold War. In response to domestic and international pressures during the Vietnam War, the United States drastically curtailed its chemical and biological weapons programs and began supporting international arms control efforts such as the Biological and Toxin Weapons Convention and the Chemical Weapons Convention. U.S. chemical and biological weapons policies significantly influence U.S. policies in the Middle East and the fight against terrorism.
Amanda C. Demmer
It is a truism in the history of warfare that the victors impose the terms for postwar peace. The Vietnam War, however, stands as an exception to this general rule. There can be no doubt that with its capture of the former South Vietnamese capital on April 30, 1975, the Democratic Republic of Vietnam won unequivocal military victory. Thereafter, the North achieved its longtime goal of reuniting the two halves of Vietnam into a new nation, the Socialist Republic of Vietnam (SRV), governed from Hanoi. These changes, however, did not alter the reality that, despite its military defeat, the United States still wielded a preponderant amount of power in global geopolitics. This tension between the war’s military outcome and the relatively unchanged asymmetry of power between Washington and Hanoi, combined with the passion the war evoked in both countries, created a postwar situation that was far from straightforward. In fact, for years the relationship between the former adversaries remained in an uneasy state, somewhere between war and peace. Scholars call the process by which US-Vietnam relations moved from this nebulous state to more regular bilateral ties “normalization.”
Normalization between the United States and Vietnam was a protracted, highly contentious process. Immediately after the fall of Saigon, the Gerald Ford administration responded in a hostile fashion by extending the economic embargo that the United States had previously imposed on North Vietnam to the entire country, refusing to grant formal diplomatic recognition to the SRV, and vetoing the SRV’s application to the United Nations. Briefly in 1977 it seemed as though Washington and Hanoi might achieve a rapid normalization of relations, but lingering wartime animosity, internal dynamics in each country, regional transformations in Southeast Asia, and the reinvigoration of the Cold War on a global scale scuttled the negotiations.
Between the fall of 1978 and late 1991, the United States refused to have formal normalization talks with Vietnam, citing the Vietnamese occupation of Cambodia and the need to obtain a “full accounting” of missing American servicemen. In these same years, however, US-Vietnamese relations remained far from frozen. Washington and Hanoi met in a series of multilateral and bilateral forums to address the US quest to account for missing American servicemen and an ongoing refugee crisis in Southeast Asia. Although not a linear process, these discussions helped lay the personal and institutional foundations for US-Vietnamese normalization.
Beginning in the late 1980s, internal, regional, and international transformations once again rapidly altered the larger geopolitical context of US-Vietnamese normalization. These changes led to the resumption of formal economic and diplomatic relations in 1994 and 1995, respectively. Despite this tangible progress, however, the normalization process continued. After 1995 the economic, political, humanitarian, and defense aspects of bilateral relations increased cautiously but significantly. By the first decade of the 21st century, US-Vietnamese negotiations in each of these areas had accelerated considerably.
Timothy Andrews Sayle
In March 2003 US and coalition forces invaded Iraq. US forces withdrew in December 2011. Approximately 4,400 US troops were killed and 31,900 wounded during the initial invasion and the subsequent war. Estimates of Iraqi casualties vary widely, ranging from roughly 100,000 to more than half a million. The invasion was launched as part of the US strategic response to the terror attacks of September 11, 2001, and ended the rule of Iraqi President Saddam Hussein. After the collapse of the regime, Iraq experienced significant violence as former regime loyalists launched insurgent attacks against US forces, and al-Qaeda in Iraq (AQI), a group linked to al-Qaeda, also attacked US forces and sought to precipitate sectarian civil war. Simultaneously with the increasing violence, Iraq held a series of elections that resulted in a new constitution and an elected parliament and government. In 2007, the United States deployed more troops to Iraq to quell the insurgency and sectarian strife. The temporary increase in troops was known as “the Surge.” In November 2008, the US and Iraqi governments agreed that all US troops would withdraw from Iraq by December 2011. In 2014, AQI, now calling itself the Islamic State of Iraq and the Levant (ISIL), attacked and captured large swaths of Iraq, including several large cities. That year, the United States and allied states launched new military operations in Iraq called Operation Inherent Resolve. The government of Iraq declared victory over ISIL in 2017.
Little Saigon is the preferred name of Vietnamese refugee communities throughout the world. This article focuses primarily on the largest such community, in Orange County, California. This suburban ethnic enclave is home to the largest concentration of overseas Vietnamese, nearly 200,000, or 10 percent of the Vietnamese American population. Because of its size, location, and demographics, Little Saigon is also home to some of the most influential intellectuals, entertainers, businesspeople, and politicians in the Vietnamese diaspora, many of whom are invested in constructing Little Saigon as a transnational oppositional party to the government of Vietnam. Unlike traditional immigrant ethnic enclaves, Little Saigon is a refugee community whose formation and development emerged in large part from America’s efforts to atone for its epic defeat in Vietnam by at least sparing some of its wartime allies a life under communism. Much of Little Saigon’s cultural politics revolves around this narrative of rescue, although the ranks of guilt-ridden Americans grow smaller and more conservative, while the loyalists of the pre-1975 Saigon regime struggle to instill in the younger generation of Vietnamese an appreciation of their refugee roots.
The meaning of the Vietnam War has enduringly divided Americans in the postwar period. In part because the political splits opened up by the war made it an awkward topic for conversation, Vietnam veterans felt a barrier of silence separating them from their fellow citizens. The situation of returning veterans in the war’s waning years serves as a baseline against which to measure subsequent attempts at their social reintegration. Veterans, as embodiments of the experience of the war, became vehicles through which American society could assimilate its troubled and troubling memories.
By the 1980s, greater public understanding of the difficulties of veterans’ homecoming experiences—particularly after the recognition in 1980 of the psychiatric condition post-traumatic stress disorder (PTSD)—helped accelerate the efforts to recognize the service and sacrifices of Americans who fought in Vietnam through the creation of memorials. Because the homecoming experience was seen as crucial to the difficulties that a substantial minority of veterans suffered, the idea emerged that the nation needed to embrace its veterans in order to help restore their well-being.
Characteristic ways of talking about the veterans’ experiences coalesced into truisms and parables: the nation and its veterans needed to “reconcile” and “heal”; America must “never again” send young men to fight a war unless the government goes all-out for victory; protesters spat on the veterans and called them “baby killers” when they returned from Vietnam.
Strategists debated what the proper “lessons” of the Vietnam War were and how they should be applied to other military interventions. After the prevalent “overwhelming force” doctrine was discarded in 2003 in the invasion of Iraq, new “lessons” emerged from the Vietnam War: first came the concept of “rapid decisive operations,” and then counterinsurgency came back into vogue. In these interrelated dimensions, American society and politics shaped the memory of the Vietnam War.
Mark W. Deets
Since the founding of the United States of America, coinciding with the height of the Atlantic slave trade, U.S. officials have based their relations with West Africa primarily on economic interests. Initially, these interests were established on the backs of slaves, as the Southern plantation economy quickly vaulted the United States to prominence in the Atlantic world. After the U.S. abolition of the slave trade in 1808, however, American relations with West Africa focused on the establishment of the American colony of Liberia as a place of “return” for formerly enslaved persons. Following the turn to “legitimate commerce” in the Atlantic and the U.S. Civil War, the United States largely withdrew from large-scale interaction with West Africa. Liberia remained the notable exception, where prominent Pan-African leaders like Edward Blyden, W. E. B. DuBois, and Marcus Garvey helped foster cultural and intellectual ties between West Africa and the Diaspora in the early 1900s. These ties to Liberia were deepened in the 1920s when the Firestone Rubber Corporation of Akron, Ohio, established a long-term lease to harvest rubber. World War II marked a significant increase in American presence and influence in West Africa. Still focused on Liberia, the war years saw the construction of infrastructure that would prove essential to Allied war efforts and to American security interests during the Cold War. After 1945, the United States competed with the Soviet Union in West Africa for influence and access to important economic and national security resources as African nations ejected colonial regimes across most of the continent. West African independence quickly demonstrated a turn from nationalism to ethnic nationalism, as civil wars engulfed several countries in the postcolonial, and particularly the post-Cold War, era. After a decade of withdrawal, American interest in West Africa revived with the need for alternative sources of petroleum and concerns about transnational terrorism following the attacks of September 11, 2001.
In the early 20th century, West Virginia coal miners and mine operators fought a series of bloody battles that raged for two decades and prompted national debates over workers’ rights. Miners in the southern part of the state lived in towns wholly owned by coal companies and attempted to join the United Mine Workers of America (UMWA) to negotiate better working conditions but most importantly to restore their civil liberties. Mine operators saw unionization as a threat to their businesses and rights and hired armed guards to patrol towns and prevent workers from organizing. The operators’ allies in local and state government used their authority to help break strikes by sending troops to strike districts, declaring martial law, and jailing union organizers in the name of law and order. Observers around the country were shocked at the levels of violence as well as the conditions that fueled the battles. The Mine Wars include the Paint Creek–Cabin Creek Strike of 1912–1913, the so-called 1920 Matewan Massacre, the 1920 Three Days Battle, and the 1921 Battle of Blair Mountain. In this struggle over unionism, the coal operators prevailed, and West Virginia miners continued to work in nonunion mines and live in company towns through the 1920s.
An overview of Euro-American internal migration in the United States between 1940 and 1980 explores the overall population movement away from rural areas to cities and suburban areas. Although this overview focuses on white Americans and their migrations, there are similarities to the Great Migration of African Americans, who continued to move out of the South during the mid-20th century. In the early period, the industrial areas in the North and West attracted most of the migrants. Mobilization for World War II loosened rural dwellers who were long kept in place by low wages, political disfranchisement, and low educational attainment. The war also attracted significant numbers of women to urban centers in the North and West. After the war, migration increased, enticing white Americans to become not just less rural but also increasingly suburban. The growth of suburbs throughout the country was prompted by racial segregation in housing that made many suburban areas white and earmarked many urban areas for people of color. The result was incredible growth in suburbia: from 22 million living in those areas in 1940 to triple that in 1970. Later in the period, as the Steelbelt rusted, the rise of the West as a migration magnet was spurred by development strategies, federal investment in infrastructure, and military bases. Sunbelt areas made investments and stood ready to recruit industries and, of course, people, especially from Rustbelt areas in the North. By the dawn of the 21st century, half of the American population resided in suburbs.
Michael Patrick Cullinane
Between 1897 and 1901 the administration of Republican President William McKinley transformed US foreign policy traditions and set a course for empire through interconnected economic policies and an open aspiration to achieve greater US influence in global affairs. The primary changes he undertook as president included the arrangement of inter-imperial agreements with world powers, a willingness to use military intervention as a political solution, the establishment of a standing army, and the adoption of a “large policy” that extended American jurisdiction beyond the North American continent. Opposition to McKinley’s policies coalesced around the annexation of the Philippines and the suppression of the Boxer Rebellion in China. Anti-imperialists challenged McKinley’s policies in many ways, but despite fierce debate, the president’s actions and advocacy for greater American power came to define US policymaking for generations to come. McKinley’s administration merits close study.
Stephen P. Randolph
Best known as Abraham Lincoln’s secretary of state during the Civil War, William Henry Seward conducted full careers as a statesman, politician, and visionary of America’s future, both before and after that traumatic conflict. His greatest legacy, however, lay in his service as secretary of state, leading the diplomatic effort to prevent European intervention in the conflict. His success in that effort marked the margin between the salvation and the destruction of the Union. Beyond diplomacy, Seward’s signature qualities of energy, optimism, ambition, and opportunism enabled him to assume a role in the Lincoln administration that extended well beyond his formal duties as secretary of state. Those same qualities secured a close working relationship with the president as Seward overcame a rocky first few weeks in office to become Lincoln’s confidant and sounding board.
Seward’s career in politics stretched from the 1830s until 1869. Through that time, he maintained a vision of a United States of America built on opportunity and free labor, powered by government’s active role in internal improvement and education. He foresaw a nation fated to expand across the continent and overseas, with expansion occurring peacefully as a result of American industrial and economic strength and its model of government. During his second term as secretary of state, under the Johnson administration, Seward attempted a series of territorial acquisitions in the Caribbean, the Pacific, and on the North American continent. The state of the post-war nation and its fractious politics precluded success in most of these attempts, but Seward was successful in negotiating and securing Congressional ratification of the purchase of Alaska in 1867. In addition, Seward pursued a series of policies establishing paths followed later by US diplomats, including the open door in China and the acquisition of Hawaii and US naval bases in the Caribbean.
An ungainly word, it has proven tenacious. Since the early Cold War, “Wilsonianism” has been employed by historians and analysts of US foreign policy to denote two historically related but ideologically and operationally distinct approaches to world politics. One is the foreign policy of the term’s eponym, President Woodrow Wilson, during and after World War I—in particular his efforts to engage the United States and other powerful nations in the cooperative maintenance of order and peace through a League of Nations. The other is the tendency of later administrations and political elites to deem an assertive, interventionist, and frequently unilateralist foreign policy necessary to advance national interests and preserve domestic institutions. Both versions of Wilsonianism have exerted massive impacts on US and international politics and culture. Yet both remain difficult to assess or even define. As historical phenomena they are frequently conflated; as philosophical labels they are ideologically freighted. Perhaps the only consensus is that the term implies the US government’s active rather than passive role in the international order.
It is nevertheless important to distinguish Wilson’s “Wilsonianism” from certain doctrines and practices later attributed to him or traced to his influence. The major reasons are two. First, misconceptions surrounding the aims and outcomes of Wilson’s international policies continue to distort historical interpretation in multiple fields, including American political, cultural, and diplomatic history and the history of international relations. Second, these distortions encourage the conflation of Wilsonian internationalism with subsequent yet distinct developments in American foreign policy. The confused result promotes ideological over historical readings of the nation’s past, which in turn constrain critical and creative thinking about its present and future as a world power.
Rebecca J. Mead
Woman suffragists in the United States engaged in a sustained, difficult, and multigenerational struggle: seventy-two years elapsed between the Seneca Falls convention (1848) and the passage of the Nineteenth Amendment (1920). During these years, activists gained confidence, developed skills, mobilized resources, learned to maneuver through the political process, and built a social movement. This essay describes key turning points and addresses internal tensions as well as external obstacles in the U.S. woman suffrage movement. It identifies important strategic, tactical, and rhetorical approaches that supported women’s claims for the vote and influenced public opinion, and shows how the movement was deeply connected to contemporaneous social, economic, and political contexts.
Two images dominated popular portrayals of American women in the 1950s. One was the fictional June Cleaver, the female lead character in the popular television program “Leave It to Beaver,” which portrayed Cleaver as the stereotypical happy American housewife, the exemplar of postwar American domesticity. The other was Cleaver’s alleged real-life opposite, described in Betty Friedan’s The Feminine Mystique (1963) as miserable, bored, isolated, addicted to tranquilizers, and trapped in look-alike suburban tract houses, which Friedan termed “comfortable concentration camps.” Both stereotypes ignore significant proportions of the postwar female population and offer simplistic, partial views of domesticity, but both reveal the depth of the influence exerted by the idea of domesticity, real or fictional. Aided and abetted by psychology, social science theory, advertising, popular media, government policy, law, and discriminatory private sector practices, domesticity was both a myth and a powerful ideology that shaped the trajectories of women’s lives.