Jason C. Parker
The decolonization of the European overseas empires had its intellectual roots early in the modern era, but its culmination occurred during the Cold War that loomed large in post-1945 international history. This culmination thus coincided with the American rise to superpower status and presented the United States with a dilemma. While the United States was philosophically sympathetic to the aspirations of anticolonial nationalist movements abroad, its vastly greater postwar global security burdens made it averse to the instability that decolonization might bring and that communists might exploit. This fear, and the need to share those burdens with European allies who were themselves still colonial landlords, led Washington to proceed cautiously. The three “waves” of the decolonization process—medium-sized in the late 1940s, large in the half-decade around 1960, and small in the mid-1970s—prompted the American use of a variety of tools and techniques to influence how it unfolded.
Prior to independence, this influence was usually channeled through the metropolitan authority then winding down. After independence, Washington continued and often expanded the use of these tools, in most cases on a bilateral basis. In some theaters, such as Korea, Vietnam, and the Congo, certain of these tools—notably covert espionage or overt military operations—allowed Cold War dynamics to envelop, intensify, and repossess local decolonization struggles. In most theaters, other tools, such as traditional or public diplomacy or economic or technical development aid, kept the Cold War in the background as a local transition unfolded. In all cases, the overriding American imperative was to minimize instability and neutralize actors on the ground who could invite communist gains.
Ronald Reagan’s foreign policy legacy remains hotly contested, and as new archival sources come to light, those debates are more likely to intensify than to recede into the background. In dealings with the Soviet Union, the Reagan administration set the superpowers on a course for the (largely) peaceful end of the Cold War. Reagan began his outreach to Soviet leaders almost immediately after taking office and enjoyed some success, even if public fears of Reagan as a “button-pusher” remain the dominant theme of the period. Mikhail Gorbachev’s election to the post of General Secretary proved the turning point. Reagan, now confident in US strength, and Gorbachev, keen to reduce the financial burden of the arms race, ushered in a new, cooperative phase of the Cold War. Elsewhere, in particular Latin America, the administration’s focus on fighting communism led it to support human rights–abusing regimes at the same time as it lambasted Moscow’s transgressions in that regard. Even so, over the course of the 1980s, the United States began pushing for democratization around the world, even where Reagan and his advisors had initially resisted it, fearing a communist takeover. In part this was a result of public pressure, but the White House recognized and came to support the rising tide of democratization. When Reagan left office, a great many countries that had been authoritarian were authoritarian no longer, often at least in part because of US policy. US–Soviet relations had improved to such an extent that Reagan’s successor, Vice President George H. W. Bush, worried that Reagan had gone too far in working with Gorbachev and had been hoodwinked.
International law is the set of rules, formally agreed by treaty or understood as customary, by which nation-states interact with each other in a form of international society. Across the history of U.S. foreign relations, international law has provided both an animating vision, or ideology, for various American projects of world order, and a practical tool for the advancement of U.S. power and interests. As the American role in the world changed from the late 18th century onward, so too did the role of international law in U.S. foreign policy. Initially, international law was a source of authority to which the weak American government could appeal on questions of independence, sovereignty, and neutrality. As U.S. power grew in the 19th and early 20th centuries, international law became variously a liberal project for the advancement of peace, a civilizational discourse for justifying violence and dispossession, and a bureaucratic and commercial tool for the expansion of empire. With the advent of formal intergovernmental organizations in the 20th century, the traditional American focus on neutrality faded, to be replaced by an emphasis on collective security. But as the process of decolonization diluted the strength of the United States and its allies in the parliamentary chambers of the world’s international organizations, Washington increasingly advanced its own interpretations of international law, and opted out of a number of international legal regimes. At the same time, Americans increasingly came to see international law as a vehicle to advance the human rights of individuals over the sovereign rights of states.
Economic nationalism tended to dominate U.S. foreign trade policy throughout the long 19th century, from the end of the American Revolution to the beginning of World War I, owing to a pervasive American sense of economic and geopolitical insecurity and American fear of hostile powers, especially the British but also the French and Spanish and even the Barbary States. Following the U.S. Civil War, leading U.S. protectionist politicians sought to counter European trade policies and to create a U.S.-dominated customs union in the Western Hemisphere. American proponents of trade liberalization increasingly found themselves outnumbered in the halls of Congress, as the “American System” of economic nationalism grew in popularity alongside the perceived need for foreign markets. Protectionist advocates in the United States viewed the American System as a panacea that promised not only to provide the federal government with revenue but also to artificially insulate American infant industries from undue foreign-market competition through high protective tariffs and subsidies, and to retaliate against real and perceived threats to U.S. trade.
Throughout this period, the United States itself underwent a great struggle over foreign trade policy. By the late 19th century, the era’s boom-and-bust global economic system led to a growing perception that the United States needed more access to foreign markets as an outlet for the country’s surplus goods and capital. But whether the United States would obtain foreign market access through free trade or through protectionism led to a great debate over the proper course of U.S. foreign trade policy. By the time that the United States acquired a colonial empire from the Spanish in 1898, this same debate over U.S. foreign trade policy had effectively merged into debates over the course of U.S. imperial expansion. The country’s more expansionist-minded economic nationalists came out on top. The overwhelming 1896 victory of William McKinley—the Republican party’s “Napoleon of Protection”—marked the beginning of substantial expansion of U.S. foreign trade through a mixture of protectionism and imperialism in the years leading up to World War I.
Kathryn C. Statler
U.S.-French relations are long-standing, complex, and primarily cooperative in nature. Various crises have punctuated long periods of stability in the alliance, but after each conflict the Franco-American friendship emerged stronger than ever. Official U.S.-French relations began during the early stages of the American Revolution, when Louis XVI’s regime came to America’s aid by providing money, arms, and military advisers. French assistance, best symbolized by the Marquis de Lafayette, was essential in the revolution’s success. The subsequent French Revolution and Napoleon Bonaparte’s rise to power also benefited the United States when Napoleon’s woes in Europe and the Caribbean forced him to sell the entire Louisiana territory to the United States in 1803. Franco-American economic and cultural contacts increased throughout the 19th century, as trade between the two countries prospered and as Americans flocked to France to study art, architecture, music, and medicine. The French gift of the Statue of Liberty in the late 19th century solidified Franco-American bonds, which became even more secure during World War I. Indeed, during the war, the United States provided France with trade, loans, military assistance, and millions of soldiers, viewing such aid as repayment for French help during the American Revolution. World War II once again saw the United States fighting in France to liberate the country from Nazi control. The Cold War complicated the Franco-American relationship in new ways as American power waxed and French power waned. Washington and Paris clashed over military conflict in Vietnam, the Suez Crisis, and European security (the North Atlantic Treaty Organization or NATO, in particular) during the 1950s and 1960s. Ultimately, after French President Charles de Gaulle’s retirement, the Franco-American alliance stabilized by the mid-1970s and has flourished ever since, despite brief moments of crisis, such as the 2003 Second Gulf War in Iraq.
U.S. imperialism took a variety of forms in the early 20th century, ranging from colonies in Puerto Rico and the Philippines to protectorates in Cuba, Panama, and other countries in Latin America, and open door policies such as that in China. Formal colonies would be ruled with U.S.-appointed colonial governors and supported by U.S. troops. Protectorates and open door policies promoted business expansion overseas through American oversight of foreign governments and, in the case of threats to economic and strategic interests, the deployment of U.S. marines. In all of these imperial forms, U.S. empire-building both reflected and shaped complex social, cultural, and political histories with ramifications for both foreign nations and America itself.
David A. Nichols
From 1783 to 1830, American Indian policy reflected the new American nation-state’s desire to establish its own legitimacy and authority, by controlling Native American peoples and establishing orderly and prosperous white settlements in the continental interior. The Federalists focused on securing several protected enclaves of white settlement (Ohio, Kentucky, Tennessee), established—often violently—during the Revolutionary War, against Native American claims and attacks. They used treaties to draw a legal boundary between these enclaves and Indian communities, and annuities and military force to keep Indians on their side of the line. The Jeffersonian Republicans adopted a more expansive plan of development, coupled with the promotion of Native American dependency. Treaty commissioners persuaded chiefs to cede road easements and riverfront acreage that the government used to link and develop dispersed white settlements. Meanwhile, the War Department built trading factories whose cheap merchandise would lure Indians into commercial dependency, and agents offered Indian families agricultural equipment and training, hoping that Native American farmers would no longer need “extensive forests” to support themselves. These pressures helped engender nativist movements in the Old Northwest and southeast, and Indian men from both regions fought the United States in the War of 1812, reinforcing frontier settlers’ view that Indians were a security threat. After this war’s end, the United States adopted a strategy of containment, pressuring Indian leaders to cede most of their peoples’ lands, confining Indians to enclaves, financing vocational schooling for Indian children, and encouraging Native peoples voluntarily to move west of the Mississippi. This policy, however, proved too respectful of Indian autonomy for the frontier settlers and politicians steadily gaining influence in the national government. After these settlers elected one of their own, Andrew Jackson, to the presidency, American Indian policy would enter a much more coercive and violent phase, as white Americans redefined the nation-state as a domain of white supremacy ethnically cleansed of indigenous peoples.
Oil played a central role in shaping US policy toward Iraq over the course of the 20th century. The United States first became involved in Iraq in the 1920s as part of an effort to secure a role for American companies in Iraq’s emerging oil industry. As a result of State Department efforts, American companies gained a 23.75 percent ownership share of the Iraq Petroleum Company in 1928. In the 1940s, US interest in the country increased as a result of the Cold War with the Soviet Union. To defend against a perceived Soviet threat to Middle East oil, the US supported British efforts to “secure” the region. After nationalist officers overthrew Iraq’s British-supported Hashemite monarchy in 1958 and established friendly relations with the Soviet Union, the United States cultivated an alliance with the Iraqi Baath Party as an alternative to the Soviet-backed regime. The effort to cultivate an alliance with the Baath foundered as a result of the Baath’s perceived support for Arab claims against Israel. The breakdown of US-Baath relations led the Baath to forge an alliance with the Soviet Union. With Soviet support, the Baath nationalized the Iraq Petroleum Company in 1972. Rather than resulting in a “supply cutoff,” the nationalization—backed by Soviet economic and technical assistance—allowed for a rapid expansion of the Iraqi oil industry and an increase in Iraqi oil flowing to world markets. As Iraq experienced a dramatic oil boom in the 1970s, the United States looked to the country as a lucrative market for US export goods and adopted a policy of accommodation with regard to the Baath. This policy of accommodation gave rise to close strategic and military cooperation throughout the 1980s as Iraq waged war against Iran. When Iraq invaded Kuwait and seized control of its oil fields in 1990, the United States shifted to a policy of containing Iraq. The United States organized an international coalition that quickly ejected Iraqi forces from Kuwait, but chose not to pursue regime change for fear of destabilizing the country and wider region. Throughout the 1990s, the United States adhered to this policy of containment but came under increasing pressure to overthrow the Baath and dismantle its control over the Iraqi oil industry. In 2003, the United States seized upon the 9/11 terrorist attacks as an opportunity to implement this policy of regime change and oil reprivatization.
Olivia L. Sohns
Moral, political, and strategic factors have contributed to the emergence and durability of the U.S.-Israel alliance. It took decades for American support for Israel to evolve from “a moral stance” to treating Israel as a “strategic asset” to adopting a policy of “strategic cooperation.” The United States supported Israel’s creation in 1948 not only because of the lobbying efforts of American Jews but also due to humanitarian considerations stemming from the Holocaust. Beginning in the 1950s, Israel sought to portray itself as an ally of the United States on grounds that America and Israel were fellow liberal democracies and shared a common Judeo-Christian cultural heritage. By the mid-1960s, Israel was considered a strategic proxy of American power in the Middle East in the Cold War, while the Soviet Union armed the radical Arab nationalist states and endorsed a Palestinian “people’s war of national liberation” against Israel. Over the subsequent decades, Israel repeatedly sought to demonstrate that it was allied with the United States in opposing instability in the region that might threaten U.S. interests. Israel also sought to portray itself as a liberal democracy despite its continued occupation of territories that it conquered in the Arab-Israeli War of 1967. After the terrorist attacks of September 11, 2001, and the rise of regional instability and radicalism in the Middle East following the 2003 U.S. invasion of Iraq and the Arab Spring of 2011, Israel’s expertise in the realms of counterterrorism and homeland security provided a further basis for U.S.-Israel military-strategic cooperation. Although American and Israeli interests are not identical, and there have been disagreements between the two countries regarding the best means to secure comprehensive Arab-Israeli and Israeli-Palestinian peace, the foundations of the relationship are strong enough to overcome crises that would imperil a less robust alliance.
Jennifer M. Miller
Over the past 150 years, the United States and Japan have developed one of the United States’ most significant international relationships, marked by a potent mix of cooperation and rivalry. After a devastating war, these two states built a lasting alliance that stands at the center of US diplomacy, security, and economic policy in the Pacific and beyond. Yet this relationship is not simply the product of economic or strategic calculations. Japan has repeatedly shaped American understandings of empire, hegemony, race, democracy, and globalization, because these two states have often developed in remarkable parallel with one another. From the edges of the international order in the 1850s and 1860s, both entered a period of intense state-building at home and imperial expansion abroad in the late 19th and early 20th centuries. These imperial ambitions violently collided in the 1940s in an epic contest to determine the Pacific geopolitical order. After its victory in World War II, the United States embarked on an unprecedented occupation designed to transform Japan into a stable and internationally cooperative democracy. The two countries also forged a diplomatic and security alliance that offered crucial logistical, political, and economic support to the United States’ Cold War quest to prevent the spread of communism. In the 1970s and 1980s, Japan’s rise as the globe’s second-largest economy caused significant tension in this relationship and forced Americans to confront the changing nature of national power and economic growth in a globalizing world. However, in recent decades, rising tensions in the Asia-Pacific have served to focus this alliance on the construction of a stable trans-Pacific economic and geopolitical order.
Relations between the United States and Mexico have rarely been easy. Ever since the United States invaded its southern neighbor and seized half of its national territory in the 19th century, the two countries have struggled to establish a relationship based on mutual trust and respect. Over the two centuries since Mexico’s independence, the governments and citizens of both countries have played central roles in shaping each other’s political, economic, social, and cultural development. Although this process has involved—even required—a great deal of cooperation, relations between the United States and Mexico have more often been characterized by antagonism, exploitation, and unilateralism. This long history of tensions has contributed to the three greatest challenges that these countries face together today: economic development, immigration, and drug-related violence.
The United States–Mexico War was the first war the United States waged against a foreign nation for the purpose of conquest. It was also the first conflict in which trained soldiers (from West Point) played a large role. The war’s end transformed the United States into a continental nation as it acquired a vast portion of Mexico’s northern territories. In addition to shaping U.S.–Mexico relations into the present, the conflict also led to the forcible incorporation of Mexicans (who became Mexican Americans) as the nation’s first Latinos. Yet the war has been identified as the nation’s “forgotten war” because few Americans know the causes and consequences of this conflict. Within fifteen years of the war’s end, the conflict faded from popular memory—though it did not disappear—owing to the outbreak of the U.S. Civil War. By contrast, the U.S.–Mexico War is prominently remembered in Mexico as having caused the loss of half of the nation’s territory, and as an event that continues to shape Mexico’s relationship with the United States. Official memories (or national histories) of war affect international relations, and also shape how each nation’s population views citizens of other countries. Not surprisingly, there is a stark difference in the ways that American citizens and Mexican citizens remember and forget the war (Americans refer to the “Mexican American War” or the “U.S.–Mexican War,” for example, while Mexicans identify the conflict as the “War of North American Intervention”).
On April 4, 1949, twelve nations signed the North Atlantic Treaty: the United States, Canada, Iceland, the United Kingdom, Belgium, the Netherlands, Luxembourg, France, Portugal, Italy, Norway, and Denmark. For the United States, the North Atlantic Treaty signaled a major shift in foreign policy. Gone was the traditional aversion to “entangling alliances,” dating back to George Washington’s farewell address. The United States had entered into a collective security arrangement designed to preserve peace in Europe.
With the creation of the North Atlantic Treaty Organization (NATO), the United States took on a clear leadership role on the European continent. Allied defense depended on US military power, most notably the nuclear umbrella. Reliance on the United States unsurprisingly created problems. Doubts about the strength of the transatlantic partnership and rumors of a NATO in shambles were (and are) commonplace, as were anxieties about the West’s strength in comparison to NATO’s Eastern counterpart, the Warsaw Pact. NATO, it turned out, was more than a Cold War institution. After the fall of the Berlin Wall and the collapse of the Soviet Union, the Alliance remained vital to US foreign policy objectives. The only invocation of Article V, the North Atlantic Treaty’s collective defense clause, came in the wake of the September 11, 2001 terrorist attacks. Over the last seven decades, NATO has symbolized both US power and its challenges.
At the dawn of the 20th century, the region that would become the Democratic Republic of Congo fell to the brutal colonialism of Belgium’s King Leopold II. Except for a brief moment when anti-imperialists decried the crimes of plantation slavery, the United States paid little attention to Congo before 1960. But after winning its independence from Belgium in June 1960, Congo suddenly became engulfed in a crisis of decolonization and the Cold War, a time when the United States and the Soviet Union competed for resources and influence. The confrontation in Congo was kept limited by a United Nations (UN) peacekeeping force, which ended the secession of the province of Katanga in 1964. At the same time, the CIA (Central Intelligence Agency) intervened to help create a pro-Western government and eliminate the Congo’s first prime minister, Patrice Lumumba. Ironically, the result would be a growing reliance on the dictatorship of Joseph Mobutu throughout the 1980s. In 1997 a rebellion succeeded in toppling Mobutu from power. Since 2001, President Joseph Kabila has ruled Congo. The United States has supported long-term social and economic growth but has kept its distance while watching Kabila fight internal opponents and insurgents in the east. A UN peacekeeping force returned to Congo and helped limit unrest. Despite serving out two full terms that ended in 2016, Kabila was slow to call elections amid rising turmoil.
Timothy Andrews Sayle
In March 2003 US and coalition forces invaded Iraq. US forces withdrew in December 2011. Approximately 4,400 US troops were killed and 31,900 wounded during the initial invasion and the subsequent war. Estimates of Iraqi casualties vary widely, ranging from roughly 100,000 to more than half a million. The invasion was launched as part of the US strategic response to the terror attacks of September 11, 2001, and ended the rule of Iraqi President Saddam Hussein. After the collapse of the regime, Iraq experienced significant violence as former regime loyalists launched insurgent attacks against US forces, and al-Qaeda in Iraq (AQI), a group linked to al-Qaeda, also attacked US forces and sought to precipitate sectarian civil war. Even as the violence increased, Iraq held a series of elections that resulted in a new constitution and an elected parliament and government. In 2007, the United States deployed more troops to Iraq to quell the insurgency and sectarian strife. The temporary increase in troops was known as “the Surge.” In November 2008, the US and Iraqi governments agreed that all US troops would withdraw from Iraq by December 2011. In 2014, AQI, now calling itself the Islamic State of Iraq and the Levant (ISIL), attacked and captured large swaths of Iraq, including several large cities. That year, the United States and allied states launched new military operations in Iraq called Operation Inherent Resolve. The government of Iraq declared victory over ISIL in 2017.
Little Saigon is the preferred name of Vietnamese refugee communities throughout the world. This article focuses primarily on the largest such community, in Orange County, California. This suburban ethnic enclave is home to the largest concentration of overseas Vietnamese, nearly 200,000, or 10 percent of the Vietnamese American population. Because of its size, location, and demographics, Little Saigon is also home to some of the most influential intellectuals, entertainers, businesspeople, and politicians in the Vietnamese diaspora, many of whom are invested in constructing Little Saigon as a transnational oppositional party to the government of Vietnam. Unlike traditional immigrant ethnic enclaves, Little Saigon is a refugee community whose formation and development emerged in large part from America’s efforts to atone for its epic defeat in Vietnam by at least sparing some of its wartime allies a life under communism. Much of Little Saigon’s cultural politics revolves around this narrative of rescue, although the ranks of guilt-ridden Americans grow smaller and more conservative, while the loyalists of the pre-1975 Saigon regime struggle to instill in the younger generation of Vietnamese an appreciation of their refugee roots.
Mark W. Deets
Since the founding of the United States of America, coinciding with the height of the Atlantic slave trade, U.S. officials have based their relations with West Africa primarily on economic interests. Initially, these interests were established on the backs of slaves, as the Southern plantation economy quickly vaulted the United States to prominence in the Atlantic world. After the U.S. abolition of the slave trade in 1808, however, American relations with West Africa focused on the establishment of the American colony of Liberia as a place of “return” for formerly enslaved persons. Following the turn to “legitimate commerce” in the Atlantic and the U.S. Civil War, the United States largely withdrew from large-scale interaction with West Africa. Liberia remained the notable exception, where prominent Pan-African leaders like Edward Blyden, W. E. B. Du Bois, and Marcus Garvey helped foster cultural and intellectual ties between West Africa and the Diaspora in the early 1900s. These ties to Liberia were deepened in the 1920s when Firestone Rubber Corporation of Akron, Ohio, established a long-term lease to harvest rubber. World War II marked a significant increase in American presence and influence in West Africa. Still focused on Liberia, the war years saw the construction of infrastructure that would prove essential to Allied war efforts and to American security interests during the Cold War. After 1945, the United States competed with the Soviet Union in West Africa for influence and access to important economic and national security resources as African nations ejected colonial regimes across most of the continent. West African independence quickly demonstrated a turn from nationalism to ethnic nationalism, as civil wars engulfed several countries in the postcolonial, and particularly the post-Cold War, era. After a decade of withdrawal, American interest in West Africa revived with the need for alternative sources of petroleum and concerns about transnational terrorism following the attacks of September 11, 2001.
In the early 20th century, West Virginia coal miners and mine operators fought a series of bloody battles that raged for two decades and prompted national debates over workers’ rights. Miners in the southern part of the state lived in towns wholly owned by coal companies and attempted to join the United Mine Workers of America (UMWA) to negotiate better working conditions but most importantly to restore their civil liberties. Mine operators saw unionization as a threat to their businesses and rights and hired armed guards to patrol towns and prevent workers from organizing. The operators’ allies in local and state government used their authority to help break strikes by sending troops to strike districts, declaring martial law, and jailing union organizers in the name of law and order. Observers around the country were shocked at the levels of violence as well as the conditions that fueled the battles. The Mine Wars include the Paint Creek–Cabin Creek Strike of 1912–1913, the so-called 1920 Matewan Massacre, the 1920 Three Days Battle, and the 1921 Battle of Blair Mountain. In this struggle over unionism, the coal operators prevailed, and West Virginia miners continued to work in nonunion mines and live in company towns through the 1920s.
An overview of Euro-American internal migration in the United States between 1940 and 1980 explores the overall population movement away from rural areas to cities and suburban areas. Although this overview focuses on white Americans and their migrations, there are similarities to the Great Migration of African Americans, who continued to move out of the South during the mid-20th century. In the early period, the industrial areas in the North and West attracted most of the migrants. Mobilization for World War II loosened rural dwellers who had long been kept in place by low wages, political disfranchisement, and low educational attainment. The war also attracted significant numbers of women to urban centers in the North and West. After the war, migration increased, enticing white Americans to become not just less rural but also increasingly suburban. The growth of suburbs throughout the country was prompted by racial segregation in housing that made many suburban areas white and earmarked many urban areas for people of color. The result was incredible growth in suburbia: from 22 million people living in those areas in 1940 to triple that number in 1970. Later in the period, as the Steelbelt rusted, the rise of the West as a migration magnet was spurred by development strategies, federal investment in infrastructure, and military bases. Sunbelt areas made investments and stood ready to recruit industries and, of course, people, especially from Rustbelt areas in the North. By the dawn of the 21st century, half of the American population resided in suburbs.
An ungainly word, it has proven tenacious. Since the early Cold War, “Wilsonianism” has been employed by historians and analysts of US foreign policy to denote two historically related but ideologically and operationally distinct approaches to world politics. One is the foreign policy of the term’s eponym, President Woodrow Wilson, during and after World War I—in particular his efforts to engage the United States and other powerful nations in the cooperative maintenance of order and peace through a League of Nations. The other is the tendency of later administrations and political elites to deem an assertive, interventionist, and frequently unilateralist foreign policy necessary to advance national interests and preserve domestic institutions. Both versions of Wilsonianism have exerted massive impacts on US and international politics and culture. Yet both remain difficult to assess or even define. As historical phenomena they are frequently conflated; as philosophical labels they are ideologically freighted. Perhaps the only consensus is that the term implies the US government’s active rather than passive role in the international order.
It is nevertheless important to distinguish Wilson’s “Wilsonianism” from certain doctrines and practices later attributed to him or traced to his influence. The major reasons are two. First, misconceptions surrounding the aims and outcomes of Wilson’s international policies continue to distort historical interpretation in multiple fields, including American political, cultural, and diplomatic history and the history of international relations. Second, these distortions encourage the conflation of Wilsonian internationalism with subsequent yet distinct developments in American foreign policy. The confused result promotes ideological over historical readings of the nation’s past, which in turn constrain critical and creative thinking about its present and future as a world power.