While colonial New Englanders gathered around town commons, settlers in the Southern colonies sprawled out on farms and plantations. The distinctions had more to do with the varying objectives of these colonial settlements and the geography of deep-flowing rivers in the South than with any philosophical predilections. The Southern colonies did indeed sprout towns, but these were places shaped by planters’ residences, planters’ enslaved Africans, and the plantation economy, a nexus that would persist through the antebellum period. Still, the aspirations of urban Southerners differed little from those of their Northern counterparts in the decades before the Civil War. The institution of slavery and an economy emphasizing commercial agriculture bound the countryside closely to the urban South, not only in economics but also in politics. The devastation of the Civil War rendered the ties between city and country in the South even tighter. The South participated in the industrial revolution primarily to the extent of processing crops. Factories were often located in small towns and did not typically contribute to urbanization. City boosters aggressively sought and subsidized industrial development, but a poorly educated labor force and the scarcity of capital restricted economic development. Southern cities were more successful in codifying the South’s culture of white supremacy through legal segregation and the memorialization of the Confederacy. But the dislocations triggered by World War II and the billions of federal dollars poured into Southern urban infrastructure and industries generated hope among civic leaders for a postwar boom. The civil rights movement after 1950, with many of its most dramatic moments focused on the South’s cities, loosened the connection between Southern city and region as cities chose development rather than the stagnation that was certain to occur without a moderation of race relations. The predicted economic bonanza occurred. Young people left the rural areas and small towns of the South for the larger cities to find work in the postindustrial economy and, for the first time in over a century, the urban South received migrants in appreciable numbers from other parts of the country and the world. The impact of spatial distinctions and historical differences (particularly those related to the Civil War) lingers in Southern cities, but exceptionalism is a fading characteristic.
Between 1880 and 1929, industrialization and urbanization expanded in the United States faster than ever before. Industrialization, meaning manufacturing in factory settings using machines plus a labor force performing specialized, divided tasks to increase production, stimulated urbanization, meaning the growth of cities in both population and physical size. During this period, urbanization spread out into the countryside and up into the sky, thanks to new methods of constructing taller buildings. Having people concentrated into small areas accelerated economic activity, thereby producing more industrial growth. Industrialization and urbanization thus reinforced one another, speeding growth beyond what would otherwise have occurred.
Industrialization and urbanization affected Americans everywhere, but especially in the Northeast and Midwest. Technological developments in construction, transportation, and illumination, all connected to industrialization, changed cities forever, most immediately those north of Washington, DC and east of Kansas City. Cities themselves fostered new kinds of industrial activity on large and small scales. Cities were also the places where businessmen raised the capital needed to industrialize the rest of the United States. Later changes in production and transportation made urbanization less acute by making it possible for people to buy cars and live farther from downtown in new suburban areas after World War II ended.
James J. Connolly
The convergence of mass politics and the growth of cities in 19th-century America produced sharp debates over the character of politics in urban settings. The development of what came to be called machine politics, primarily in the industrial cities of the East and Midwest, generated sharp criticism of its reliance on the distribution of patronage and favor trading, its emphatic partisanship, and the plebeian character of the “bosses” who practiced it. Initially, upper- and middle-class businessmen spearheaded opposition to this kind of politics, but during the late 19th and early 20th centuries, labor activists, women reformers, and even some ethnic spokespersons confronted “boss rule” as well. These challenges did not succeed in bringing an end to machine politics where it was well established, but the reforms they generated during the Progressive Era reshaped local government in most cities. In the West and Southwest, where cities were younger and partisan organizations less entrenched, business leaders implemented Progressive municipal reforms to consolidate their power. Whether dominated by a reform regime or a party machine, urban politics and governance became more centralized by 1940 and less responsive to the concerns and demands of workers and immigrants.
Urban politics provides a means to understand the major political and economic trends and transformations of the last seventy years in American cities. The growth of the federal government; the emergence of new powerful identity- and neighborhood-based social movements; and large-scale economic restructuring have characterized American cities since 1945. The postwar era witnessed the expansion of scope and scale of the federal government, which had a direct impact on urban space and governance, particularly as urban renewal fundamentally reshaped the urban landscape and power configurations. Urban renewal and liberal governance, nevertheless, spawned new and often violent tensions and powerful opposition movements among old and new residents. These movements engendered a generation of city politicians who assumed power in the 1970s. Yet all of these figures were forced to grapple with the larger forces of capital flight, privatization, the war on drugs, mass incarceration, immigration, and gentrification. This confluence of factors meant that as many American cities and their political representatives became demographically more diverse by the 1980s and 1990s, they also became increasingly separated by neighborhood boundaries and divided by the forces of class and economic inequality.
Rioting in the United States since 1800 has adhered to three basic traditions: regulating communal morality, defending community from outside threats, and protesting government abuse of power. Typically, crowds have had the shared interests of class, group affiliation, geography, or a common enemy. Since American popular disorder has frequently served as communal policing, the state—especially municipal police—has had an important role in facilitating, constraining, or motivating unrest.
Rioting in the United States retained strong legitimacy and popular resonance from 1800 to the 1960s. In the decades after the founding, Americans adapted English traditions of restrained mobbing to more diverse, urban conditions. During the 19th century, however, rioting became more violent and ambitious as Americans—especially white men—asserted their right to use violence to police heterogeneous public space. In the 1840s and 1850s, whites combined the lynch mob with the disorderly crowd to create a lethal and effective instrument of white settler sovereignty both in the western territories and in the states. From the 1860s to the 1930s, white communities across the country, particularly in the South, used racial killings and pogroms to seize political power and establish and enforce Jim Crow segregation. Between the 1910s and the 1970s, African Americans and Latinos, increasingly living in cities, rioted to defend their communities against civilian and police violence. The frequency of rioting declined after the urban rebellions of the 1960s, partly due to the militarization of local police. Yet the continued use of aggressive police tactics against racial minorities has contributed to a surge in rioting in US cities in the early 21st century.
J. Mark Souther
Prior to the railroad age, American cities generally lacked reputations as tourist travel destinations. As railroads created fast, reliable, and comfortable transportation in the 19th century, urban tourism emerged in many cities. Luxury hotels, tour companies, and guidebooks were facilitating and shaping tourists’ experience of cities by the turn of the 20th century. Many cities hosted regional or international expositions that served as significant tourist attractions from the 1870s to 1910s. Thereafter, cities competed more keenly to attract conventions. Tourism promotion, once handled chiefly by railroad companies, became increasingly professionalized with the formation of convention and visitor bureaus. The rise of the automobile spurred the emergence of motels and theme parks on the suburban periphery, but renewed interest in historic urban core areas fueled historic preservation activism and adaptive reuse of old structures for dining, shopping, and entertainment. Although a few cities, especially Las Vegas, had relied heavily on tourism almost from their inception, by the last few decades of the 20th century few cities could afford to ignore tourism development. New waterfront parks, aquariums, stadiums, and other tourist and leisure attractions facilitated the symbolic transformation of cities from places of production to sites of consumption. Long aimed at a mass market, especially affluent and middle-class whites, tourism promotion embraced market segmentation in the closing years of the 20th century, and a number of attractions and tours appealed to African Americans or LGBTQ communities. If social commentators often complained that cities were developing “tourist bubbles” that concentrated the advantages of tourism in too-small areas and in too few hands, recent trends point to a greater willingness to disperse tourist activity more widely in cities. By the 21st century, urban tourism was indispensable to many cities even as it continued to contribute to uneven development.
Relations between the United States and Argentina can be best described as a cautious embrace punctuated by moments of intense frustration. Although never the center of U.S.–Latin American relations, Argentina has attempted to create a position of influence in the region. As a result, the United States has worked with Argentina and other nations of the Southern Cone—the region of South America that comprises Uruguay, Paraguay, Argentina, Chile, and southern Brazil—on matters of trade and economic development as well as hemispheric security and leadership. While Argentina has attempted to assert its position as one of Latin America’s most developed nations and therefore a regional leader, the equal partnership sought from the United States never materialized for the Southern Cone nation. Instead, competition for markets and U.S. interventionist and unilateral tendencies kept Argentina from attaining the influence and wealth it so desired. At the same time, the United States saw Argentina as an unreliable ally too sensitive to the pull of its volatile domestic politics. The two nations enjoyed moments of cooperation in World War I, the Cold War, and the 1990s, when Argentine leaders could balance this particular external partnership with internal demands. Yet at these times Argentine leaders found themselves walking a fine line as detractors back home saw cooperation with the United States as a violation of their nation’s sovereignty and autonomy. There has always been potential for a productive partnership, but each side’s intransigence and unique concerns limited this relationship’s accomplishments and led to a historical imbalance of power.
James F. Siekmeier
Throughout the 19th and 20th centuries, U.S. officials often viewed Bolivia as both a potential “test case” for U.S. economic foreign policy and a place where Washington’s broad visions for Latin America might be implemented relatively easily. After World War II, Washington leaders sought to show both Latin America and the nonindustrialized world that a relatively open economy could produce significant economic wealth for Bolivia’s working and middle classes, thus giving the United States a significant victory in the Cold War. Washington sought a Bolivia widely open to U.S. influence, and Bolivia often seemed an especially pliable country. To achieve their goals, U.S. leaders dispensed a large amount of economic assistance to Bolivia in the 1950s—a remarkable development in two senses. First, the U.S. government, generally loath to aid Third World nations, gave this assistance to a revolutionary regime. Second, the U.S. aid program for Bolivia proved to be a precursor to the Alliance for Progress, the massive aid program for Latin America in the 1960s that comprised the largest U.S. economic aid program in the Third World. Although U.S. leaders achieved their goal of a relatively stable, noncommunist Bolivia, the decision in the late 1950s to significantly increase U.S. military assistance to Bolivia’s relatively small military emboldened that military, which staged a coup in 1964, snuffing out democracy for nearly two decades. The country’s long history of dependency in both export markets and public- and private-sector capital investment led Washington leaders to think that dependency would translate into leverage over Bolivian policy. However, the historical record is mixed in this regard. Some Bolivian governments have accommodated U.S. demands; others have successfully resisted them.
Evan D. McCormick
Since gaining independence in 1823, the states comprising Central America have had a front seat to the rise of the United States as a global superpower. Indeed, more so than anywhere else, the United States has sought to use its power to shape Central America into a system that heeds US interests and abides by principles of liberal democratic capitalism. Relations have been characterized by US power wielded freely by officials and non-state actors alike to override the aspirations of Central American actors in favor of US political and economic objectives: from the days of US filibusters invading Nicaragua in search of territory; to the occupations of the Dollar Diplomacy era, designed to maintain financial and economic stability; to the covert interventions of the Cold War era. For their part, the Central American states have, at various times, sought to resist the brunt of US hegemony, most effectively when coordinating their foreign policies to balance against US power. These efforts—even when not rejected by the United States—have generally been short-lived, hampered by economic dependency and political rivalries. The result is a history of US-Central American relations that wavers between confrontation and cooperation, but is remarkable for the consistency of its main element: US dominance.
Patrick William Kelly
The relationship between Chile and the United States pivoted on the intertwined questions of how much political and economic influence Americans would exert over Chile and the degree to which Chileans could chart their own path. Given Chile’s tradition of constitutional government and relative economic development, it established itself as a regional power player in Latin America. Unencumbered by the direct US military interventions that marked the history of the Caribbean, Central America, and Mexico, Chile was a leader in movements to promote Pan-Americanism, inter-American solidarity, and anti-imperialism. But the advent of the Cold War in the 1940s, and especially the 1959 Cuban Revolution, brought an increase in bilateral tensions. The United States turned Chile into a “model democracy” for the Alliance for Progress, but frustration over its failures to enact meaningful social and economic reform polarized Chilean society, resulting in the election of the Marxist Salvador Allende in 1970. The most contentious period in US-Chilean relations came during the Nixon administration, which worked alongside anti-Allende Chileans to destabilize Allende’s government, which the Chilean military overthrew on September 11, 1973. The Pinochet dictatorship (1973–1990), while anti-Communist, clashed with the United States over Pinochet’s radicalization of the Cold War and the issue of Chilean human rights abuses. The Reagan administration—which came to power on a platform that rejected the Carter administration’s critique of Chile—eventually reversed course and began to support the return of democracy to Chile, which took place in 1990. Since then, Pinochet’s legacy of neoliberal restructuring of the Chilean economy looms large, overshadowed perhaps only by his unexpected role in fomenting a global culture of human rights that has ended the era of impunity for Latin American dictators.
Alfred P. Flores
Following the Spanish-American War of 1898 and the illegal overthrow and annexation of Hawai‘i, the US government transplanted its colonial education program to places in the Caribbean and the Pacific Islands. Specifically, American Sāmoa, Guam, Hawai‘i, Puerto Rico, the Philippines, and the US Virgin Islands would all have some aspect of the native boarding school system implemented. In many ways, the colonial education system in Guam was both emblematic of and exceptional among the native boarding schools of the continental United States. Utilizing Guam as a case study reveals that the US government was invested in using schools as a site to eliminate and remake indigenous people in the continental United States and in its new overseas territories.
Jason C. Parker
The decolonization of the European overseas empires had its intellectual roots early in the modern era, but its culmination occurred during the Cold War that loomed large in post-1945 international history. This culmination thus coincided with the American rise to superpower status and presented the United States with a dilemma. While the United States was philosophically sympathetic to the aspirations of anticolonial nationalist movements abroad, its vastly greater postwar global security burdens made it averse to the instability that decolonization might bring and that communists might exploit. This fear, and the need to share those burdens with European allies who were themselves still colonial landlords, led Washington to proceed cautiously. The three “waves” of the decolonization process—medium-sized in the late 1940s, large in the half-decade around 1960, and small in the mid-1970s—prompted the American use of a variety of tools and techniques to influence how it unfolded.
Prior to independence, this influence was usually channeled through the metropolitan authority then winding down. After independence, Washington continued and often expanded the use of these tools, in most cases on a bilateral basis. In some theaters, such as Korea, Vietnam, and the Congo, the use of certain of these tools, notably covert espionage or overt military operations, allowed Cold War dynamics to envelop, intensify, and repossess local decolonization struggles. In most theaters, other tools, such as traditional or public diplomacy or economic or technical development aid, kept the Cold War in the background as a local transition unfolded. In all cases, the overriding American imperative was to minimize instability and neutralize actors on the ground who could invite communist gains.
Ronald Reagan’s foreign policy legacy remains hotly contested, and as new archival sources come to light, those debates are more likely to intensify than to recede into the background. In dealings with the Soviet Union, the Reagan administration set the superpowers on a course for the (largely) peaceful end of the Cold War. Reagan began his outreach to Soviet leaders almost immediately after taking office and enjoyed some success, even if the dominant public image of the period remains that of Reagan as a “button-pusher.” Mikhail Gorbachev’s election to the post of General Secretary proved the turning point. Reagan, now confident in US strength, and Gorbachev, keen to reduce the financial burden of the arms race, ushered in a new, cooperative phase of the Cold War. Elsewhere, in particular Latin America, the administration’s focus on fighting communism led it to support human rights–abusing regimes at the same time as it lambasted Moscow’s transgressions in that regard. But even so, over the course of the 1980s, the United States began pushing for democratization around the world, even where Reagan and his advisors had initially resisted it, fearing a communist takeover. In part, this was a result of public pressure, but the White House recognized and came to support the rising tide of democratization. When Reagan left office, a great many countries that had been authoritarian were no longer, often at least in part because of US policy. US–Soviet relations had improved to such an extent that Reagan’s successor, Vice President George H. W. Bush, worried that they had gone too far in working with Gorbachev and been hoodwinked.
International law is the set of rules, formally agreed by treaty or understood as customary, by which nation-states interact with each other in a form of international society. Across the history of U.S. foreign relations, international law has provided both an animating vision, or ideology, for various American projects of world order, and a practical tool for the advancement of U.S. power and interests. As the American role in the world changed since the late 18th century, so too did the role of international law in U.S. foreign policy. Initially, international law was a source of authority to which the weak American government could appeal on questions of independence, sovereignty, and neutrality. As U.S. power grew in the 19th and early 20th centuries, international law became variously a liberal project for the advancement of peace, a civilizational discourse for justifying violence and dispossession, and a bureaucratic and commercial tool for the expansion of empire. With the advent of formal inter-governmental organizations in the 20th century, the traditional American focus on neutrality faded, to be replaced by an emphasis on collective security. But as the process of decolonization diluted the strength of the United States and its allies in the parliamentary chambers of the world’s international organizations, Washington increasingly advanced its own interpretations of international law, and opted out of a number of international legal regimes. At the same time, Americans increasingly came to conceive of international law as a vehicle to advance the human rights of individuals over the sovereign rights of states.
Economic nationalism tended to dominate U.S. foreign trade policy throughout the long 19th century, from the end of the American Revolution to the beginning of World War I, owing to a pervasive American sense of economic and geopolitical insecurity and American fear of hostile powers, especially the British but also the French and Spanish and even the Barbary States. Following the U.S. Civil War, leading U.S. protectionist politicians sought to curtail European trade policies and to create a U.S.-dominated customs union in the Western Hemisphere. American proponents of trade liberalization increasingly found themselves outnumbered in the halls of Congress, as the “American System” of economic nationalism grew in popularity alongside the perceived need for foreign markets. Protectionist advocates in the United States viewed the American System as a panacea that promised not only to provide the federal government with revenue but also to artificially insulate American infant industries from undue foreign-market competition through high protective tariffs and subsidies, and to retaliate against real and perceived threats to U.S. trade.
Throughout this period, the United States itself underwent a great struggle over foreign trade policy. By the late 19th century, the era’s boom-and-bust global economic system led to a growing perception that the United States needed more access to foreign markets as an outlet for the country’s surplus goods and capital. But whether the United States would obtain foreign market access through free trade or through protectionism led to a great debate over the proper course of U.S. foreign trade policy. By the time that the United States acquired a colonial empire from the Spanish in 1898, this same debate over U.S. foreign trade policy had effectively merged into debates over the course of U.S. imperial expansion. The country’s more expansionist-minded economic nationalists came out on top. The overwhelming 1896 victory of William McKinley—the Republican party’s “Napoleon of Protection”—marked the beginning of substantial expansion of U.S. foreign trade through a mixture of protectionism and imperialism in the years leading up to World War I.
Kathryn C. Statler
U.S.-French relations are long-standing, complex, and primarily cooperative in nature. Various crises have punctuated long periods of stability in the alliance, but after each conflict the Franco-American friendship emerged stronger than ever. Official U.S.-French relations began during the early stages of the American Revolution, when Louis XVI’s regime came to America’s aid by providing money, arms, and military advisers. French assistance, best symbolized by the Marquis de Lafayette, was essential to the revolution’s success. The subsequent French Revolution and Napoleon Bonaparte’s rise to power also benefitted the United States when Napoleon’s woes in Europe and the Caribbean forced him to sell the entire Louisiana territory to the United States in 1803. Franco-American economic and cultural contacts increased throughout the 19th century, as trade between the two countries prospered and as Americans flocked to France to study art, architecture, music, and medicine. The French gift of the Statue of Liberty in the late 19th century solidified Franco-American bonds, which became even more secure during World War I. Indeed, during the war, the United States provided France with trade, loans, military assistance, and millions of soldiers, viewing such aid as repayment for French help during the American Revolution. World War II once again saw the United States fighting in France to liberate the country from Nazi control. The Cold War complicated the Franco-American relationship in new ways as American power waxed and French power waned. Washington and Paris clashed over military conflict in Vietnam, the Suez Crisis, and European security (the North Atlantic Treaty Organization or NATO, in particular) during the 1950s and 1960s. Ultimately, after French President Charles de Gaulle’s retirement, the Franco-American alliance stabilized by the mid-1970s and has flourished ever since, despite brief moments of crisis, such as the 2003 Second Gulf War in Iraq.
U.S. imperialism took a variety of forms in the early 20th century, ranging from colonies in Puerto Rico and the Philippines to protectorates in Cuba, Panama, and other countries in Latin America, and open door policies such as that in China. Formal colonies would be ruled with U.S.-appointed colonial governors and supported by U.S. troops. Protectorates and open door policies promoted business expansion overseas through American oversight of foreign governments and, in the case of threats to economic and strategic interests, the deployment of U.S. marines. In all of these imperial forms, U.S. empire-building both reflected and shaped complex social, cultural, and political histories with ramifications for both foreign nations and America itself.
David A. Nichols
From 1783 to 1830, American Indian policy reflected the new American nation-state’s desire to establish its own legitimacy and authority by controlling Native American peoples and establishing orderly and prosperous white settlements in the continental interior. The Federalists focused on securing several protected enclaves of white settlement (Ohio, Kentucky, Tennessee), established—often violently—during the Revolutionary War, against Native American claims and attacks. They used treaties to draw a legal boundary between these enclaves and Indian communities, and annuities and military force to keep Indians on their side of the line. The Jeffersonian Republicans adopted a more expansive plan of development, coupled with the promotion of Native American dependency. Treaty commissioners persuaded chiefs to cede road easements and riverfront acreage that the government used to link and develop dispersed white settlements. Meanwhile, the War Department built trading factories whose cheap merchandise would lure Indians into commercial dependency, and agents offered Indian families agricultural equipment and training, hoping that Native American farmers would no longer need “extensive forests” to support themselves. These pressures helped engender nativist movements in the Old Northwest and Southeast, and Indian men from both regions fought the United States in the War of 1812, reinforcing frontier settlers’ view that Indians were a security threat. After this war’s end, the United States adopted a strategy of containment, pressuring Indian leaders to cede most of their peoples’ lands, confining Indians to enclaves, financing vocational schooling for Indian children, and encouraging Native peoples voluntarily to move west of the Mississippi. This policy, however, proved too respectful of Indian autonomy for the frontier settlers and politicians steadily gaining influence in the national government. After these settlers elected one of their own, Andrew Jackson, to the presidency, American Indian policy would enter a much more coercive and violent phase, as white Americans redefined the nation-state as a domain of white supremacy ethnically cleansed of indigenous peoples.
Oil played a central role in shaping US policy toward Iraq over the course of the 20th century. The United States first became involved in Iraq in the 1920s as part of an effort to secure a role for American companies in Iraq’s emerging oil industry. As a result of State Department efforts, American companies gained a 23.75 percent ownership share of the Iraq Petroleum Company in 1928. In the 1940s, US interest in the country increased as a result of the Cold War with the Soviet Union. To defend against a perceived Soviet threat to Middle East oil, the US supported British efforts to “secure” the region. After nationalist officers overthrew Iraq’s British-supported Hashemite monarchy in 1958 and established friendly relations with the Soviet Union, the United States cultivated an alliance with the Iraqi Baath Party as an alternative to the Soviet-backed regime. The effort to cultivate an alliance with the Baath foundered as a result of the Baath’s perceived support for Arab claims against Israel. The breakdown of US-Baath relations led the Baath to forge an alliance with the Soviet Union. With Soviet support, the Baath nationalized the Iraq Petroleum Company in 1972. Rather than producing a “supply cutoff,” nationalization, aided by Soviet economic and technical assistance, allowed for a rapid expansion of the Iraqi oil industry and an increase in Iraqi oil flowing to world markets. As Iraq experienced a dramatic oil boom in the 1970s, the United States looked to the country as a lucrative market for US exports and adopted a policy of accommodation with regard to the Baath. This policy of accommodation gave rise to close strategic and military cooperation throughout the 1980s as Iraq waged war against Iran. When Iraq invaded Kuwait and seized control of its oil fields in 1990, the United States shifted to a policy of Iraqi containment. The United States organized an international coalition that quickly ejected Iraqi forces from Kuwait, but chose not to pursue regime change for fear of destabilizing the country and the wider region. Throughout the 1990s, the United States adhered to a policy of Iraqi containment but came under increasing pressure to overthrow the Baath and dismantle its control over the Iraqi oil industry. In 2003, the United States seized upon the 9/11 terrorist attacks as an opportunity to implement this policy of regime change and oil reprivatization.
Olivia L. Sohns
Moral, political, and strategic factors have contributed to the emergence and durability of the U.S.-Israel alliance. It took decades for American support for Israel to evolve from “a moral stance” to treating Israel as a “strategic asset” to adopting a policy of “strategic cooperation.” The United States supported Israel’s creation in 1948 not only because of the lobbying efforts of American Jews but also due to humanitarian considerations stemming from the Holocaust. Beginning in the 1950s, Israel sought to portray itself as an ally of the United States on the grounds that America and Israel were fellow liberal democracies and shared a common Judeo-Christian cultural heritage. By the mid-1960s, Israel was considered a strategic proxy of American power in the Middle East in the Cold War, while the Soviet Union armed the radical Arab nationalist states and endorsed Palestinian “people’s wars of national liberation” against Israel. Over the subsequent decades, Israel repeatedly sought to demonstrate that it was allied with the United States in opposing instability in the region that might threaten U.S. interests. Israel also sought to portray itself as a liberal democracy despite its continued occupation of territories that it conquered in the Arab-Israeli War of 1967. After the terrorist attacks of September 11, 2001, and the rise of regional instability and radicalism in the Middle East following the 2003 U.S. invasion of Iraq and the Arab Spring of 2011, Israel’s expertise in the realms of counterterrorism and homeland security provided a further basis for U.S.-Israel military-strategic cooperation. Although American and Israeli interests are not identical, and there have been disagreements between the two countries regarding the best means to secure comprehensive Arab-Israeli and Israeli-Palestinian peace, the foundations of the relationship are strong enough to overcome crises that would imperil a less robust alliance.