C. J. Alvarez
The region that today constitutes the United States–Mexico borderland has evolved through various systems of occupation over thousands of years. Beginning in time immemorial, the land was used and inhabited by ancient peoples whose cultures we can only understand through the archeological record and the beliefs of their living descendants. Spain, then Mexico and the United States after it, attempted to control the borderlands but failed when confronted with indigenous power, at least until the late 19th century when American capital and police established firm dominance. Since then, borderland residents have often fiercely contested this supremacy at the local level, but the borderland has also, due to the primacy of business, expressed deep harmonies and cooperation between the U.S. and Mexican federal governments. It is a majority-minority zone in the United States, populated largely by Mexican Americans. The border is both a porous membrane across which tremendous wealth passes and a territory of interdiction in which noncitizens and smugglers are subject to unusually concentrated police attention. All of this exists within a particularly harsh ecosystem characterized by extreme heat and scarce water.
After World War II, the United States backed multinational private oil companies known as the “Seven Sisters”—five American companies (including Standard Oil of New Jersey and Texaco), one British (British Petroleum), and one Anglo-Dutch (Shell)—in their efforts to control Middle East oil and feed rising demand for oil products in the West. In 1960 oil-producing states in Latin America and the Middle East formed the Organization of the Petroleum Exporting Countries (OPEC) to protest what they regarded as the inequitable dominance of the private oil companies. Between 1969 and 1973 changing geopolitical and economic conditions shifted the balance of power from the Seven Sisters to OPEC. Following the first “oil shock” of 1973–1974, OPEC assumed control over the production and price of oil, ending the rule of the companies and humbling the United States, which suddenly found itself dependent upon OPEC for its energy security. Yet this dependence was complicated by a close relationship between the United States and major oil producers such as Saudi Arabia, which continued to adopt pro-US strategic positions even as they squeezed out the companies. Following the Iranian Revolution (1978–1979), the Iran–Iraq War (1980–1988), and the First Iraq War (1990–1991), the antagonism that colored US relations with OPEC evolved into a more comfortable, if wary, recognition of the new normal, where OPEC supplied the United States with crude oil while acknowledging the United States’ role in maintaining the security of the international energy system.
Michael R. Anderson
American strategy in the Asia Pacific over the past two centuries has been marked by strong and often contradictory impulses. On the one hand, the western Pacific has served as a fertile ground for Christian missionaries, an alluring destination for American commercial enterprises, and eventually a critical launchpad for U.S. global power projection. Yet on the other hand, American policymakers at times have subordinated Asian strategy to European-based interests, or have found themselves embroiled in area conflicts that have hampered efforts to extend U.S. regional hegemony. Furthermore, leading countries in the Asia-Pacific region at times have challenged U.S. economic and military objectives, and the assertion of “Asian values” in recent years has undermined efforts to expand Western political and cultural norms. The United States’ professed “pivot to Asia” has opened a new chapter in a centuries-long relationship, one that will determine the geopolitical fault lines of the 21st century.
Risa L. Goluboff and Adam Sorensen
The crime of vagrancy has deep historical roots in American law and legal culture. Originating in 16th-century England, vagrancy laws came to the New World with the colonists and soon proliferated throughout the British colonies and, later, the United States. Vagrancy laws took myriad forms, generally making it a crime to be poor, idle, dissolute, immoral, drunk, lewd, or suspicious. Vagrancy laws often included prohibitions on loitering—wandering around without any apparent lawful purpose—though some jurisdictions criminalized loitering separately. Taken together, vaguely worded vagrancy, loitering, and suspicious persons laws targeted objectionable “out of place” people rather than any particular conduct. They served as a ubiquitous tool for maintaining hierarchy and order in American society. Their application changed alongside perceived threats to the social fabric, at different times and places targeting the unemployed, labor activists, radical orators, cultural and sexual nonconformists, racial and religious minorities, civil rights protesters, and the poor. By the mid-20th century, vagrancy laws served as the basis for hundreds of thousands of arrests every year. But over the course of just two decades, the crime of vagrancy, virtually unquestioned for four hundred years, unraveled. Profound social upheaval in the 1960s produced a concerted effort against the vagrancy regime, and in 1972, the US Supreme Court invalidated the laws. Local authorities have spent the years since looking for alternatives to the many functions vagrancy laws once served.
Jeffrey F. Taffet
In the first half of the 20th century, and more actively in the post–World War II period, the United States government used economic aid programs to advance its foreign policy interests. US policymakers generally believed that support for economic development in poorer countries would help create global stability, which would limit military threats and strengthen the global capitalist system. Aid was offered on a country-by-country basis to guide political development; its implementation reflected views about how humanity had advanced in richer countries and how it could and should similarly advance in poorer regions. Humanitarianism did play a role in driving US aid spending, but it was consistently secondary to political considerations. Overall, while funding varied over time, amounts spent were always substantial. Between 1946 and 2015, the United States offered almost $757 billion in economic assistance to countries around the world—$1.6 trillion in inflation-adjusted 2015 dollars. Assessing the impact of this spending is difficult; there has long been disagreement among scholars and politicians about how much economic growth, if any, resulted from aid spending and similar disputes about its utility in advancing US interests. Nevertheless, for most political leaders, even without solid evidence of successes, aid often seemed to be the best option for constructively engaging poorer countries and trying to create the kind of world in which the United States could be secure and prosperous.
The transformation of post-industrial American life in the late 20th and early 21st centuries produced several economically robust metropolitan centers that stand as new models of urban and economic life, featuring well-educated populations engaged in professional work in education, medical care, design and legal services, and artistic and cultural production. By the early 21st century, these cities dominated the nation’s consciousness economically and culturally, standing in for the most dynamic and progressive sectors of the economy and driven by concentrations of technical and creative talent. The origins of these academic and knowledge centers are rooted in the political economy, including investments shaped by federal policy and philanthropic ambition. These education and health care communities were, and remain, economically robust but also rife with racial, economic, and social inequality and riddled with resulting political tensions over development. Such information communities incubated and directed the proceeds of the new economy, but they also constrained who could access this new mode of wealth in the knowledge economy.
Christopher P. Loss
Until World War II, American universities were widely regarded as good but not great centers of research and learning. This changed completely in the press of wartime, when the federal government pumped billions into military research, anchored by the development of the atomic bomb and radar, and into the education of returning veterans under the GI Bill of 1944. The abandonment of decentralized federal–academic relations marked the single most important development in the history of the modern American university. While it is true that the government had helped to coordinate and fund the university system prior to the war—most notably the country’s network of public land-grant colleges and universities—government involvement after the war became much more hands-on, eventually leading to direct financial support to and legislative interventions on behalf of core institutional activities, not only the public land grants but the nation’s mix of private institutions as well. However, the reliance on public subsidies and legislative and judicial interventions of one kind or another ended up being a double-edged sword: state action made possible the expansion in research and in student access that became the hallmarks of the post-1945 American university; but it also created a rising tide of expectations for continued support that has proven challenging in fiscally stringent times and in the face of ongoing political fights over the government’s proper role in supporting the sector.
Megan Kate Nelson
During the American Civil War, Union and Confederate commanders made the capture and destruction of enemy cities a central feature of their military campaigns. They did so for two reasons. First, most mid-19th-century cities had factories, foundries, and warehouses within their borders, churning out and storing war materiel; military officials believed that if they interrupted or incapacitated the enemy’s ability to arm or clothe itself, the war would end. Second, it was believed that the widespread destruction of property—especially in major or capital cities—would also damage civilians’ morale, undermining their political convictions and decreasing their support for the war effort.
Both Union and Confederate armies bombarded and burned cities with these goals in mind. Sometimes they fought battles on city streets, but more often Union troops initiated long-term sieges in order to capture Confederate cities and demoralize their inhabitants. Soldiers on both sides were motivated by vengeance when they set fire to city businesses and homes; these acts were controversial, as was defensive burning—the deliberate destruction of one’s own urban center in order to keep its war materiel out of the hands of the enemy.
Urban destruction, particularly long-term sieges, took a psychological toll on (mostly southern) city residents. Many were wounded, lost property, or were forced to become refugees. Because of this, the destruction of cities during the American Civil War provoked widespread discussions about the nature of “civilized warfare” and the role that civilians played in military strategy. Both soldiers and civilians tried to make sense of the destruction of cities in writing, and also in illustrations and photographs; images in particular shaped both northern and southern memories of the war and its costs.
While colonial New Englanders gathered around town commons, settlers in the Southern colonies sprawled out on farms and plantations. The distinction had more to do with the varying objectives of these colonial settlements and the geography of deep-flowing rivers in the South than with any philosophical predilections. The Southern colonies did indeed sprout towns, but these were places oriented around planters’ residences, the Africans they enslaved, and the plantation economy, an axis that would persist through the antebellum period. Still, the aspirations of urban Southerners differed little from their Northern counterparts in the decades before the Civil War. The institution of slavery and an economy emphasizing commercial agriculture hewed the countryside close to the urban South, not only in economics but also in politics. The devastation of the Civil War rendered the ties between city and country in the South even tighter. The South participated in the industrial revolution primarily to the extent of processing crops; factories were often located in small towns and did not typically contribute to urbanization. City boosters aggressively sought and subsidized industrial development, but a poorly educated labor force and the scarcity of capital restricted economic development. Southern cities were more successful in codifying the South’s culture of white supremacy through legal segregation and the memorialization of the Confederacy. But the dislocations triggered by World War II and the billions of federal dollars poured into Southern urban infrastructure and industries generated hope among civic leaders for a postwar boom. The civil rights movement after 1950, with many of its most dramatic moments focused on the South’s cities, loosened the connection between Southern city and region as cities chose development over the stagnation that was certain to occur without a moderation of race relations. The predicted economic bonanza occurred. Young people left the rural areas and small towns of the South for the larger cities to find work in the postindustrial economy, and, for the first time in over a century, the urban South received migrants in appreciable numbers from other parts of the country and the world. Spatial distinctions and historical differences (particularly those related to the Civil War) linger in Southern cities, but exceptionalism is a fading characteristic.
Between 1880 and 1929, industrialization and urbanization expanded in the United States faster than ever before. Industrialization, meaning manufacturing in factory settings using machines and a labor force divided into specialized tasks to increase production, stimulated urbanization, meaning the growth of cities in both population and physical size. During this period, urbanization spread out into the countryside and up into the sky, thanks to new methods for constructing taller buildings. Concentrating people in small areas accelerated economic activity, thereby producing more industrial growth. Industrialization and urbanization thus reinforced one another, augmenting the speed with which such growth would otherwise have occurred.
Industrialization and urbanization affected Americans everywhere, but especially in the Northeast and Midwest. Technological developments in construction, transportation, and illumination, all connected to industrialization, changed cities forever, most immediately those north of Washington, DC and east of Kansas City. Cities themselves fostered new kinds of industrial activity on large and small scales. Cities were also the places where businessmen raised the capital needed to industrialize the rest of the United States. Later changes in production and transportation made urbanization less acute by making it possible for people to buy cars and live further away from downtown areas in new suburban areas after World War II ended.
James J. Connolly
The convergence of mass politics and the growth of cities in 19th-century America produced sharp debates over the character of politics in urban settings. The development of what came to be called machine politics, primarily in the industrial cities of the East and Midwest, generated pointed criticism of its reliance on the distribution of patronage and favor trading, its emphatic partisanship, and the plebeian character of the “bosses” who practiced it. Initially, upper- and middle-class businessmen spearheaded opposition to this kind of politics, but during the late 19th and early 20th centuries, labor activists, women reformers, and even some ethnic spokespersons confronted “boss rule” as well. These challenges did not succeed in ending machine politics where it was well established, but the reforms they generated during the Progressive Era reshaped local government in most cities. In the West and Southwest, where cities were younger and partisan organizations less entrenched, business leaders implemented Progressive municipal reforms to consolidate their power. Whether dominated by a reform regime or a party machine, urban politics and governance became more centralized by 1940 and less responsive to the concerns and demands of workers and immigrants.
Urban politics provides a means to understand the major political and economic trends and transformations of the last seventy years in American cities. The growth of the federal government, the emergence of powerful new identity- and neighborhood-based social movements, and large-scale economic restructuring have characterized American cities since 1945. The postwar era witnessed an expansion in the scope and scale of the federal government, which had a direct impact on urban space and governance, particularly as urban renewal fundamentally reshaped the urban landscape and power configurations. Urban renewal and liberal governance nevertheless spawned new and often violent tensions and powerful opposition movements among old and new residents. These movements engendered a generation of city politicians who assumed power in the 1970s. Yet all of these figures were forced to grapple with the larger forces of capital flight, privatization, the war on drugs, mass incarceration, immigration, and gentrification. This confluence of factors meant that as many American cities and their political representatives became demographically more diverse by the 1980s and 1990s, they also became increasingly separated by neighborhood boundaries and divided by the forces of class and economic inequality.
Rioting in the United States since 1800 has adhered to three basic traditions: regulating communal morality, defending community from outside threats, and protesting government abuse of power. Typically, crowds have had the shared interests of class, group affiliation, geography, or a common enemy. Since American popular disorder has frequently served as communal policing, the state—especially municipal police—has had an important role in facilitating, constraining, or motivating unrest.
Rioting in the United States retained strong legitimacy and popular resonance from 1800 to the 1960s. In the decades after the founding, Americans adapted English traditions of restrained mobbing to more diverse, urban conditions. During the 19th century, however, rioting became more violent and ambitious as Americans—especially white men—asserted their right to use violence to police heterogeneous public space. In the 1840s and 1850s, whites combined the lynch mob with the disorderly crowd to create a lethal and effective instrument of white settler sovereignty both in the western territories and in the states. From the 1860s to the 1930s, white communities across the country, particularly in the South, used racial killings and pogroms to seize political power and establish and enforce Jim Crow segregation. Between the 1910s and the 1970s, African Americans and Latinos, increasingly living in cities, rioted to defend their communities against civilian and police violence. The frequency of rioting declined after the urban rebellions of the 1960s, partly due to the militarization of local police. Yet the continued use of aggressive police tactics against racial minorities has contributed to a surge in rioting in US cities in the early 21st century.
J. Mark Souther
Prior to the railroad age, American cities generally lacked reputations as tourist destinations. As railroads created fast, reliable, and comfortable transportation in the 19th century, urban tourism emerged in many cities. Luxury hotels, tour companies, and guidebooks were facilitating and shaping tourists’ experience of cities by the turn of the 20th century. Many cities hosted regional or international expositions that served as significant tourist attractions from the 1870s to the 1910s. Thereafter, cities competed more keenly to attract conventions. Tourism promotion, once handled chiefly by railroad companies, became increasingly professionalized with the formation of convention and visitor bureaus. The rise of the automobile spurred the emergence of motels and theme parks on the suburban periphery, but renewed interest in historic urban cores prompted historic preservation activism and adaptive reuse of old structures for dining, shopping, and entertainment. Although a few cities, especially Las Vegas, had relied heavily on tourism almost from their inception, by the last few decades of the 20th century few cities could afford to ignore tourism development. New waterfront parks, aquariums, stadiums, and other tourist and leisure attractions facilitated the symbolic transformation of cities from places of production to sites of consumption. Long aimed at a mass market, especially affluent and middle-class whites, tourism promotion embraced market segmentation in the closing years of the 20th century, and a number of attractions and tours appealed to African American or LGBTQ communities. If social commentators often complained that cities were developing “tourist bubbles” that concentrated the advantages of tourism in too-small areas and in too few hands, recent trends point to a greater willingness to disperse tourist activity more widely in cities. By the 21st century, urban tourism was indispensable to many cities even as it continued to contribute to uneven development.
Relations between the United States and Argentina can best be described as a cautious embrace punctuated by moments of intense frustration. Although never the center of U.S.–Latin American relations, Argentina has attempted to create a position of influence in the region. As a result, the United States has worked with Argentina and other nations of the Southern Cone—the region of South America that comprises Uruguay, Paraguay, Argentina, Chile, and southern Brazil—on matters of trade and economic development as well as hemispheric security and leadership. While Argentina has attempted to assert its position as one of Latin America’s most developed nations and therefore a regional leader, the equal partnership sought from the United States never materialized for the Southern Cone nation. Instead, competition for markets and U.S. interventionist and unilateral tendencies kept Argentina from attaining the influence and wealth it so desired. At the same time, the United States saw Argentina as an unreliable ally too sensitive to the pull of its volatile domestic politics. The two nations enjoyed moments of cooperation in World War I, the Cold War, and the 1990s, when Argentine leaders could balance this particular external partnership with internal demands. Yet at these times Argentine leaders found themselves walking a fine line as detractors back home saw cooperation with the United States as a violation of their nation’s sovereignty and autonomy. There has always been potential for a productive partnership, but each side’s intransigence and unique concerns limited this relationship’s accomplishments and led to a historical imbalance of power.
The war against Japan (1941–1945) gave rise to a uniquely enduring alliance between the United States, Australia, and New Zealand. Rooted in overlapping geopolitical interests and shared Western traditions, tripartite relationships forged in the struggles against fascism in World War II deepened as Cold War conflicts erupted in East and Southeast Asia. War in Korea drew the three Pacific democracies into a formal alliance, ANZUS. In the aftermath of defeat in Vietnam, however, American hegemony confronted new challenges, regionally and globally. A more fluid geopolitical environment replaced the alliance certainties of the early Cold War. ANZUS splintered but was not permanently broken. Thus the ebb and flow of tripartite relationships from the attack on Pearl Harbor to the first decades of the “Pacific Century” shifted as the “war on terror” and, in a very different way, the “rise of China” revitalized trilateral cooperation and resuscitated the ANZUS agreement.
James F. Siekmeier
Throughout the 19th and 20th centuries, U.S. officials often viewed Bolivia as both a potential “test case” for U.S. economic foreign policy and a place where Washington’s broad visions for Latin America might be implemented relatively easily. After World War II, Washington leaders sought to show both Latin America and the nonindustrialized world that a relatively open economy could produce significant economic wealth for Bolivia’s working and middle classes, thus giving the United States a significant victory in the Cold War. Washington sought a Bolivia widely open to U.S. influence, and Bolivia often seemed an especially pliable country. In order to achieve their goals, U.S. leaders dispensed a large amount of economic assistance to Bolivia in the 1950s—a remarkable development in two senses. First, the U.S. government, generally loath to aid Third World nations, gave this assistance to a revolutionary regime. Second, the U.S. aid program for Bolivia proved to be a precursor to the Alliance for Progress, the massive aid program for Latin America in the 1960s that constituted the largest U.S. economic aid program in the Third World. Although U.S. leaders achieved their goal of a relatively stable, noncommunist Bolivia, the decision in the late 1950s to significantly increase U.S. military assistance to Bolivia’s relatively small military emboldened the armed forces, which staged a coup in 1964, snuffing out democracy for nearly two decades. The country’s long history of dependency in both export markets and public- and private-sector capital investment led Washington leaders to think that dependency would translate into leverage over Bolivian policy. However, the historical record is mixed in this regard. Some Bolivian governments have accommodated U.S. demands; others have successfully resisted them.
Although never enemies, the United States and Brazil have a complex history stemming primarily from the significant imbalance in power between the Western Hemisphere’s two largest nations. The bedrock of the relationship, trade, was established in the 19th century due to the rapid growth in US demand for Brazilian coffee, and since then commercial disputes have been a constant feature of the relationship. Brazil’s periodic attempts to use cooperation with Washington to enhance its own economic and diplomatic status during the 20th century generally fell short of expectations due to the relative lack of weight the United States gave to Brazilian objectives. Consequently, Brazilian foreign policy has swung between advocating closer ties with the United States and asserting the country’s autonomy from the colossus to the north. American support for the 1964 military coup left a persistent legacy of suspicion. In the early 21st century, the two countries enjoy relatively good relations.
Brazil and the United States also have a rich history of transnational interactions, encompassing areas such as culture, race, business, trade unionism, and human rights. Both countries’ processes of racial and national identity formation have been influenced by the other. US business figures have at different times attempted to shape Brazil’s economic development along their preferred lines, while US culture has been used to further Washington’s political objectives. During the dictatorship, transnational actors worked together to push back against the regime and US national security policy. This history of transnational relations has become an increasingly important part of the scholarship on the United States and Brazil.
The United States has shared an intricate and turbulent history with Caribbean islands and nations since its inception. In its relations with the Caribbean, the United States has displayed the dueling tendencies of imperialism and anticolonialism that characterized its foreign policy with South America and the rest of the world. For nearly two and a half centuries, the Caribbean has stood at the epicenter of some of the US government’s most controversial and divisive foreign policies. After the American Revolution severed political ties between the United States and the British West Indies, US officials and traders hoped to expand their political and economic influence in the Caribbean. US trade in the Caribbean played an influential role in the events that led to the War of 1812. The Monroe Doctrine provided a blueprint for reconciling imperial ambitions in the Caribbean with anti-imperial sentiment. During the mid-19th century, Americans debated the propriety of annexing Caribbean islands, especially Cuba. After the Spanish-American War of 1898, the US government took an increasingly imperialist approach to its relations with the Caribbean, acquiring some islands as federal territories and augmenting its political, military, and economic influence in others. Contingents of the US population and government disapproved of such imperialistic measures, and beginning in the 1930s the US government softened, but did not relinquish, its influence in the Caribbean. Between the 1950s and the end of the Cold War, US officials wrestled with how to exert influence in the Caribbean in a postcolonial world. Since the end of the Cold War, the United States has intervened in Caribbean domestic politics to enhance democracy, continuing its oscillation between democratic and imperial impulses.
Evan D. McCormick
Since gaining independence in 1823, the states comprising Central America have had a front-row seat to the rise of the United States as a global superpower. Indeed, more so than anywhere else, the United States has sought to use its power to shape Central America into a system that heeds US interests and abides by principles of liberal democratic capitalism. Relations have been characterized by US power wielded freely by officials and non-state actors alike to override the aspirations of Central American actors in favor of US political and economic objectives: from the days of US filibusters invading Nicaragua in search of territory; to the occupations of the Dollar Diplomacy era, designed to maintain financial and economic stability; to the covert interventions of the Cold War era. For their part, the Central American states have, at various times, sought to resist US hegemony, most effectively when coordinating their foreign policies to balance against US power. These efforts—even when not rejected by the United States—have generally been short-lived, hampered by economic dependency and political rivalries. The result is a history of US-Central American relations that wavers between confrontation and cooperation but is remarkable for the consistency of its main element: US dominance.