David M. Robinson
New England transcendentalism is the first significant literary movement in American history, notable principally for the influential works of Ralph Waldo Emerson, Margaret Fuller, and Henry David Thoreau. The movement emerged in the 1830s as a religious challenge to New England Unitarianism. Building on the writings of the Unitarian leader William Ellery Channing, Emerson and others such as Frederic Henry Hedge, George Ripley, James Freeman Clarke, and Theodore Parker developed a theology based on interior, intuitive experience rather than the historical truth of the Bible. By 1836 books by several important transcendentalist religious thinkers had begun to appear, including Emerson’s Nature, which employed idealist philosophy and Romantic symbolism to examine human interaction with the natural world. Emerson’s Harvard addresses, “The American Scholar” (1837) and the controversial “Divinity School Address” (1838), gave transcendental ideas a wider prominence, and also generated strong resistance that added an element of experiment and danger to the movement’s reputation. In 1840 the transcendentalists founded a journal, the Dial, and Fuller became its first editor, a position that gave her an important role in the movement and a crucial outlet for her own work in literary criticism and women’s rights.
Though it had begun as a religious movement, by the mid-1840s transcendentalism could be better described as a literary movement with growing political engagements on several fronts. Emerson proclaimed it an era of reform and aligned the transcendentalists with those who resisted the social and political status quo. In her feminist manifesto Woman in the Nineteenth Century (1845), Fuller called for the removal of both legal and social barriers to women’s full potential. In 1845 Henry David Thoreau went to live in the woods by Walden Pond; his memoir of his experience, Walden (1854), became a founding text of modern environmental thinking. Antislavery also became a key concern for many of the transcendentalists, who condemned the Fugitive Slave Act of 1850 and actively resisted the execution of the law after its passage. The transcendentalists, a nineteenth-century cultural avant-garde, continue to exert cultural influence through the durability of their writings, works that shaped many aspects of American national development.
Paul D. Miller
Afghanistan has twice been thrust to the center of US national security concerns in the past half-century: first, during the Soviet-Afghan War, when Afghanistan served as a proxy battleground for American efforts to combat Soviet influence; and second, as the frontline state and host for America’s global response to al-Qaida’s terrorist attacks of 2001. In both instances, American involvement swung from intensive investment and engagement to withdrawal and neglect. In both cases, American involvement reflected US concerns more than Afghan realities. And both episodes resulted in short-term successes for American security with long-term consequences for Afghanistan and its people. The signing of a strategic partnership agreement between the two countries in 2012 and a bilateral security agreement in 2013 created the possibility of a steadier and more forward-looking relationship—albeit one that the American and Afghan people may be less inclined to pursue as America’s longest war continues to grind on.
Gregg A. Brazinsky
Throughout the 19th and 20th centuries, America’s relationship with China ran the gamut from friendship and alliance to enmity and competition. Americans have long believed in China’s potential to become an important global actor, primarily in ways that would benefit the United States. The Chinese have at times embraced, at times rejected, and at times adapted to the US agenda. While there have been some consistent themes in this relationship, Sino-American interactions unquestionably broadened in the 20th century. Trade with China grew from its modest beginnings in the 19th and early 20th centuries into a critical part of the global economy by the 21st century. While Americans have often perceived China as a country offering significant opportunities for mutual benefit, China has also been seen as a threat and a rival. During the Cold War, the two competed vigorously for influence in Asia and Africa. Today we see echoes of this same competition as China continues to grow economically while expanding its influence abroad. The history of Sino-American relations illustrates a complex interplay of cooperation and competition, one that defines the relationship today and has widespread ramifications for global politics.
Although the League of Nations was the first permanent organization established with the purpose of maintaining international peace, it built on the work of a series of 19th-century intergovernmental institutions. The destructiveness of World War I led American and British statesmen to champion a league as a means of maintaining postwar global order. In the United States, Woodrow Wilson followed his predecessors, Theodore Roosevelt and William Howard Taft, in advocating American membership in an international peace league, although Wilson’s vision for reforming global affairs was more radical. In Britain, public opinion had begun to coalesce in favor of a league from the outset of the war, though David Lloyd George and many of his Cabinet colleagues were initially skeptical of its benefits. However, Lloyd George was determined to establish an alliance with the United States and warmed to the league idea when Jan Christian Smuts presented a blueprint for an organization that served that end.
The creation of the League was a predominantly British and American affair. Yet Wilson was unable to convince Americans to commit themselves to membership in the new organization. The Franco-British-dominated League enjoyed some early successes. Its high point was reached when Europe was infused with the “Spirit of Locarno” in the mid-1920s and the United States played an economically crucial, if politically constrained, role in advancing Continental peace. This tenuous basis for international order collapsed as a result of the economic chaos of the early 1930s, as the League proved incapable of containing the ambitions of revisionist powers in Europe and Asia. Despite its ultimate limitations as a peacekeeping body, recent scholarship has emphasized the League’s relative successes in stabilizing new states, safeguarding minorities, managing the evolution of colonies into notionally sovereign states, and policing transnational trafficking; in doing so, it paved the way for the creation of the United Nations.
For almost a century and a half, successive American governments adopted a general policy of neutrality on the world stage, eschewing involvement in European conflicts and, after the Quasi War with France, alliances with European powers. Neutrality, enshrined as a core principle of American foreign relations by the outgoing President George Washington in 1796, remained such for more than a century.
Finally, in the 20th century, the United States emerged as a world power and a belligerent in the two world wars and the Cold War. This article explores the modern conflict between traditional American attitudes toward neutrality and the global agenda embraced by successive U.S. governments, beginning with entry into the First World War. With the United States immersed in these titanic struggles, the traditional U.S. support for neutrality eroded considerably. During the First World War, the United States showed some sympathy for the predicaments of the remaining neutral powers. In the Second World War it applied considerable pressure to those states still trading with Germany. During the Cold War, the United States was sometimes impatient with the choices of states to remain uncommitted in the global struggle, while at times it showed understanding for neutrality and pursued constructive relations with neutral states. The wide varieties of neutrality in each of these conflicts complicated the choices of U.S. policy makers. Americans remained torn between memory of their own long history of neutrality and a capacity to understand its potential value, on one hand, and a predilection to approach conflicts as moral struggles, on the other.
The U.S. relationship with Southeast Asia has always reflected the state of U.S. interactions with the three major powers that surround the region: Japan, China, and, to a lesser extent, India. Initially, Americans looked at Southeast Asia as an avenue to the rich markets that China and India seemed to offer, while also finding trading opportunities in the region itself. Later, American missionaries sought to save Southeast Asian souls, while U.S. officials often viewed Southeast Asia as a region that could tip the overall balance of power in East Asia if its enormous resources fell under the control of a hostile power.
American interest expanded enormously with the annexation of the Philippines in 1899, an outgrowth of the Spanish-American War. That acquisition resulted in nearly a half-century of American colonial rule, while American investors increased their involvement in exploiting the region’s raw materials, notably tin, rubber, and petroleum, and missionaries expanded into areas previously closed to them.
American occupation of the Philippines heightened tensions with Japan, which sought the resources of Southeast Asia, particularly in French Indochina, Malaya, and the Dutch East Indies (today’s Indonesia). Eventually, clashing ambitions and perceptions brought the United States into World War II. Peeling those territories away from Japan during the war was a key American objective. Americans resisted the Japanese in the Philippines and in Burma, but after Japan quickly subdued Southeast Asia, there was little contact in the region until the reconquest began in 1944. American forces participated in the liberation of Burma and also fought in the Dutch Indies and the Philippines before the war ended in 1945.
After the war, the United States had to face the independence struggles in several Southeast Asian countries, even as the Grand Alliance fell apart and the Cold War emerged, which for the next several decades overshadowed almost everything. American efforts to prevent communist expansion in the region inhibited American support for decolonization and led to war in Vietnam and Laos and covert interventions elsewhere.
With the end of the Cold War in 1991, relations with most of Southeast Asia have generally been normal, except for Burma/Myanmar, where a brutal military junta ruled. The opposition, led by the charismatic Aung San Suu Kyi, found support in the United States. More recently American concerns with China’s new assertiveness, particularly in the South China Sea, have resulted in even closer U.S. relations with Southeast Asian countries.
R. Joseph Parrott
The United States never sought to build an empire in Africa in the 19th and 20th centuries, as did European nations from Britain to Portugal. However, economic, ideological, and cultural affinities gradually encouraged the development of relations with the southern third of the continent (the modern Anglophone nations of South Africa, Zimbabwe, Zambia, Namibia, the former Portuguese colonies of Mozambique and Angola, and a number of smaller states). With official ties limited for decades, missionaries and business concerns built a small but influential American presence, mostly in the growing European settler states. This state of affairs made the United States an important trading partner during the 20th century, but it also reinforced the idea of a white Christian civilizing mission as justification for the domination of black peoples. The United States served as a comparison point for the construction of legal systems of racial segregation in southern Africa, even as it became more politically involved in the region as part of its ideological competition with the Soviet Union.
As Europe’s empires dissolved after World War II, official ties to white settler states such as South Africa, Angola, and Rhodesia (modern Zimbabwe) brought the United States into conflict with mounting demands for decolonization, self-determination, and racial equality—both international and domestic. Southern Africa illustrated the gap between a Cold War strategy predicated on Euro-American preponderance and national traditions of liberty and democracy, eliciting protests from civil and human rights groups that culminated in the successful anti-apartheid movement of the 1980s. Though still a region of low priority at the beginning of the 21st century, American involvement in southern Africa evolved to emphasize the pursuit of social and economic improvement through democracy promotion, emergency relief, and health aid—albeit with mixed results. The history of U.S. relations with southern Africa therefore illustrates the transformation of trans-Atlantic racial ideologies and politics over the last 150 years, first in the construction of white supremacist governance and later in the eventual rejection of this model.
The relationship between the United States and the island of Ireland combines nostalgic sentimentality and intervention in the sectarian conflict known as the “Troubles.” Irish migration to the United States remains a celebrated and vital part of the American saga, while Irish American interest—and involvement—in the “Troubles” during the second half of the 20th century was a problematic issue in transatlantic relations and for those seeking to establish a peaceful political consensus on the Irish question. Paradoxically, much of the historiography of American–Irish relations addresses the social, economic, and cultural consequences of the Irish in America, yet the major political issue—namely the United States’ approach to the “Troubles”—has only recently become the subject of thorough historiographical inquiry. As much as the Irish have contributed to developments in American history, the American contribution to the Anglo-Irish process, and ultimately to the peace process, that ended the conflict in Northern Ireland is an example of the peacemaking potential of US foreign policy.
The United States was heavily involved in creating the United Nations in 1945 and drafting its charter. The United States continued to exert substantial clout in the organization after its founding, though there have been periods during which U.S. officials have met with significant opposition inside the United Nations, in Congress, and in American electoral politics, all of which produced struggles to gain support for America’s international policy goals. U.S. influence in the international organization has thus waxed and waned. The early postwar years witnessed the zenith of American prestige on the global stage. Starting in the mid- to late 1950s, as decolonization and the establishment of newly independent nations quickened, the United States began to lose influence in the United Nations owing to the spreading perception that its alliances with the European colonial powers placed it on the wrong side of history. As U.N. membership skyrocketed, the organization became more responsive to the needs and interests of the decolonizing states. During the 1970s and early 1980s, the American public responded to declining U.S. influence in the United Nations with calls to defund the organization and to pursue a unilateral approach to international challenges. The role of the United States in the United Nations was shaped by the politics of the Cold War competition with the Soviet Union. Throughout the nearly five decades of the Cold War, the United Nations served as a forum for the political and ideological rivalry between the United States and the Soviet Union, which frequently inhibited the organization from fulfilling what most considered to be its primary mission: the maintenance of global security and stability. After the collapse of the Soviet Union and the peaceful end of the Cold War, the United States enjoyed a brief period of unrivaled global hegemony. During this period, U.S. officials pursued a closer relationship with the United Nations and sought to use the organization to build support for its international policy agenda and military interventionism.
James Graham Wilson
The Cold War may have ended on the evening of November 9, 1989, when East German border guards opened up checkpoints and allowed their fellow citizens to stream into West Berlin; it certainly was over by January 28, 1992, when U.S. president George H. W. Bush delivered his annual State of the Union Address one month after President Mikhail Gorbachev had announced his resignation and the end of the Soviet Union. After the Berlin Wall came down, Bush and Gorbachev spoke of the Cold War in the past tense in person and on the telephone. The reunification of Germany and the U.S. military campaign in the Persian Gulf confirmed that reality. In January 1991, polls indicated that, for the first time, a majority of Americans believed that the Cold War was over. However, the poll results obscured the substantial foreign and domestic crises, challenges, and opportunities created by the end of the Cold War that occupied President Bush and his national-security team between November 1989 and Bush’s defeat in the 1992 presidential election, followed by the inauguration of William Jefferson Clinton as America’s first post–Cold War president in January 1993.
President Abraham Lincoln signed the law that established the Department of Agriculture in 1862, and in 1889 President Grover Cleveland signed the law that raised the Department to Cabinet status. Thus, by 1900 the US Department of Agriculture had been established for nearly four decades, had been a Cabinet-level department for one, and was recognized as a rising star among agricultural science institutions. Over the first half of the next century, the USDA would grow beyond its scientific research roots to assume a role in supporting rural and farm life more broadly, with a presence that reached across the nation. The Department acquired regulatory responsibilities in plant and animal health and food safety and quality, added research in farm management and agricultural economics, provided extension services to reach farms and rural communities in all regions, and created conservation and forestry programs to protect natural resources and prevent soil erosion and flooding across the geographical diversity of rural America. The Department gained additional responsibility for delivering credit, price supports, supply management, and rural rehabilitation programs during the severe economic depression that disrupted the agricultural economy and rural life from 1920 to 1940, while building efficient systems for encouraging production and facilitating distribution of food during the crises of World War I and World War II that bounded those decades. In the process, the Department became a pioneer in developing the regulatory state as well as in piloting programs and bureaucratic systems that empowered cooperative leadership at the federal, state, and local levels and democratic participation in implementing programs in local communities.
Lindsay M. Chervinsky
From 1775 to 1815, empire served as the most pressing foreign relations problem for the United States. Would the new nation successfully break free from the British Empire? What would an American empire look like? How would it be treated by other empires? And could Americans hold their own against European superpowers? These questions dominated the United States’ first few decades of existence and shaped its interactions with American Indian, Haitian, Spanish, British, and French peoples. The US government—first the Continental Congress, then the Confederation Congress, and finally the federal administration under the new Constitution—grappled with five key issues. First, they sought international recognition of their independence and negotiated trade deals during the Revolutionary War to support the war effort. Second, they obtained access to the Mississippi River and Port of New Orleans from Spain and France to facilitate trade and western settlement. Third, they contended with ongoing conflict with Indian nations over white settlement on Indian lands and demands from white communities for border security. Fourth, they defined and protected American neutrality, negotiated a trade policy that required European recognition of American independence, and denied recognition to Haiti. Lastly, they fought a quasi-war with France and an outright war with Great Britain beginning in 1812.
Paul V. Murphy
Americans grappled with the implications of industrialization, technological progress, urbanization, and mass immigration with startling vigor and creativity in the 1920s even as large numbers kept their eyes as much on the past as on the future. American industrial engineers and managers were global leaders in mass production, and millions of citizens consumed factory-made products, including electric refrigerators and vacuum cleaners, technological marvels like radios and phonographs, and that most revolutionary of mass-produced durables, the automobile. They flocked to commercial amusements (movies, sporting events, amusement parks) and absorbed mass culture in their homes, through the radio and commercial recordings. In the major cities, skyscrapers drew Americans upward while thousands of new miles of roads scattered them across the country. Even while embracing the dynamism of modernity, Americans repudiated many of the progressive impulses of the preceding era. The transition from war to peace in 1919 and 1920 was tumultuous, marked by class conflict, a massive strike wave, economic crisis, and political repression. Exhausted by reform, war, and social experimentation, millions of Americans recoiled from central planning and federal power and sought determinedly to bypass traditional politics in the 1920s. This did not mean a retreat from active and engaged citizenship; Americans fought bitterly over racial equality, immigration, religion, morals, Prohibition, economic justice, and politics. In a greatly divided nation, citizens experimented with new forms of nationalism, cultural identity, and social order that could be alternately exclusive and pluralistic. Whether repressive or tolerant, such efforts held the promise of unity amid diversity; even those in the throes of reaction sought new ways of integration. The result was a nation at odds with itself, embracing modernity, sometimes heedlessly, while seeking desperately to retain a grip on the past.
C. J. Alvarez
The region that today constitutes the United States–Mexico borderland has evolved through various systems of occupation over thousands of years. From time immemorial, the land was used and inhabited by ancient peoples whose cultures we can only understand through the archeological record and the beliefs of their living descendants. Spain, then Mexico and the United States after it, attempted to control the borderlands but failed when confronted with indigenous power, at least until the late 19th century when American capital and police established firm dominance. Since then, borderland residents have often fiercely contested this supremacy at the local level, but the borderland has also, due to the primacy of business, expressed deep harmonies and cooperation between the U.S. and Mexican federal governments. It is a majority-minority zone in the United States, populated largely by Mexican Americans. The border is both a porous membrane across which tremendous wealth passes and a territory of interdiction in which noncitizens and smugglers are subject to unusually concentrated police attention. All of this exists within a particularly harsh ecosystem characterized by extreme heat and scarce water.
After World War II, the United States backed multinational private oil companies known as the “Seven Sisters”—five American companies (including Standard Oil of New Jersey and Texaco), one British (British Petroleum), and one Anglo-Dutch (Shell)—in their efforts to control Middle East oil and feed rising demand for oil products in the West. In 1960 oil-producing states in Latin America and the Middle East formed the Organization of the Petroleum Exporting Countries (OPEC) to protest what they regarded as the inequitable dominance of the private oil companies. Between 1969 and 1973 changing geopolitical and economic conditions shifted the balance of power from the Seven Sisters to OPEC. Following the first “oil shock” of 1973–1974, OPEC assumed control over the production and price of oil, ending the rule of the companies and humbling the United States, which suddenly found itself dependent upon OPEC for its energy security. Yet this dependence was complicated by a close relationship between the United States and major oil producers such as Saudi Arabia, which continued to adopt pro-US strategic positions even as they squeezed out the companies. Following the Iranian Revolution (1978–1979), the Iran–Iraq War (1980–1988), and the First Iraq War (1990–1991), the antagonism that colored US relations with OPEC evolved into a more comfortable, if wary, recognition of the new normal, where OPEC supplied the United States with crude oil while acknowledging the United States’ role in maintaining the security of the international energy system.
Michael R. Anderson
American strategy in the Asia-Pacific over the past two centuries has been marked by strong and often contradictory impulses. On the one hand, the western Pacific has served as a fertile ground for Christian missionaries, an alluring destination for American commercial enterprises, and eventually a critical launchpad for U.S. global power projection. Yet on the other hand, American policymakers at times have subordinated Asian strategy to European-based interests, or have found themselves embroiled in area conflicts that have hampered efforts to extend U.S. regional hegemony. Furthermore, leading countries in the Asia-Pacific region at times have challenged U.S. economic and military objectives, and the assertion of “Asian values” in recent years has undermined efforts to expand Western political and cultural norms. The United States’ professed “pivot to Asia” has opened a new chapter in a centuries-long relationship, one that will determine the geopolitical fault lines of the 21st century.
Risa L. Goluboff and Adam Sorensen
The crime of vagrancy has deep historical roots in American law and legal culture. Originating in 16th-century England, vagrancy laws came to the New World with the colonists and soon proliferated throughout the British colonies and, later, the United States. Vagrancy laws took myriad forms, generally making it a crime to be poor, idle, dissolute, immoral, drunk, lewd, or suspicious. Vagrancy laws often included prohibitions on loitering—wandering around without any apparent lawful purpose—though some jurisdictions criminalized loitering separately. Taken together, vaguely worded vagrancy, loitering, and suspicious persons laws targeted objectionable “out of place” people rather than any particular conduct. They served as a ubiquitous tool for maintaining hierarchy and order in American society. Their application changed alongside perceived threats to the social fabric, at different times and places targeting the unemployed, labor activists, radical orators, cultural and sexual nonconformists, racial and religious minorities, civil rights protesters, and the poor. By the mid-20th century, vagrancy laws served as the basis for hundreds of thousands of arrests every year. But over the course of just two decades, the crime of vagrancy, virtually unquestioned for four hundred years, unraveled. Profound social upheaval in the 1960s produced a concerted effort against the vagrancy regime, and in 1972, the US Supreme Court invalidated the laws. Local authorities have spent the years since looking for alternatives to the many functions vagrancy laws once served.
Jeffrey F. Taffet
In the first half of the 20th century, and more actively in the post–World War II period, the United States government used economic aid programs to advance its foreign policy interests. US policymakers generally believed that support for economic development in poorer countries would help create global stability, which would limit military threats and strengthen the global capitalist system. Aid was offered on a country-by-country basis to guide political development; its implementation reflected views about how humanity had advanced in richer countries and how it could and should similarly advance in poorer regions. Humanitarianism did play a role in driving US aid spending, but it was consistently secondary to political considerations. Overall, while funding varied over time, amounts spent were always substantial. Between 1946 and 2015, the United States offered almost $757 billion in economic assistance to countries around the world—$1.6 trillion in inflation-adjusted 2015 dollars. Assessing the impact of this spending is difficult; there has long been disagreement among scholars and politicians about how much economic growth, if any, resulted from aid spending and similar disputes about its utility in advancing US interests. Nevertheless, for most political leaders, even without solid evidence of successes, aid often seemed to be the best option for constructively engaging poorer countries and trying to create the kind of world in which the United States could be secure and prosperous.
The transformation of post-industrial American life in the late 20th and early 21st centuries produced several economically robust metropolitan centers that stand as new models of urban and economic life, featuring well-educated populations engaged in education, medical care, design and legal services, and artistic and cultural production. By the early 21st century, these cities dominated the nation’s consciousness economically and culturally, standing in for the most dynamic and progressive sectors of the economy and driven by concentrations of technical and creative talent. The origins of these academic and knowledge centers are rooted in the political economy, including investments shaped by federal policy and philanthropic ambition. Communities built around education and health care were, and remain, frequently economically robust but also rife with racial, economic, and social inequality, and riddled with the resulting political tensions over development. These information communities incubated and directed the proceeds of the new economy, but they also constrained who could access this new mode of wealth in the knowledge economy.
Christopher P. Loss
Until World War II, American universities were widely regarded as good but not great centers of research and learning. This changed completely in the press of wartime, when the federal government pumped billions into military research, anchored by the development of the atomic bomb and radar, and into the education of returning veterans under the GI Bill of 1944. The abandonment of decentralized federal–academic relations marked the single most important development in the history of the modern American university. While it is true that the government had helped to coordinate and fund the university system prior to the war—most notably the country’s network of public land-grant colleges and universities—government involvement after the war became much more hands-on, eventually leading to direct financial support to and legislative interventions on behalf of core institutional activities, not only at the public land grants but at the nation’s mix of private institutions as well. However, the reliance on public subsidies and legislative and judicial interventions of one kind or another ended up being a double-edged sword: state action made possible the expansion in research and in student access that became the hallmarks of the post-1945 American university; but it also created a rising tide of expectations for continued support that has proven challenging in fiscally stringent times and in the face of ongoing political fights over the government’s proper role in supporting the sector.