
Article

During the Holocene, the present geological epoch, an increasing portion of humans began to manipulate the reproduction of plants and animals in a series of environmental practices known as agriculture. No other ecological relationship sustains as many humans as farming; no other has transformed the landscape to the same extent. The domestication of plants by American Indians followed the end of the last glacial maximum (the Ice Age). About eight thousand years ago, the first domesticated maize and squash arrived from central Mexico, spreading to every region and as far north as the subarctic boreal forest. The incursion of Europeans into North America set off widespread deforestation, soil depletion, and the spread of settlement, followed by the introduction of industrial machines and chemicals. A series of institutions sponsored publicly funded research into fertilizers and insecticides. By the late 19th century, writers and activists criticized the technological transformation of farming as destructive to the environment and rural society. During the 20th century, wind erosion contributed to the depopulation of much of the Great Plains. Vast projects in environmental engineering transformed deserts into highly productive regions of intensive fruit and vegetable production. Throughout much of the 19th and 20th centuries, access to land remained limited to whites, with American Indians, African Americans, Latinas/os, Chinese, and peoples of other ethnicities attempting to gain farms or hold on to the land they owned. Two broad periods describe the history of agriculture and the environment in that portion of North America that became the United States. In the first, the environment dominated, forcing humans to adapt at the end of thousands of years of extreme climate variability. In the second, institutional and technological change became more significant, though the environment remained a constant factor against which American agriculture took shape. A related historical pattern within this shift was the capitalist transformation of the United States. For thousands of years, households sustained themselves and exchanged some of what they produced for money. But during the 19th century, among a majority of American farmers, commodities became the entire purpose of agriculture, transforming environments to reflect commercial opportunity.

Article

Jennifer Hoyt

Relations between the United States and Argentina can be best described as a cautious embrace punctuated by moments of intense frustration. Although never the center of U.S.–Latin American relations, Argentina has attempted to create a position of influence in the region. As a result, the United States has worked with Argentina and other nations of the Southern Cone—the region of South America that comprises Uruguay, Paraguay, Argentina, Chile, and southern Brazil—on matters of trade and economic development as well as hemispheric security and leadership. While Argentina has attempted to assert its position as one of Latin America’s most developed nations and therefore a regional leader, the equal partnership sought from the United States never materialized for the Southern Cone nation. Instead, competition for markets and U.S. interventionist and unilateral tendencies kept Argentina from attaining the influence and wealth it so desired. At the same time, the United States saw Argentina as an unreliable ally too sensitive to the pull of its volatile domestic politics. The two nations enjoyed moments of cooperation in World War I, the Cold War, and the 1990s, when Argentine leaders could balance this particular external partnership with internal demands. Yet at these times Argentine leaders found themselves walking a fine line as detractors back home saw cooperation with the United States as a violation of their nation’s sovereignty and autonomy. There has always been potential for a productive partnership, but each side’s intransigence and unique concerns limited this relationship’s accomplishments and led to a historical imbalance of power.

Article

Black internationalism describes the political culture and intellectual practice forged in response to slavery, colonialism, and white imperialism. It is a historical and ongoing collective struggle against racial oppression rooted in global consciousness. While the expression of black internationalism has certainly changed across time and place, black liberation through collaboration has been and remains its ultimate goal. Since the emergence of black internationalism as a result of the transatlantic slave trade and during the Age of Revolutions, black women such as the poet Phillis Wheatley and evangelist Rebecca Protten have been at its forefront. Their writings and activism espoused an Afro-diasporic, global consciousness and promoted the cause of universal emancipation. During the 19th century, black women internationalists included abolitionists, missionaries, and clubwomen. They built on the work of their predecessors while laying the foundations for succeeding black women internationalists in the early 20th century. By World War I, a new generation of black women activists and intellectuals remained crucial parts of the International Council of Women, an organization founded by white suffragists from the United States, and the Universal Negro Improvement Association, a global organization formally led by Jamaican pan-Africanist Marcus Garvey. But they also formed an independent organization, the International Council of Women of the Darker Races (ICWDR). Within and outside of the ICWDR, black women from Africa and the African Diaspora faced and challenged discrimination on the basis of their sex and race. Their activism and intellectual work set a powerful precedent for a subsequent wave of black internationalism shaped by self-avowed black feminists.

Article

“Corporate social responsibility” is a term that first began to circulate widely in the late 1960s and early 1970s. Though it may seem to be a straightforward concept, the phrase can imply a range of activities, from minority hiring initiatives and environmentally sound operations, to funding local nonprofits and cultural institutions. The idea appeared to have developed amid increasing demands made of corporations by a number of different groups, such as the consumer movement. However, American business managers engaged in many of these practices well before that phrase was coined. As far back as the early 19th century, merchants and business owners envisioned a larger societal role. However, broader political, social, and economic developments, from the rise of Gilded Age corporations to the onset of the Cold War, significantly influenced understandings of business social responsibility. Likewise, different managers and corporations have had different motives for embracing social responsibility initiatives. Some embraced social responsibility rhetoric as a public relations tool. Others saw the concept as a way to prevent government regulation. Still others undertook social responsibility efforts because they fit well with their own socially progressive ethos. Though the terms and understandings of a business’s social responsibilities have shifted over time, the basic idea has been a perennial feature of commercial life in the United States.

Article

Tyson Reeder

The United States has shared an intricate and turbulent history with Caribbean islands and nations since its inception. In its relations with the Caribbean, the United States has displayed the dueling tendencies of imperialism and anticolonialism that characterized its foreign policy with South America and the rest of the world. For nearly two and a half centuries, the Caribbean has stood at the epicenter of some of the US government’s most controversial and divisive foreign policies. After the American Revolution severed political ties between the United States and the British West Indies, US officials and traders hoped to expand their political and economic influence in the Caribbean. US trade in the Caribbean played an influential role in the events that led to the War of 1812. The Monroe Doctrine provided a blueprint for reconciling imperial ambitions in the Caribbean with anti-imperial sentiment. During the mid-19th century, Americans debated the propriety of annexing Caribbean islands, especially Cuba. After the Spanish-American War of 1898, the US government took an increasingly imperialist approach to its relations with the Caribbean, acquiring some islands as federal territories and augmenting its political, military, and economic influence in others. Contingents of the US population and government disapproved of such imperialistic measures, and beginning in the 1930s the US government softened, but did not relinquish, its influence in the Caribbean. Between the 1950s and the end of the Cold War, US officials wrestled with how to exert influence in the Caribbean in a postcolonial world. Since the end of the Cold War, the United States has intervened in Caribbean domestic politics to enhance democracy, continuing its oscillation between democratic and imperial impulses.

Article

Patrick William Kelly

The relationship between Chile and the United States pivoted on the intertwined questions of how much political and economic influence Americans would exert over Chile and the degree to which Chileans could chart their own path. Given its tradition of constitutional government and relative economic development, Chile established itself as a regional power player in Latin America. Unencumbered by the direct US military interventions that marked the history of the Caribbean, Central America, and Mexico, Chile was a leader in movements to promote Pan-Americanism, inter-American solidarity, and anti-imperialism. But the advent of the Cold War in the 1940s, and especially the 1959 Cuban Revolution, brought an increase in bilateral tensions. The United States turned Chile into a “model democracy” for the Alliance for Progress, but frustration over its failures to enact meaningful social and economic reform polarized Chilean society, resulting in the election of Marxist Salvador Allende in 1970. The most contentious period in US-Chilean relations came during the Nixon administration, which worked alongside anti-Allende Chileans to destabilize Allende’s government, which the Chilean military overthrew on September 11, 1973. The Pinochet dictatorship (1973–1990), while anti-Communist, clashed with the United States over Pinochet’s radicalization of the Cold War and the issue of Chilean human rights abuses. The Reagan administration—which came to power on a platform that rejected the Carter administration’s critique of Chile—eventually reversed course and began to support the return of democracy to Chile, which took place in 1990. Since then, Pinochet’s legacy of neoliberal restructuring of the Chilean economy looms large, overshadowed perhaps only by his unexpected role in fomenting a global culture of human rights that has ended the era of impunity for Latin American dictators.

Article

The US working class and the institutional labor movement were shaped by anticommunism. Anticommunism preceded the founding of the Soviet Union and the Cold War, and this early history affected the later experience. It reinforced conservative positions on union issues even in the period before the Cold War, and it forged the alliances that influenced the labor movement’s direction, including the campaign to organize the South, the methods and structures of unions, and US labor’s foreign policy positions. While the Communist Party of the USA (CP) was a hierarchical organization straitjacketed by an allegiance to the Soviet Union, the unions it fostered cultivated radical democratic methods; anticommunism, by contrast, often justified opposition to militancy and obstructed progressive policies. In the hottest moments of the postwar development of domestic anticommunism, unions and their members were vilified and purged from the labor movement, forced to take loyalty oaths, and fired for their association with the CP. The Cold War in the working class removed critical perspectives on capitalism, reinforced a moderate and conservative labor officialdom, and led to conformity with the state on foreign policy issues.

Article

The first credit reporting organizations emerged in the United States during the 19th century to address problems of risk and uncertainty in an expanding market economy. Early credit reporting agencies assisted merchant lenders by collecting and centralizing information about the business activities and reputations of unknown borrowers throughout the country. These agencies quickly evolved into commercial surveillance networks, amassing huge archives of personal information about American citizens and developing credit rating systems to rank them. Shortly after the Civil War, separate credit reporting organizations devoted to monitoring consumers, rather than businesspeople, also began to emerge to assist credit-granting retailers. By the early 20th century, hundreds of local credit bureaus dissected the personal affairs of American consumers, forming the genesis of a national consumer credit surveillance infrastructure. The history of American credit reporting reveals fundamental links between the development of modern capitalism and contemporary surveillance society. These connections became increasingly apparent during the late 20th century as technological advances in computing and networked communication fueled the growth of new information industries, raising concerns about privacy and discrimination. These connections and concerns, however, are not new. They can be traced to 19th-century credit reporting organizations, which turned personal information into a commodity and converted individual biographies into impersonal financial profiles and risk metrics. As these disembodied identities and metrics became authoritative representations of one’s reputation and worth, they exerted real effects on one’s economic life chances and social legitimacy. While drawing attention to capitalism’s historical twin, surveillance, the history of credit reporting illuminates the origins of surveillance-based business models that became ascendant during the 21st century.

Article

Michael K. Rosenow

In the broader field of thanatology, scholars investigate rituals of dying, attitudes toward death, evolving trajectories of life expectancy, and more. Applying a lens of social class means studying similar themes but focusing on the men, women, and children who worked for wages in the United States. Working people were more likely to die from workplace accidents, occupational diseases, or episodes of work-related violence. In most periods of American history, it was more dangerous to be a wage worker than it was to be a soldier. The battleground was not just the shop floor but also the terrain of labor relations. American labor history has been filled with violent encounters between workers asserting their views of economic justice and employers defending their private property rights. These clashes frequently turned deadly. Labor unions and working-class communities extended an ethos of mutualism and solidarity from the union halls and picket lines to memorial services and gravesites. They lauded martyrs to movements for human dignity and erected monuments to honor the fallen. Aspects of ethnicity, race, and gender added layers of meaning that intersected with and refracted through individuals’ economic positions. Workers’ encounters with death, and the ways they made sense of loss and sacrifice, in some ways overlapped with those of Americans from other social classes in terms of religious custom, ritual practice, and material consumption. Their experiences were not entirely unique, but they diverged in significant ways.

Article

Vincent J. Cannato

The Ellis Island Immigration Station, located in New York Harbor, opened in 1892 and closed in 1954. During peak years from the 1890s until the 1920s, the station processed an estimated twelve million immigrants. Roughly 75 percent of all immigrants arriving in America during this period passed through Ellis Island. The station was run by the federal Immigration Service and represented a new era of federal control over immigration. Officials at Ellis Island were tasked with regulating the flow of immigration by enforcing a growing body of federal laws that barred various categories of “undesirable” immigrants. As the number of immigrants coming to America increased, so did the size of the inspection facility. In 1907, Ellis Island processed more than one million immigrants. The quota laws of the 1920s slowed immigration considerably and the rise of the visa system meant that Ellis Island no longer served as the primary immigrant inspection facility. For the next three decades, Ellis Island mostly served as a detention center for those ordered deported from the country. After Ellis Island closed in 1954, the facility fell into disrepair. During a period of low immigration and a national emphasis on assimilation, the immigrant inspection station was forgotten by most Americans. With a revival of interest in ethnicity in the 1970s, Ellis Island attracted more attention, especially from the descendants of immigrants who entered the country through its doors. In the 1980s, large-scale fundraising for the restoration of the neighboring Statue of Liberty led to a similar effort to restore part of Ellis Island. In 1990, the Main Building was reopened to the public as an immigration museum under the National Park Service. Ellis Island has evolved into an iconic national monument with deep meaning for the descendants of the immigrants who arrived there, as well as a contested symbol to other Americans grappling with the realities of contemporary immigration.

Article

Christoph Nitschke and Mark Rose

U.S. history is full of frequent and often devastating financial crises. They have coincided with business cycle downturns, but they have been rooted in the political design of markets. Financial crises have also drawn from changes in the underpinning cultures, knowledge systems, and ideologies of marketplace transactions. The United States’ political and economic development spawned, guided, and modified general factors in crisis causation. Broadly viewed, the reasons for financial crises have been recurrent in their form but historically specific in their configuration: causation has always revolved around relatively sudden reversals of investor perceptions of commercial growth, stock market gains, monetary availability, currency stability, and political predictability. The United States’ 19th-century financial crises, which happened in rapid succession, are best described as disturbances tied to market making, nation building, and empire creation. Ongoing changes in America’s financial system aided rapid national growth through the efficient distribution of credit to a spatially and organizationally changing economy. But complex political processes—whether Western expansion, the development of incorporation laws, or the nation’s foreign relations—also underlay the easy availability of credit. The relationship between systemic instability and ideas and ideals of economic growth, politically enacted, was then mirrored in the 20th century. Following the “Golden Age” of crash-free capitalism in the two decades after the Second World War, the recurrence of financial crises in American history coincided with the dominance of the market in statecraft. Banking and other crises were a product of political economy. The Global Financial Crisis of 2007–2008 not only once again changed the regulatory environment in an attempt to correct past mistakes, but also considerably broadened the discursive situation of financial crises as academic topics.

Article

Legal aid organizations were first created by a variety of private groups during the Civil War to provide legal advice in civil cases to the poor. The growing need for legal aid was deeply connected to industrialization, urbanization, and immigration. A variety of groups created legal aid organizations in response to labor unrest, the increasing number of women in the workforce, the founding of women’s clubs, and the slow and incomplete professionalization of the legal bar. In fact, before women could practice law or were accepted into the legal profession, a variety of middle-class women’s groups using lay lawyers provided legal aid to poor women. Yet this rich story of women’s work was later suppressed by leaders of the bar attempting to claim credit for legal aid, assert a monopoly over the practice of law, and professionalize legal assistance. Across time, the largest number of claims brought to legal aid providers involved workers trying to collect wages, domestic relations cases, and landlord-tenant issues. Until the 1960s, legal aid organizations were largely financed through private donations and philanthropic organizations. After the 1960s, the federal government provided funding to support legal aid, creating significant controversy among lawyers, legal aid providers, and activists as to what types of cases legal aid organizations could take, what services could be provided, and who was eligible. Unlike in many other countries, and unlike in criminal cases, in the United States there is no constitutional right to free counsel in civil cases. This leaves many poor and working-class people without legal advice or access to justice. Organizations providing free civil legal services to the poor are ubiquitous across the United States. They are so much a part of the modern legal landscape that it is surprising how little historical scholarship exists on such organizations. Yet the history of organized legal aid, which began during the Civil War, is a rich story that brings into view a unique range of historical actors, including women’s organizations, lawyers, social workers, community organizations, state and federal governments, and the millions of poor clients who over the last century and a half have sought legal assistance. This history of the development of legal aid is also very much a story about gender, race, professionalization, the development of the welfare state, and ultimately its slow dismantlement. In other words, the history of legal aid provides a window into the larger history of the United States while producing its own series of historical tensions, ironies, and contradictions. Although this narrative demonstrates change over time and various ruptures with the past, there are also important continuities in the history of free legal aid. Deceptively simple questions have plagued legal aid for almost a century and have also driven much of the historical scholarship on legal aid. These include: who should provide legal aid services, who should receive free legal aid, what types of cases legal aid organizations should handle, who should fund legal aid, and who benefits from legal aid.

Article

While American gambling has a historical association with the lawlessness of the frontier and with the wasteful leisure practices of Southern planters, it was in large cities that American gambling first flourished as a form of mass leisure and as a commercial enterprise of significant scale. In the urban areas of the Mid-Atlantic, the Northeast, and the upper Midwest, for the better part of two centuries the gambling economy was deeply intertwined with municipal politics and governance, the practices of betting were a prominent feature of social life, and controversies over the presence of gambling, both legal and illegal, were at the center of public debate. In New York and Chicago in particular, but also in Cleveland, Pittsburgh, Detroit, Baltimore, and Philadelphia, gambling channeled money to municipal police forces and sustained machine politics. In the eyes of reformers, gambling corrupted governance and corroded social and economic interactions. Big-city gambling has changed over time, often in a manner reflecting important historical processes and transformations in economics, politics, and demographics. Yet irrespective of such change, from the onset of Northern urbanization during the 19th century through much of the 20th century, gambling held steady as a central feature of city life and politics. From the poolrooms where recently arrived Irish New Yorkers bet on horseracing after the Civil War, to the corner stores where black and Puerto Rican New Yorkers bet on the numbers game in the 1960s, the gambling activity that covered the urban landscape produced argument and controversy, particularly with respect to drawing the line between crime and leisure and over the question of where and to what ends the money of the gambling public should be directed.

Article

Throughout US history, Americans have used ideas about gender to understand power, international relations, military behavior, and the conduct of war. Since Joan Wallach Scott called on scholars in 1986 to consider gender a “useful category of analysis,” historians have looked beyond traditional diplomatic and military sources and approaches to examine cultural sources, the media, and other evidence to try to understand the ideas that Americans have relied on to make sense of US involvement in the world. From casting weak nations as female to assuming that all soldiers are heterosexual males, Americans have deployed mainstream assumptions about men’s and women’s proper behavior to justify US diplomatic and military interventions in the world. State Department pamphlets describing newly independent countries in the 1950s and 1960s featured gendered imagery, like the picture of a young Vietnamese woman on a bicycle that was meant to symbolize South Vietnam, a young nation in need of American guidance. Language in news reports and government cables, as well as film representations of international affairs and war, expressed gendered dichotomies such as protector and protected, home front and battlefront, strong and weak leadership, and stable and rogue states. These and other episodes illustrate how thoroughly gender shaped important dimensions of the character and the making of US foreign policy, and of historians’ examinations of diplomatic and military history.

Article

A fear of foreignness shaped the immigration and foreign policies of the United States up to the end of World War II. US leaders perceived nonwhite peoples of Latin America, Asia, and Europe as racially inferior, and feared that contact with them, even annexation of their territories, would invite their foreign mores, customs, and ideologies into US society. This belief in nonwhite peoples’ foreignness also influenced US immigration policy, as Washington codified laws that prohibited the immigration of nonwhite peoples to the United States, even as immigration was deemed a net gain for a US economy that was rapidly industrializing from the late 19th century to the first half of the 20th century. Ironically, this fear of foreignness fostered an aggressive US foreign policy for many of the years under study, as US leaders feared that European intervention into Latin America, for example, would undermine the United States’ regional hegemony. The fear of foreignness that seemed to oblige the United States to shore up its national security interests vis-à-vis European empires also demanded US intervention into the internal affairs of nonwhite nations. For US leaders, fear of foreignness was a two-sided coin: European aggression was encouraged by the internal instability of nonwhite nations, and nonwhite nations were unstable—and hence ripe pickings for Europe’s empires—because their citizens were racially inferior. To forestall both of these simultaneous foreign threats, the United States increasingly embedded itself into the political and economic affairs of foreign nations. The irony of opportunity (territorial acquisitions as well as immigrants who fed US labor markets) and fear (European encroachment and the racial inferiority of nonwhite peoples) lay at the root of the immigration and foreign policies of the United States up to 1945.

Article

Between 1820 and 1924, nearly thirty-six million immigrants entered the United States. Prior to the Civil War, the vast majority of immigrants were northern and western Europeans, though the West Coast received Chinese immigration from the late 1840s onward. In mid-century, the United States received an unprecedented influx of Irish and German immigrants, who included a large number of Catholics and the poor. At the turn of the 20th century, the major senders of immigrants shifted to southern and eastern Europe, and Asians and Mexicans made up a growing portion of newcomers. Throughout the long 19th century, urban settlement remained a popular option for immigrants, and they contributed to the social, cultural, political, economic, and physical growth of the cities they resided in. Foreign-born workers also provided much-needed labor for America’s industrial development. At the same time, intense nativism emerged in cities in opposition to the presence of foreigners, who appeared to be unfit for American society, threats to Americans’ jobs, or sources of urban problems such as poverty. Anti-immigrant sentiment resulted in the introduction of state and federal laws for preventing the immigration of undesirable foreigners, such as the poor, southern and eastern Europeans, and Asians. Cities constituted an integral part of the 19th-century American immigration experience.

Article

The United States has engaged with Indigenous nations on a government-to-government basis via federal treaties representing substantial international commitments since the origins of the republic. The first treaties sent to the Senate for ratification under the Constitution of 1789 were treaties with Indigenous nations. Treaties with Indigenous nations provided the means by which approximately one billion acres of land entered the national domain of the United States prior to 1900, at an average price of seventy-five cents per acre; the United States confiscated or claimed another billion acres of Indigenous land without compensation. Despite subsequent efforts of American federal authorities to alter these arrangements, the weight of evidence indicates that the relationship remains primarily one of nation-to-nation association. Integrating the history of federal relations with Indigenous nations into the history of American foreign relations sheds important new light on the fundamental linkages between these seemingly distinct state practices from the beginnings of the American republic.

Article

Jamie L. Pietruska

The term “information economy” first came into widespread usage during the 1960s and 1970s to identify a major transformation in the postwar American economy in which manufacturing had been eclipsed by the production and management of information. However, the information economy first identified in the mid-20th century was one of many information economies that have been central to American industrialization, business, and capitalism for over two centuries. The emergence of information economies can be understood in two ways: as a continuous process in which information itself became a commodity, and as an uneven and contested—not inevitable—process in which economic life became dependent on various forms of information. The production, circulation, and commodification of information have historically been essential to the growth of American capitalism and to creating and perpetuating—and at times resisting—structural racial, gender, and class inequities in American economy and society. Yet information economies, while uneven and contested, also became more bureaucratized, quantified, and commodified from the 18th century to the 21st century. The history of information economies in the United States is also characterized by the importance of systems, networks, and infrastructures that link people, information, capital, commodities, markets, bureaucracies, technologies, ideas, expertise, laws, and ideologies. The materiality of information economies is historically inextricable from the production of knowledge about the economy, and the concepts of “information” and “economy” are themselves historical constructs that change over time. The history of information economies is not a teleological story of progress in which increasing bureaucratic rationality, efficiency, predictability, and profit inevitably led to the 21st-century age of Big Data. Nor is it the story of a single, coherent, uniform information economy. The creation of multiple information economies—at different scales in different regions—was a contingent, contested, often inequitable process that did not automatically democratize access to objective information.

Article

By serving travelers and commerce, roads and streets unite people and foster economic growth. But as they develop, roads and streets also disrupt old patterns, upset balances of power, and isolate some as they serve others. The consequent disagreements leave historical records documenting social struggles that might otherwise be overlooked. For long-distance travel in America before the middle of the 20th century, roads were generally poor alternatives, resorted to when superior means of travel, such as river and coastal vessels, canal boats, or railroads, were unavailable. Most roads were unpaved, unmarked, and vulnerable to the effects of weather. Before the railroads, for travelers willing to pay the toll, rare turnpikes and plank roads could be much better. Even in towns, unpaved streets were common until the late 19th century, and they persisted into the 20th. In the late 19th century, rapid urban growth, rural free delivery of the mails, and finally the proliferation of electric railways and bicycling contributed to growing pressure for better roads and streets. After 1910, the spread of the automobile accelerated the trend, but only with great controversy, especially in cities. Partly in response to the controversy, advocates of the automobile organized to promote state and county motor highways funded substantially by gasoline taxes; such roads were intended primarily for motor vehicles. In the 1950s, massive federal funds accelerated the trend; by then, motor vehicles were the primary transportation mode for both long and short distances. The consequences have been controversial, and alternatives have been attracting growing interest.

Article

Sophie Cooper

Irish and American histories are intertwined as a result of migration, mercantile and economic connections, and diplomatic pressures from governments and nonstate actors. The two fledgling nations were brought together by their shared histories of British colonialism, but America’s growth as an imperial power complicated any natural allegiances that were invoked across the centuries. Since the beginnings of that relationship in 1607, with the arrival of Irish migrants in America (both voluntary and forced) and the building of a transatlantic linen trade, the meaning of “Irish” has fluctuated in America, mirroring changes in both migrant patterns and international politics. The 19th century saw Ireland enter into Anglo-American diplomacy on both sides of the Atlantic, while the 20th century saw Ireland emerge from Britain’s shadow with the establishment of separate diplomatic connections between the United States and Ireland. American recognition of the newly independent Irish Free State was vital for Irish politicians on the world stage; however, the Free State’s increasingly isolationist policies from the 1930s to the 1950s alienated its American allies. The final decade of the century, however, brought America and Ireland (including both Northern Ireland and the Republic of Ireland) closer than ever before. Throughout their histories, the Irish diasporas—both Protestant and Catholic—in America have played vital roles as pressure groups and fundraisers. The history of American–Irish relations therefore brings together governmental and nonstate organizations and unites political, diplomatic, social, cultural, and economic histories which are still relevant today.