
Article

The transformation of post-industrial American life in the late 20th and early 21st centuries produced several economically robust metropolitan centers that stand as new models of urban and economic life, featuring well-educated populations engaged in professional practice in education, medical care, design and legal services, and artistic and cultural production. By the early 21st century, these cities dominated the nation’s consciousness economically and culturally, standing in for the most dynamic and progressive sectors of the economy and driven by concentrations of technical and creative talent. The origins of these academic and knowledge centers are rooted in the political economy, including investments shaped by federal policy and philanthropic ambition. Education and health care communities were, and remain, frequently economically robust but also rife with racial, economic, and social inequality and riddled with the resulting political tensions over development. These information communities fundamentally incubated and directed the proceeds of the new economy, but they also constrained who could access this new mode of wealth in the knowledge economy.

Article

Relations between the British colonies in North America and the three Scandinavian countries—Norway, Denmark, and Sweden—predate American independence. Government-level interaction was rather limited until WWII, but cultural links emerged through the extensive settlement of Swedish, Norwegian, and Danish immigrants in mid- and later nineteenth-century America, especially in the American Midwest. During WWII, the United States and Norway became allies in 1941, Denmark became a de facto Allied nation in 1944, and Sweden remained formally neutral while becoming a non-belligerent on the Allied side in 1944–1945. By the end of the war, the United States had emerged as a superpower. After initial disinterest, America strove to integrate Scandinavia into the US-led Western security system. Norway and Denmark became US allies and joined NATO as founding members in 1949. Sweden remained non-aligned, but formed close military ties to the United States in 1949–1952. Throughout the Cold War, US–Scandinavian relations were characterized by ambivalence. America and Scandinavia shared the perception of the Soviet Union as a threat and cooperated militarily, but the Scandinavian countries limited the cooperation in important respects. For example, Sweden never joined NATO, and Denmark and Norway did not allow foreign bases or nuclear weapons on their territories in peacetime. America was often frustrated with these limitations but nevertheless accepted them. The Scandinavian restrictions were partially founded on a desire to reduce the risk of a Soviet attack, but there were also fears of being controlled or dominated by the American superpower. Broader ideological factors also played a role. Mainstream Scandinavian attitudes to America, both among policymakers and the general public, ranged from strongly pro-American to highly skeptical. Americans and Scandinavians shared democratic values, but they organized their societies differently in important respects. Scandinavians were exposed to American ideas and products, accepting some and rejecting others. After the Cold War, US–Scandinavian relations were increasingly defined by issues outside Western Europe. Denmark abandoned its Cold War reservations toward America and aligned itself closely with the United States when it came to participation in expeditionary military operations. Norway and Sweden have also participated, but to a more limited extent than Denmark. For Sweden, cooperating closely and openly with the United States and NATO nevertheless contrasted with its non-aligned tradition and its often conflicted Cold War relations with the United States. After the Russian invasion of Crimea, questions about territorial defense again became more prominent in US–Scandinavian relations. Under the Trump administration, US–Scandinavian relations have been characterized by turbulence and great uncertainty, even though cooperation continues in many areas.

Article

Tanvi Madan

Policymakers and analysts have traditionally described US relations with India as moving from estrangement during the Cold War and immediate post–Cold War period to engagement after 1999. The reality has been more complex, interspersing periods of estrangement, indifference, and engagement, with the latter dominating the first two decades of the 21st century. The nature of the relationship has been determined by a variety of factors and actors, with American perceptions of India shaped by strategic and economic considerations as well as the exchange of ideas and people. The overall state of the US relationship with India after 1947 has been determined by where that country has fit into Washington’s strategic framework, and Delhi’s ability and willingness to play the role envisioned for it. When American and Indian policymakers have seen the other country as important and useful, they have sought to strengthen US-India relations. In those periods, they have also been more willing to manage the differences that have always existed between the two countries at the global, regional, and bilateral levels. But when strategic convergence between the two countries is missing, differences have taken center stage.

Article

Public opinion has been part of US foreign relations in two key ways. As one would expect in a democracy, the American public has shaped the foreign policy of its government. No less significantly, the United States has sought to influence foreign public opinion as a tool of its diplomacy, now known as public diplomacy. The US public has also been a target of foreign attempts at influence with varying degrees of success. While analysis across the span of US history reveals a continuity of issues and approaches, issues of public opinion gained unprecedented salience in the second decade of the 21st century. This salience was not matched by scholarship.

Article

The Federalist Era (1788–1800) witnessed the birth of the new American Constitution and ushered in a period of strong federal government headed by a president and a bicameral Congress. The new American government sought to protect American interests in a turbulent time. From threats from Barbary pirates in the Mediterranean Sea to the turmoil of Revolutionary France and the slave revolt in Haiti, the young republic had to navigate difficult political waters in order to protect itself. Furthermore, it had to deal with the British and Spanish, who remained in American territory, without starting another war. Additionally, the United States had to engage with various Native American tribes in the interior of the continent to end the threat of war on the American frontier. Later in the period, relations between the United States and the new French Republic became strained, which led to the diplomatic embarrassment of the XYZ Affair and an undeclared naval war between the two countries. American foreign policy during the Federalist Era was a matter of trial and error because there had been no standard protocol for dealing with international incidents under the old government. George Washington, the first president under the new Constitution, shouldered the burden of creating the new American foreign policy. Washington, along with cabinet members such as Secretary of State Thomas Jefferson and Secretary of the Treasury Alexander Hamilton, helped shape US foreign policy in the Federalist Era. Washington was succeeded by his vice president, John Adams, who guided America through tense times, including conflict with France. With the creation of the American Constitution, Washington and other Federalist leaders had the difficult task of building a new nation, which included forging a foreign policy. The goal of the fledgling American republic’s foreign policy was to protect American sovereignty in an era of perpetual threats.

Article

Thomas P. Cavanna

In its most general sense, grand strategy can be defined as the overarching vision that shapes a state’s foreign policy and approach to national security. Like any strategy, it requires the coherent articulation of the state’s ends and means, which necessitates prioritizing vital interests, identifying key threats and opportunities, and (within certain limits) adapting to circumstances. What makes it truly “grand” is that it encompasses both wartime and peacetime, harnesses immediate realities to long-term objectives, and requires the coordination of all instruments of power (military, economic, etc.). Although American leaders have practiced grand strategic thinking since the early days of the Republic, the concept of grand strategy itself only started to emerge during World War I due to the expansion and diversification of the state’s resources and prerogatives, the advent of industrial warfare, and the growing role of populations in domestic politics and international conflicts. Moreover, it was only during World War II that it detached itself from military strategy and gained real currency among decision-makers. The contours, desirability, and very feasibility of grand strategy have inspired lively debates. However, many scholars and leaders consider it a worthy (albeit complex) endeavor that can reduce the risk of resource-squandering, signal intentions to both allies and enemies, facilitate adjustments to international upheavals, and establish a baseline for accountability. America’s grand strategy evolved from relative isolationism to full-blown liberal internationalism after 1945. Yet its conceptualization and implementation are inherently contentious processes because of political/bureaucratic infighting and recurrent dilemmas such as the uncertain geographic delimitation of US interests, the clash of ideals and Realpolitik, and the tension between unilateralism and multilateralism. The end of the Cold War, the 9/11 attacks, China’s rise, and other challenges have further compounded those lines of fracture.

Article

Evan D. McCormick

Since gaining independence in 1823, the states comprising Central America have had a front seat to the rise of the United States as a global superpower. Indeed, more so than anywhere else, the United States has sought to use its power to shape Central America into a system that heeds US interests and abides by principles of liberal democratic capitalism. Relations have been characterized by US power wielded freely by officials and non-state actors alike to override the aspirations of Central American actors in favor of US political and economic objectives: from the days of US filibusters invading Nicaragua in search of territory; to the occupations of the Dollar Diplomacy era, designed to maintain financial and economic stability; to the covert interventions of the Cold War era. For their part, the Central American states have, at various times, sought to challenge US hegemony, most effectively when coordinating their foreign policies to balance against US power. These efforts—even when not rejected by the United States—have generally been short-lived, hampered by economic dependency and political rivalries. The result is a history of US-Central American relations that wavers between confrontation and cooperation, but is remarkable for the consistency of its main element: US dominance.

Article

In the 20th century, US policymakers often attempted to solve domestic agricultural oversupply problems by extending food aid to foreign recipients. In some instances, the United States donated food in times of natural disaster. In other instances, the United States offered commodities to induce foreign governments to support US foreign policy aims or to spur agricultural modernization. These efforts coalesced during the 1950s with the enactment of Public Law 480, commonly known as the Food for Peace program, which provided a formal bureaucratic mechanism for the disbursement of commodities. Throughout the second half of the 20th century, successive presidential administrations continued to deploy commodities to advance their often disparate foreign policy objectives.

Article

Richard N. L. Andrews

Between 1964 and 2017, the United States adopted the concept of environmental policy as a new focus for a broad range of previously disparate policy issues affecting human interactions with the natural environment. These policies ranged from environmental health, pollution, and toxic exposure to management of ecosystems, resources, and use of the public lands, environmental aspects of urbanization, agricultural practices, and energy use, and negotiation of international agreements to address global environmental problems. In doing so, it nationalized many responsibilities that had previously been considered primarily state or local matters. It changed the United States’ approach to federalism by authorizing new powers for the federal government to set national minimum environmental standards and regulatory frameworks with the states mandated to participate in their implementation and compliance. Finally, it explicitly formalized administrative procedures for federal environmental decision-making with stricter requirements for scientific and economic justification rather than merely administrative discretion. In addition, it greatly increased public access to information and opportunities for input, as well as for judicial review, thus allowing citizen advocates for environmental protection and appreciative uses equal legitimacy with commodity producers to voice their preferences for use of public environmental resources. These policies initially reflected widespread public demand and broad bipartisan support. Over several decades, however, they became flashpoints, first, between business interests and environmental advocacy groups and, subsequently, between increasingly ideological and partisan agendas concerning the role of the federal government. Beginning in the 1980s, the long-standing Progressive ideal of the “public interest” was increasingly supplanted by a narrative of “government overreach,” and the 1990s witnessed campaigns to delegitimize the underlying evidence justifying environmental policies by labeling it “junk science” or a “hoax.” From the 1980s forward, the stated priorities of environmental policy vacillated repeatedly between presidential administrations and Congresses supporting continuation and expansion of environmental protection and preservation policies versus those seeking to weaken or even reverse protections in favor of private-property rights and more damaging uses of resources. Yet despite these apparent shifts, the basic environmental laws and policies enacted during the 1970s remained largely in place: political gridlock, in effect, maintained the status quo, with the addition of a very few innovations such as “cap and trade” policies. One reason was that environmental policies retained considerable latent public support: in electoral campaigns, they were often overshadowed by economic and other issues, but they still aroused widespread support in their defense when threatened. Another reason was that decisions by the courts also continued to reaffirm many existing policies and to reject attempts to dismantle them. With the election of Donald Trump in 2016, along with conservative majorities in both houses of Congress, US environmental policy came under the most hostile and wide-ranging attack since its origins. More than almost any other issue, the incoming president targeted environmental policy for rhetorical attacks and budget cuts, and sought to eradicate the executive policies of his predecessor, weaken or rescind protective regulations, and undermine the regulatory and even the scientific capacity of the federal environmental agencies. In the early 21st century, it is as yet unclear how much of his agenda will actually be accomplished, or whether, as in past attempts, much of it will ultimately be blocked by Congress, the courts, public backlash, and business and state government interests seeking stable policy expectations rather than disruptive deregulation.

Article

The United States was extremely reluctant to get drawn into the wars that erupted in Asia in 1937 and Europe in 1939. Deeply disillusioned with the experience of World War I, when the large number of trench warfare casualties had resulted in a peace that many Americans believed betrayed the aims they had fought for, the United States sought to avoid all forms of entangling alliances. Deeply embittered by the Depression, which was widely blamed on international bankers and businessmen, Congress enacted legislation that sought to prevent these actors from drawing the country into another war. The American aim was neutrality, but the underlying strength of the United States made it too big to be impartial—a problem that Roosevelt had to grapple with as Germany, Italy, and Japan began to challenge the international order in the second half of the 1930s.

Article

Thomas Jefferson was a key architect of early American foreign policy. He had a clear vision of the place of the new republic in the world, which he articulated in a number of writings and state papers. The key elements of his strategic vision were geographic expansion and free trade. Throughout his long public career, Jefferson sought to realize these ends, particularly during his time as US minister to France, secretary of state, vice president, and president. He believed that the United States should expand westward and that its citizens should be free to trade globally. He sought to maintain the right of the United States to trade freely during the wars arising from the French Revolution and its aftermath. This led to his greatest achievement, the Louisiana Purchase, but also to conflicts with the Barbary States and, ultimately, Great Britain. He believed that the United States should usher in a new world of republican diplomacy and that it would be in the vanguard of the global republican movement. In the literature on US foreign policy, historians have tended to identify two main schools of practice, dividing practitioners into idealists and realists. Jefferson is often regarded as the founder of the idealist tradition. This somewhat misreads him. While he pursued clear idealistic ends—a world dominated by republics freely trading with each other—he did so using a variety of methods, including diplomacy, war, and economic coercion.

Article

While American gambling has a historical association with the lawlessness of the frontier and with the wasteful leisure practices of Southern planters, it was in large cities where American gambling first flourished as a form of mass leisure, and as a commercial enterprise of significant scale. In the urban areas of the Mid-Atlantic, the Northeast, and the upper Midwest, for the better part of two centuries the gambling economy was deeply intertwined with municipal politics and governance, the practices of betting were a prominent feature of social life, and controversies over the presence of gambling, both legal and illegal, were at the center of public debate. In New York and Chicago in particular, but also in Cleveland, Pittsburgh, Detroit, Baltimore, and Philadelphia, gambling channeled money to municipal police forces and sustained machine politics. In the eyes of reformers, gambling corrupted governance and corroded social and economic interactions. Big-city gambling has changed over time, often in a manner reflecting important historical processes and transformations in economics, politics, and demographics. Yet irrespective of such change, from the onset of Northern urbanization during the 19th century, through much of the 20th century, gambling held steady as a central feature of city life and politics. From the poolrooms where recently arrived Irish New Yorkers bet on horseracing after the Civil War, to the corner stores where black and Puerto Rican New Yorkers bet on the numbers game in the 1960s, the gambling activity that covered the urban landscape produced argument and controversy, particularly with respect to drawing the line between crime and leisure, and over the question of where and to what ends the money of the gambling public should be directed.

Article

As places of dense habitation, cities have always required coordination and planning. City planning has involved the design and construction of large-scale infrastructure projects to provide basic necessities such as a water supply and drainage. By the 1850s, immigration and industrialization were fueling the rise of big cities, creating immense, collective problems of epidemics, slums, pollution, gridlock, and crime. From the 1850s to the 1900s, both local governments and utility companies responded to this explosive physical and demographic growth by constructing a “networked city” of modern technologies such as gaslight, telephones, and electricity. Building the urban environment also became a wellspring of innovation in science, medicine, and administration. In 1909–1910, a revolutionary idea—comprehensive city planning—opened a new era of professionalization and institutionalization in the planning departments of city halls and universities. Over the next thirty-five years, however, wars and depression limited their influence. The period from 1945 to 1965, in contrast, represents the golden age of formal planning. During this unprecedented period of peace and prosperity, academically trained experts played central roles in the modernization of the inner cities and the sprawl of the suburbs. But the planners’ clean-sweep approach to urban renewal and the massive destruction caused by highway construction provoked a revolt of the grassroots. Beginning in the Watts district of Los Angeles in 1965, mass uprisings escalated over the next three years into a national crisis of social disorder, racial and ethnic inequality, and environmental injustice. The postwar consensus of theory and practice was shattered, replaced by a fragmented profession ranging from defenders of top-down systems of computer-generated simulations to proponents of advocacy planning from the bottom up. Since the late 1980s, the ascendancy of public-private partnerships in building the urban environment has favored the planners promoting systems approaches, who promise a future of high-tech “smart cities” under their complete control.

Article

D. Bradford Hunt

Public housing emerged during the New Deal as a progressive effort to end the scourge of dilapidated housing in American cities. Reformers argued that the private market had failed to provide decent, safe, and affordable housing, and they convinced Congress to provide deep subsidies to local housing authorities to build and manage modern, low-cost housing projects for the working poor. Well-intentioned but ultimately misguided policy decisions encouraged large-scale developments, concentrated poverty and youth, and starved public housing of needed resources. Further, the antipathy of private interests to public competition and the visceral resistance of white Americans to racial integration saddled public housing with many enemies and few friends. While residents often formed tight communities and fought for improvements, stigmatization and neglect undermined the success of many projects; a sizable fraction became disgraceful and tangible symbols of systemic racism toward the nation’s African American poor. Federal policy had few answers and retreated in the 1960s, eventually making a neoliberal turn to embrace public-private partnerships for delivering affordable housing. Housing vouchers and tax credits effectively displaced the federal public housing program. In the 1990s, the Clinton administration encouraged the demolition and rebuilding of troubled projects using vernacular “New Urbanist” designs to house “mixed-income” populations. Policy problems, political weakness, and an ideology of homeownership in the United States meant that a robust, public-centered program of housing for use rather than profit could not be sustained.

Article

Michael Patrick Cullinane

Between 1897 and 1901 the administration of Republican President William McKinley transformed US foreign policy traditions and set a course for empire through interconnected economic policies and an open aspiration to achieve greater US influence in global affairs. The primary changes he undertook as president included the arrangement of inter-imperial agreements with world powers, a willingness to use military intervention as a political solution, the establishment of a standing army, and the adoption of a “large policy” that extended American jurisdiction beyond the North American continent. Opposition to McKinley’s policies coalesced around the annexation of the Philippines and the suppression of the Boxer Rebellion in China. Anti-imperialists challenged McKinley’s policies in many ways, but despite fierce debate, the president’s actions and advocacy for greater American power came to define US policymaking for generations to come. McKinley’s administration merits close study.

Article

Between 1820 and 1924, nearly thirty-six million immigrants entered the United States. Prior to the Civil War, the vast majority of immigrants were northern and western Europeans, though the West Coast received Chinese immigration from the late 1840s onward. In mid-century, the United States received an unprecedented influx of Irish and German immigrants, who included a large number of Catholics and the poor. At the turn of the 20th century, the major senders of immigrants shifted to southern and eastern Europe, and Asians and Mexicans made up a growing portion of newcomers. Throughout the long 19th century, urban settlement remained a popular option for immigrants, and they contributed to the social, cultural, political, economic, and physical growth of the cities they resided in. Foreign-born workers also provided much-needed labor for America’s industrial development. At the same time, intense nativism emerged in cities in opposition to the presence of foreigners, who appeared to be unfit for American society, threats to Americans’ jobs, or sources of urban problems such as poverty. Anti-immigrant sentiment resulted in the introduction of state and federal laws for preventing the immigration of undesirable foreigners, such as the poor, southern and eastern Europeans, and Asians. Cities constituted an integral part of the 19th-century American immigration experience.

Article

Federal housing policy has been primarily devoted to maintaining the economic stability and profitability of the private-sector real estate, household finance, and home-building and supply industries since the administration of President Franklin D. Roosevelt (1933–1945). Until the 1970s, federal policy encouraged speculative residential development in suburban areas and extended segregation by race and class. The National Association of Home Builders, the National Association of Realtors, and other allied organizations strenuously opposed federal programs seeking to assist low- and middle-income households and the homeless by forcing recalcitrant suburbs to permit the construction of open-access, affordable dwellings and encouraging the rehabilitation of urban housing. During the 1980s, President Ronald Reagan, a Republican from California, argued it was the government, not the private sector, that was responsible for the gross inequities in social and economic indicators between residents of city, inner-ring, and outlying suburban communities. The civic, religious, consumer, labor, and other community-based organizations that tried to mitigate the adverse effects of the “Reagan Revolution” on the affordable housing market lacked a single coherent view or voice. Since that time, housing has become increasingly unaffordable in many metropolitan areas, and segregation by race, income, and ethnicity is on the rise once again. If the home mortgage crisis that began in 2007 is any indication, housing will continue to be a divisive political, economic, and social issue in the foreseeable future. The national housing goal of a “decent home in a suitable living environment for every American family” not only has yet to be realized, but many lawmakers now favor eliminating or further restricting the federal commitment to its realization.

Article

A fear of foreignness shaped the immigration and foreign policies of the United States up to the end of World War II. US leaders perceived nonwhite peoples of Latin America, Asia, and Europe as racially inferior, and feared that contact with them, even annexation of their territories, would invite their foreign mores, customs, and ideologies into US society. This belief in nonwhite peoples’ foreignness also influenced US immigration policy, as Washington codified laws that prohibited the immigration of nonwhite peoples to the United States, even as immigration was deemed a net gain for a US economy that was rapidly industrializing from the late 19th century through the first half of the 20th century. Ironically, this fear of foreignness fostered an aggressive US foreign policy for many of the years under study, as US leaders feared that European intervention in Latin America, for example, would undermine the United States’ regional hegemony. The fear of foreignness that seemed to oblige the United States to shore up its national security interests vis-à-vis European empires also demanded US intervention into the internal affairs of nonwhite nations. For US leaders, fear of foreignness was a two-sided coin: European aggression was encouraged by the internal instability of nonwhite nations, and nonwhite nations were unstable—and hence ripe pickings for Europe’s empires—because their citizens were racially inferior. To forestall both of these simultaneous foreign threats, the United States increasingly embedded itself in the political and economic affairs of foreign nations. This irony of opportunity (territorial acquisitions as well as immigrants who fed US labor markets) and fear (European encroachment and the racial inferiority of nonwhite peoples) lay at the root of the immigration and foreign policies of the United States up to 1945.

Article

Brooke Bauer

The Catawba Indian Nation of the 1750s developed from the integration of diverse Piedmont Indian people who belonged to and lived in autonomous communities along the Catawba River of North and South Carolina. Catawban-speaking Piedmont Indians experienced many processes of coalescence, in which thinly populated groups joined the militarily strong Iswą Indians (Catawba proper) for protection and survival. Over twenty-five groups of Indians merged with the Iswą, creating an alliance or confederation of tribal communities. They all worked together to build a unified community through kinship, traditional customs, and a shared history to form a nation, despite the effects of colonialism, which included European settlement, Indian slavery, warfare, disease, land loss, and federal termination. American settler colonialism functions to erase and exterminate Native societies through biological warfare (intentional or not), military might, seizure of Native land, and assimilation. In spite of these challenges, the Catawbas’ nation-building efforts have been constant, but in 1960 the federal government terminated its relationship with the Nation. In the 1970s, the Catawba Indian Nation filed suit to reclaim its land and its federal recognition status. Consequently, the Nation received federal recognition in 1993 and became the only federally recognized tribe in the state of South Carolina. The Nation has land seven miles east of the city of Rock Hill along the Catawba River. Tribal citizenship consists of 3,400 Catawbas, including 2,400 citizens of voting age. The tribe holds elections every four years to fill five executive positions—Chief, Assistant Chief, Secretary/Treasurer, and two at-large positions. Scholarship on Southeastern Indians focuses less on the history of the Catawba Indian Nation and more on the historical narratives of the Five Civilized Tribes, which obscures the role Catawbas played in the development of the South. Finally, a comprehensive Catawba Nation history explains how the people became Catawba and, through persistence, ensured the survival of the Nation and its people.

Article

The United States never sought to build an empire in Africa in the 19th and 20th centuries, as did European nations from Britain to Portugal. However, economic, ideological, and cultural affinities gradually encouraged the development of relations with the southern third of the continent (the modern Anglophone nations of South Africa, Zimbabwe, Zambia, and Namibia; the former Portuguese colonies of Mozambique and Angola; and a number of smaller states). With official ties limited for decades, missionaries and business concerns built a small but influential American presence, mostly in the growing European settler states. This state of affairs made the United States an important trading partner during the 20th century, but it also reinforced the idea of a white Christian civilizing mission as justification for the domination of black peoples. The United States served as a comparison point for the construction of legal systems of racial segregation in southern Africa, even as it became more politically involved in the region as part of its ideological competition with the Soviet Union. As Europe’s empires dissolved after World War II, official ties to white settler states such as South Africa, Angola, and Rhodesia (modern Zimbabwe) brought the United States into conflict with mounting demands for decolonization, self-determination, and racial equality—both international and domestic. Southern Africa illustrated the gap between a Cold War strategy predicated on Euro-American preponderance and national traditions of liberty and democracy, eliciting protests from civil and human rights groups that culminated in the successful anti-apartheid movement of the 1980s. Though the region remained a low priority at the beginning of the 21st century, American involvement in southern Africa evolved to emphasize the pursuit of social and economic improvement through democracy promotion, emergency relief, and health aid—albeit with mixed results. The history of US relations with southern Africa therefore illustrates the transformation of trans-Atlantic racial ideologies and politics over the last 150 years, first in the construction of white supremacist governance and later in the eventual rejection of this model.