1-11 of 11 Results for:

  • Political History
  • 20th Century: Pre-1945
  • 20th Century: Post-1945

Article

The NAACP, established in 1909, was formed as an integrated organization to confront racism in the United States rather than treating the issue as simply a southern problem. It is the longest-running civil rights organization in the United States and continues to operate today. Originally named the National Negro Committee, the organization adopted the name NAACP on May 30, 1910. Organized to promote racial equality and integration, the NAACP pursued this goal via legal cases, political lobbying, and public campaigns. Early campaigns involved lobbying for national anti-lynching legislation, pursuing desegregation in areas such as housing and higher education through the US Supreme Court, and fighting for voting rights. The NAACP is renowned for the US Supreme Court case of Brown v. Board of Education (1954), which desegregated primary and secondary schools and is seen as a catalyst for the civil rights movement (1955–1968). The organization also promoted public education, highlighting African American achievements in education and the arts to counteract racial stereotypes, and it published a monthly journal, The Crisis, championing African American art forms and culture as another means to advance equality. NAACP branches were established all across the United States and became a network of information, campaigning, and finance that underpinned activism. Youth groups and university branches mobilized younger members of the community, and women were invaluable to the NAACP in local, regional, and national decision-making and campaigning. The organization sought to integrate African Americans and other minorities into the American social, political, and economic model as codified by the US Constitution.

Article

Ivón Padilla-Rodríguez

Child migration has garnered widespread media coverage in the 21st century, becoming a central topic of national political discourse and immigration policymaking. Contemporary surges of child migrants are part of a much longer history of migration to the United States. In the first half of the 20th century, millions of European and Asian child migrants passed through immigration inspection stations in New York Harbor and San Francisco Bay. Even though some accompanied and unaccompanied European child migrants experienced detention at Ellis Island, most were processed and admitted into the United States fairly quickly in the early 20th century, and few were deported from Ellis Island. Predominantly accompanied Chinese and Japanese child migrants, however, like Latin American and Caribbean migrants in recent years, were more frequently subjected to family separation, abuse, detention, and deportation at Angel Island. Once inside the United States, both European and Asian children struggled to overcome poverty, labor exploitation, educational inequity, the attitudes of hostile officials, and public health problems. After World War II, Korean refugee “orphans” came to the United States under the Refugee Relief Act of 1953 and the Immigration and Nationality Act. European, Cuban, and Indochinese refugee children were admitted into the United States through a series of ad hoc programs and temporary legislation until the 1980 Refugee Act created a permanent mechanism for the admission of refugee and unaccompanied children. Exclusionary immigration laws, the hardening of US international boundaries, and the United States’ preference for refugees fleeing Communist regimes made unlawful entry the only option for thousands of accompanied and unaccompanied Mexican, Central American, and Haitian children in the second half of the 20th century. Black and brown migrant and asylum-seeking children were forced to endure educational deprivation, labor trafficking, mandatory detention, deportation, and deadly abuse by US authorities and employers at US borders and inside the country.

Article

Patrick William Kelly

The relationship between Chile and the United States pivoted on the intertwined questions of how much political and economic influence Americans would exert over Chile and the degree to which Chileans could chart their own path. Given its tradition of constitutional government and relative economic development, Chile established itself as a regional power player in Latin America. Unencumbered by the direct US military interventions that marked the history of the Caribbean, Central America, and Mexico, Chile was a leader in movements to promote Pan-Americanism, inter-American solidarity, and anti-imperialism. But the advent of the Cold War in the 1940s, and especially the 1959 Cuban Revolution, brought an increase in bilateral tensions. The United States turned Chile into a “model democracy” for the Alliance for Progress, but frustration over the program’s failure to enact meaningful social and economic reform polarized Chilean society, resulting in the election of the Marxist Salvador Allende in 1970. The most contentious period in US-Chilean relations came during the Nixon administration, which worked alongside anti-Allende Chileans to destabilize Allende’s government; the Chilean military overthrew it on September 11, 1973. The Pinochet dictatorship (1973–1990), while anti-Communist, clashed with the United States over Pinochet’s radicalization of the Cold War and the issue of Chilean human rights abuses. The Reagan administration—which came to power on a platform of reversing the Carter administration’s critique of Chile—later changed course and began to support the return of democracy to Chile, which took place in 1990. Since then, Pinochet’s legacy of neoliberal restructuring of the Chilean economy has loomed large, overshadowed perhaps only by his unexpected role in fomenting a global culture of human rights that has ended the era of impunity for Latin American dictators.

Article

The decolonization of the European overseas empires had its intellectual roots early in the modern era, but its culmination occurred during the Cold War, which loomed large in post-1945 international history. This culmination thus coincided with the American rise to superpower status and presented the United States with a dilemma. While the United States was philosophically sympathetic to the aspirations of anticolonial nationalist movements abroad, its vastly greater postwar global security burdens made it averse to the instability that decolonization might bring and that communists might exploit. This fear, and the need to share those burdens with European allies who were themselves still colonial landlords, led Washington to proceed cautiously. The three “waves” of the decolonization process—medium-sized in the late 1940s, large in the half-decade around 1960, and small in the mid-1970s—prompted the American use of a variety of tools and techniques to influence how the process unfolded. Prior to independence, this influence was usually channeled through the metropolitan authority then winding down. After independence, Washington continued and often expanded the use of these tools, in most cases on a bilateral basis. In some theaters, such as Korea, Vietnam, and the Congo, certain of these tools, notably covert espionage or overt military operations, allowed Cold War dynamics to envelop, intensify, and repossess local decolonization struggles. In most theaters, other tools, such as traditional or public diplomacy or economic or technical development aid, kept the Cold War in the background as a local transition unfolded. In all cases, the overriding American imperative was to minimize instability and neutralize actors on the ground who could invite communist gains.

Article

Kathryn Cramer Brownell

Hollywood has always been political. Since its early days, it has intersected with national, state, and local politics. As the Jewish leaders of a new entertainment industry attempted to gain a footing in a society on whose outskirts it sat, they worked hard to advance the merits of their industry to a Christian political establishment. At the local and state level, film producers faced threats of censorship and potential regulation of the more democratic spaces their theaters provided for immigrant and working-class patrons. As Hollywood gained economic and cultural influence, the political establishment took note, attempting to shape silver-screen productions and deploy Hollywood’s publicity innovations for its own purposes. Over the course of the 20th century, industry leaders forged political connections with politicians from both parties to promote their economic interests, and politically motivated actors, directors, writers, and producers across the ideological spectrum used their entertainment skills to advance ideas and messages on and off the silver screen. At times this collaboration generated enthusiasm for its ability to bring new citizens into the electoral process. At other times, however, it drew intense criticism, and fears abounded that entertainment would undermine the democratic process by privileging style over substance. As Hollywood personalities entered the political realm—for personal, professional, and political gain—the industry slowly reshaped American political life, bringing entertainment, glamor, and emotion to the political process and transforming how Americans communicate with their elected officials and, indeed, how they view their political leaders.

Article

The United States and the Kingdom of Joseon (Korea) established formal diplomatic relations after signing a “Treaty of Peace, Commerce, Amity, and Navigation” in 1882. Relations between the two states were not close, and the United States closed its legation in 1905 after Japan made Korea a protectorate in the wake of the Russo-Japanese War. No formal relations existed for the following forty-four years, but American interest in Korea grew following the 1907 Pyongyang Revival and the rapid growth of Christianity there. Activists in the Korean independence movement kept the issue of Korea alive in the United States, especially during World War I and World War II, and pressured the American government to support the re-emergence of an independent Korea. Their activism, as well as a distrust of the Soviet Union, was among the factors that spurred the United States to suggest the joint occupation of the Korean peninsula in 1945, which subsequently led to the creation of the Republic of Korea (ROK) in the American zone and the Democratic People’s Republic of Korea (DPRK) in the Soviet zone. The United States withdrew from the ROK in 1948 only to return in 1950 to thwart the DPRK’s attempt to reunite the peninsula by force during the Korean War. The war ended in stalemate, with an armistice agreement in 1953. In the same year, the United States and the ROK concluded a military alliance, and American forces have remained on the peninsula ever since. While the United States has enjoyed close political and security relations with the ROK, formal diplomatic relations have never been established between the United States and the DPRK, and the relationship between the two has been marked by increasing tensions over the latter’s nuclear program since the early 1990s.

Article

Benjamin C. Waterhouse

Political lobbying has always played a key role in American governance, but the concept of paid influence peddling has been marked by a persistent tension throughout the country’s history. On the one hand, lobbying represents a democratic process by which citizens maintain open access to government. On the other, the outsized clout of certain groups engenders corruption and perpetuates inequality. The practice of lobbying itself has reflected broader social, political, and economic changes, particularly in the scope of state power and the scale of business organization. During the Gilded Age, associational activity flourished and lobbying became increasingly the province of organized trade associations. By the early 20th century, a wide range of political reforms worked to counter the political influence of corporations. Even after the Great Depression and New Deal recast the administrative and regulatory role of the federal government, business associations remained the primary vehicle through which corporations and their designated lobbyists influenced government policy. By the 1970s, corporate lobbyists had become more effective and better organized, and trade associations spurred a broad-based political mobilization of business. Business lobbying expanded in the latter decades of the 20th century; while the number of companies with a lobbying presence leveled off in the 1980s and 1990s, the number of lobbyists per company increased steadily and corporate lobbyists grew increasingly professionalized. A series of high-profile political scandals involving lobbyists in 2005 and 2006 sparked another effort at regulation. Yet despite popular disapproval of lobbying and distaste for politicians, efforts to substantially curtail the activities of lobbyists and trade associations did not achieve significant success.

Article

While presidents have historically been the driving force behind foreign policy decision-making, Congress has used its constitutional authority to influence the process. The nation’s founders designed a system of checks and balances aimed at establishing a degree of equilibrium in foreign affairs powers. Though the president is the commander-in-chief of the armed forces and the country’s chief diplomat, Congress holds responsibility for declaring war and can also exert influence over foreign relations through its powers over taxation and appropriation, while the Senate possesses authority to approve or reject international agreements. This separation of powers compels the executive branch to work with Congress to achieve foreign policy goals, but it also sets up conflict over what policies best serve national interests and the appropriate balance between executive and legislative authority. Since the founding of the Republic, presidential power over foreign relations has accreted in fits and starts at the legislature’s expense. When core American interests have come under threat, legislators have undermined or surrendered their power by accepting presidents’ claims that defense of national interests required strong executive action. This trend peaked during the Cold War, when invocations of national security enabled the executive to amass unprecedented control over America’s foreign affairs.

Article

In 1835, Alexis de Tocqueville argued in Democracy in America that there were “two great nations in the world.” They had started from different historical points but seemed to be heading in the same direction. As expanding empires, they faced the challenges of defeating nature and constructing a civilization for the modern era. Although they adhered to different governmental systems, “each of them,” de Tocqueville declared, “seems marked out by the will of Heaven to sway the destinies of half the globe.” De Tocqueville’s words were prophetic. In the 19th century, Russian and American intellectuals and diplomats struggled to understand the roles that their countries should play in the new era of globalization and industrialization. Despite their differing understandings of how development should happen, both sides believed in their nation’s vital role in guiding the rest of the world. American adherents of liberal developmentalism often argued that a free flow of enterprise, trade, investment, information, and culture was the key to future growth. They held that the primary obligation of American foreign policy was to defend that freedom by pursuing an “open door” policy and free access to markets. They believed that the American model would work for everyone and that the United States had an obligation to share its system with the old and underdeveloped nations around it. A similar sense of mission developed in Russia. Russian diplomats had for centuries struggled to establish defensive buffers around the periphery of their empire. They had linked economic development to national security, and they had argued that their geographic expansion represented a “unification” of peoples as opposed to a conquering of them. In the 19th century, after the Napoleonic Wars and the failed Decembrist Revolution, tsarist policymakers fought to defend autocracy, orthodoxy, and nationalism from domestic and international critics. As in the United States, Imperial and later Soviet leaders envisioned themselves as the emissaries of the Enlightenment to the backward East and as protectors of tradition and order for the chaotic and revolutionary West. These visions of order clashed in the 20th century as the Soviet Union and the United States became superpowers. Conflicts began early, with the American intervention in the 1918–1921 Russian civil war. Tensions that had previously been based on differing geographic and strategic interests then assumed an ideological valence, as the fight between East and West became a struggle between the political economies of communism and capitalism. Foreign relations between the two countries experienced boom and bust cycles that took the world to the brink of nuclear holocaust and yet maintained a strategic balance that precluded the outbreak of global war for fifty years. This article will examine how that relationship evolved and how it shaped the modern world.

Article

The key pieces of antitrust legislation in the United States—the Sherman Antitrust Act of 1890 and the Clayton Act of 1914—contain broad language that has afforded the courts wide latitude in interpreting and enforcing the law. This article chronicles the judiciary’s shifting interpretations of antitrust law and policy over the past 125 years. It argues that jurists, law enforcement agencies, and private litigants have revised their approaches to antitrust to accommodate economic shocks, technological developments, and the predominant economic wisdom of the day. Over time an economic logic that prioritizes the lowest consumer prices as a signal of allocative efficiency—known as the consumer welfare standard—has replaced the older political objectives of antitrust, such as protecting independent proprietors or small businesses, or reducing wealth transfers from consumers to producers. However, a new group of progressive activists has again called for revamping antitrust so as to revive enforcement against dominant firms, especially in digital markets, and to refocus attention on the political effects of antitrust law and policy. This shift suggests that antitrust may remain a contested field for scholarly and popular debate.

Article

Zoning is a legal tool employed by local governments to regulate land development. It determines the use, intensity, and form of development in localities through enforcement of the zoning ordinance, which consists of a text and an accompanying map that divides the locality into zones. Zoning is an exercise of the police powers by local governments, typically authorized through state statutes. Components of what became the zoning process emerged piecemeal in U.S. cities during the 19th century in response to development activities deemed injurious to the health, safety, and welfare of the community. American zoning was influenced by and drew upon models already in place in German cities early in the 20th century. Following the First National Conference on Planning and Congestion, held in Washington, DC in 1909, the zoning movement spread throughout the United States. The first attempt to apply a version of the German zoning model to a U.S. city came in New York City in 1916. In the landmark case of Village of Euclid v. Ambler Realty Co. (1926), the U.S. Supreme Court ruled zoning a constitutional exercise of the police power, a precedent-setting decision that defined the parameters of land use regulation for the remainder of the 20th century. Zoning was explicitly intended to sanction regulation of real property use to serve the public interest, but frequently it was used to facilitate social and economic segregation. This was most often accomplished by controlling the size and type of housing and where high-density housing (for lower-income residents) could be placed in relation to commercial and industrial uses, and in some cases through the explicit use of racial zoning categories. The U.S. Supreme Court ruled in Buchanan v. Warley (1917) that a racial zoning plan adopted by the city of Louisville, Kentucky, violated the due process clause of the 14th Amendment. The decision, however, did not directly address the discriminatory aspects of the law, and as a result, efforts to fashion legally acceptable racial zoning schemes persisted late into the 1920s. These were succeeded by restrictive covenants prohibiting black (and other minority) occupancy of certain white neighborhoods, until their judicial enforcement was ruled unconstitutional in the late 1940s. More widespread was the use of highly differentiated residential zoning schemes and real estate steering that embedded racial and ethnic segregation into the residential fabric of American communities. The Standard State Zoning Enabling Act (SSZEA) of 1924, disseminated by the U.S. Department of Commerce, facilitated zoning and created a relatively uniform zoning process in U.S. cities, although the complexity and scope of zoning schemes differed with cities’ size and functions. Localities followed the basic form prescribed by the SSZEA largely to minimize the chance of their zoning ordinances being struck down by the courts. Nonetheless, from the 1920s through the 1970s, thousands of court cases tested aspects of zoning, but only a few reached the federal courts, and typically zoning advocates prevailed. In the 1950s and 1960s, critics charged that the fragmented city was an unintended consequence of zoning, arguing that the artificial separation of the various types of development undermined urban vitality. Zoning nevertheless remained a cornerstone of U.S. urban and suburban land regulation, and new techniques such as planned unit developments, overlay zones, and form-based codes introduced needed flexibility to reintegrate urban functions previously separated by conventional zoning approaches.