Keywords: Cold War

Article

Thomas A. Reinstein

The United States has a rich history of intelligence in the conduct of foreign relations. Since the Revolutionary War, intelligence has been most relevant to U.S. foreign policy in two ways. Intelligence analysis helps to inform policy. Intelligence agencies also have carried out covert action—secret operations—to influence political, military, or economic conditions in foreign states. The American intelligence community has developed over a long period, and major changes to that community have often occurred because of contingent events rather than long-range planning. Throughout their history, American intelligence agencies have used intelligence gained from both human and technological sources to great effect. Often, U.S. intelligence agencies have been forced to rely on technological means of intelligence gathering for lack of human sources. Recent advances in cyberwarfare have made technology even more important to the American intelligence community. At the same time, the relationship between intelligence and national-security–related policymaking has often been dysfunctional. Indeed, though some American policymakers have used intelligence avidly, many others have used it haphazardly or not at all. Bureaucratic fights also have crippled the American intelligence community. Several high-profile intelligence failures tend to dominate the recent history of intelligence and U.S. foreign relations. Some of these failures were due to lack of intelligence or poor analytic tradecraft. Others came because policymakers failed to use the intelligence they had. In some cases, policymakers have also pressured intelligence officers to change their findings to better suit those policymakers’ goals. And presidents have often preferred to use covert action to carry out their preferred policies without paying attention to intelligence analysis. The result has been constant debate about the appropriate role of intelligence in U.S. foreign relations.

Article

The decolonization of the European overseas empires had its intellectual roots early in the modern era, but its culmination occurred during the Cold War that loomed large in post-1945 international history. This culmination thus coincided with the American rise to superpower status and presented the United States with a dilemma. While philosophically sympathetic to the aspirations of anticolonial nationalist movements abroad, the United States’ vastly greater postwar global security burdens made it averse to the instability that decolonization might bring and that communists might exploit. This fear, and the need to share those burdens with European allies who were themselves still colonial landlords, led Washington to proceed cautiously. The three “waves” of the decolonization process—medium-sized in the late 1940s, large in the half-decade around 1960, and small in the mid-1970s—prompted the American use of a variety of tools and techniques to influence how it unfolded. Prior to independence, this influence was usually channeled through the metropolitan authority then winding down. After independence, Washington continued and often expanded the use of these tools, in most cases on a bilateral basis. In some theaters, such as Korea, Vietnam, and the Congo, through the use of certain of these tools, notably covert espionage or overt military operations, Cold War dynamics enveloped, intensified, and subsumed local decolonization struggles. In most theaters, other tools, such as traditional or public diplomacy or economic or technical development aid, kept the Cold War in the background as a local transition unfolded. In all cases, the overriding American imperative was to minimize instability and neutralize actors on the ground who could invite communist gains.

Article

Thomas P. Cavanna

In its most general sense, grand strategy can be defined as the overarching vision that shapes a state’s foreign policy and approach to national security. Like any strategy, it requires the coherent articulation of the state’s ends and means, which necessitates prioritizing vital interests, identifying key threats and opportunities, and (within certain limits) adapting to circumstances. What makes it truly “grand” is that it encompasses both wartime and peacetime, harnesses immediate realities to long-term objectives, and requires the coordination of all instruments of power (military, economic, etc.). Although American leaders have practiced grand strategic thinking since the early days of the Republic, the concept of grand strategy itself only started to emerge during World War I due to the expansion and diversification of the state’s resources and prerogatives, the advent of industrial warfare, and the growing role of populations in domestic politics and international conflicts. Moreover, it was only during World War II that it detached itself from military strategy and gained real currency among decision-makers. The contours, desirability, and very feasibility of grand strategy have inspired lively debates. However, many scholars and leaders consider it a worthy (albeit complex) endeavor that can reduce the risk of squandering resources, signal intentions to both allies and enemies, facilitate adjustments to international upheavals, and establish a baseline for accountability. America’s grand strategy evolved from relative isolationism to full-blown liberal internationalism after 1945. Yet its conceptualization and implementation are inherently contentious processes because of political/bureaucratic infighting and recurrent dilemmas such as the uncertain geographic delimitation of US interests, the clash of ideals and Realpolitik, and the tension between unilateralism and multilateralism. The end of the Cold War, the 9/11 attacks, China’s rise, and other challenges have further deepened those fault lines.

Article

The United States was heavily involved in creating the United Nations in 1945 and drafting its charter. The United States continued to exert substantial clout in the organization after its founding, though there have been periods during which U.S. officials have met with significant opposition inside the United Nations, in Congress, and in American electoral politics, all of which produced struggles to gain support for America’s international policy goals. U.S. influence in the international organization has thus waxed and waned. The early postwar years witnessed the zenith of American prestige on the global stage. Starting in the mid- to late 1950s, as decolonization and the establishment of newly independent nations quickened, the United States began to lose influence in the United Nations owing to the spreading perception that its alliances with the European colonial powers placed it on the wrong side of history. As U.N. membership skyrocketed, the organization became more responsive to the needs and interests of the decolonizing states. During the 1970s and early 1980s, the American public responded to declining U.S. influence in the United Nations with calls to defund the organization and to pursue a unilateral approach to international challenges. The role of the United States in the United Nations was shaped by the politics of the Cold War competition with the Soviet Union. Throughout the nearly five decades of the Cold War, the United Nations served as a forum for the political and ideological rivalry between the United States and the Soviet Union, which frequently inhibited the organization from fulfilling what most considered to be its primary mission: the maintenance of global security and stability. After the collapse of the Soviet Union and the peaceful end of the Cold War, the United States enjoyed a brief period of unrivaled global hegemony. During this period, U.S. officials pursued a closer relationship with the United Nations and sought to use the organization to build support for their international policy agenda and military interventionism.

Article

Since the late 19th century, the relationship between journalists and the makers of US foreign policy has been both cooperative and contentious. Reporters depend on government officials for information about policy decisions and their implementation. The White House, the State Department, and the Pentagon use the news media to build support for their policies and, at times, to communicate directly with allies and adversaries. Since World War I, presidential administrations have developed increasingly sophisticated methods to manage the news and influence public understanding of international affairs. Wartime censorship has been one tool of news management. Self-censorship, however, has also affected coverage of international affairs, as journalists have voluntarily refrained from publishing information for fear of impairing national security or undermining support for US wartime or Cold War policies. Allegations of bias and sensationalism became acrimonious during the Vietnam War and have continued to shape the debate about accurate, critical, and legitimate reporting. Arguments over “fake news,” which became commonplace during the presidency of Donald J. Trump, have many precursors, as both journalists and government officials have been responsible for misleading or distorted news coverage of international affairs since the Spanish–American War.

Article

Sworn in as the 33rd President of the United States following Franklin D. Roosevelt’s death in April 1945, Harry S. Truman faced the daunting tasks of winning the war and ensuring future peace and stability. Chided by critics for his lack of foreign policy experience but championed by supporters for his straightforward decision-making, Truman guided the United States from World War to Cold War. The Truman presidency marked a new era in American foreign relations, with the United States emerging from World War II unmatched in economic strength and military power. The country assumed a leadership position in a postwar world primarily shaped by growing antagonism with the Soviet Union. Truman pursued an interventionist foreign policy that took measures to contain Soviet influence in Europe and stem the spread of communism in Asia. Under his leadership, the United States witnessed the dawn of the atomic age, approved billions of dollars in economic aid to rebuild Europe, supported the creation of multilateral organizations such as the United Nations and North Atlantic Treaty Organization, recognized the state of Israel, and intervened in the Korean peninsula. The challenges Truman confronted and the policies he implemented laid the foundation for 20th-century US foreign relations throughout the Cold War and beyond.

Article

Franklin D. Roosevelt was US president in extraordinarily challenging times. The impact of both the Great Depression and World War II make discussion of his approach to foreign relations by historians highly contested and controversial. He was one of the most experienced people to hold office, having served in the Wilson administration as Assistant Secretary of the Navy, completed two terms as Governor of New York, and held a raft of political offices. At heart, he was an internationalist who believed in an engaged and active role for the United States in the world. During his first two terms as president, Roosevelt had to temper his international engagement in response to public opinion and politicians wanting to focus on domestic problems and wary of the risks of involvement in conflict. As the world crisis deepened in the 1930s, his engagement revived. He adopted a gradualist approach to educating the American people in the dangers facing their country and led them to eventual participation in war and a greater role in world affairs. There were clearly mistakes in his diplomacy along the way, and his leadership often appeared flawed, with an ambiguous legacy founded on political expediency, expanded executive power, vague idealism, and a chronic lack of clarity in preparing Americans for postwar challenges. Nevertheless, his policies to prepare the United States for the coming war saw his country emerge from years of depression to become an economic superpower. Likewise, his mobilization of his country’s enormous resources, support of key allies, and the holding together of a “Grand Alliance” in World War II not only brought victory but saw the United States become a dominant force in the world. Ultimately, Roosevelt’s idealistic vision, tempered with a sound appreciation of national power, would transform the global position of the United States and inaugurate what Henry Luce described as “the American Century.”

Article

Jennifer Hoyt

Relations between the United States and Argentina can best be described as a cautious embrace punctuated by moments of intense frustration. Although never the center of U.S.–Latin American relations, Argentina has attempted to create a position of influence in the region. As a result, the United States has worked with Argentina and other nations of the Southern Cone—the region of South America that comprises Uruguay, Paraguay, Argentina, Chile, and southern Brazil—on matters of trade and economic development as well as hemispheric security and leadership. While Argentina has attempted to assert its position as one of Latin America’s most developed nations and therefore a regional leader, the equal partnership sought from the United States never materialized for the Southern Cone nation. Instead, competition for markets and U.S. interventionist and unilateral tendencies kept Argentina from attaining the influence and wealth it so desired. At the same time, the United States saw Argentina as an unreliable ally too sensitive to the pull of its volatile domestic politics. The two nations enjoyed moments of cooperation in World War I, the Cold War, and the 1990s, when Argentine leaders could balance this particular external partnership with internal demands. Yet at these times Argentine leaders found themselves walking a fine line as detractors back home saw cooperation with the United States as a violation of their nation’s sovereignty and autonomy. There has always been potential for a productive partnership, but each side’s intransigence and unique concerns limited this relationship’s accomplishments and led to a historical imbalance of power.

Article

Chemical and biological weapons represent two distinct types of munitions that share some common policy implications. While chemical weapons and biological weapons are different in terms of their development, manufacture, use, and the methods necessary to defend against them, they are commonly united in matters of policy as “weapons of mass destruction,” along with nuclear and radiological weapons. Both chemical and biological weapons have the potential to cause mass casualties, require some technical expertise to produce, and can be employed effectively by both nation states and non-state actors. U.S. policies in the early 20th century were informed by preexisting taboos against poison weapons and the American Expeditionary Forces’ experiences during World War I. The United States promoted restrictions on the use of chemical and biological weapons through World War II, but increased its research and development work at the outset of the Cold War. In response to domestic and international pressures during the Vietnam War, the United States drastically curtailed its chemical and biological weapons programs and began supporting international arms control efforts such as the Biological and Toxin Weapons Convention and the Chemical Weapons Convention. U.S. chemical and biological weapons policies significantly influence U.S. policies in the Middle East and the fight against terrorism.

Article

Since the social sciences began to emerge as scholarly disciplines in the last quarter of the 19th century, they have frequently offered authoritative intellectual frameworks that have justified, and even shaped, a variety of U.S. foreign policy efforts. They played an important role in U.S. imperial expansion in the late 19th and early 20th centuries. Scholars devised racialized theories of social evolution that legitimated the confinement and assimilation of Native Americans and endorsed civilizing schemes in the Philippines, Cuba, and elsewhere. As attention shifted to Europe during and after World War I, social scientists working at the behest of Woodrow Wilson attempted to engineer a “scientific peace” at Versailles. The desire to render global politics the domain of objective, neutral experts intensified during World War II and the Cold War. After 1945, the social sciences became increasingly central players in foreign affairs, offering intellectual frameworks—like modernization theory—and bureaucratic tools—like systems analysis—that shaped U.S. interventions in developing nations, guided nuclear strategy, and justified the increasing use of the U.S. military around the world. Throughout these eras, social scientists often reinforced American exceptionalism—the notion that the United States stands at the pinnacle of social and political development, and as such has a duty to spread liberty and democracy around the globe. The scholarly embrace of conventional political values was not the result of state coercion or financial co-optation; by and large social scientists and policymakers shared common American values. But other social scientists used their knowledge and intellectual authority to critique American foreign policy. The history of the relationship between social science and foreign relations offers important insights into the changing politics and ethics of expertise in American public policy.

Article

For nearly a decade, American combat soldiers fought in South Vietnam to help sustain an independent, noncommunist nation in Southeast Asia. After U.S. troops departed in 1973, the collapse of South Vietnam in 1975 prompted a lasting search to explain the United States’ first lost war. Historians of the conflict and participants alike have since critiqued the ways in which civilian policymakers and uniformed leaders applied—some argued misapplied—military power that led to such an undesirable political outcome. While some claimed U.S. politicians failed to commit their nation’s full military might to a limited war, others contended that most officers fundamentally misunderstood the nature of the war they were fighting. Still others argued “winning” was essentially impossible given the true nature of a struggle over Vietnamese national identity in the postcolonial era. On their own, none of these arguments fully satisfy. Contemporary policymakers clearly understood the difficulties of waging a war in Southeast Asia against an enemy committed to national liberation. Yet the faith of these Americans in their power to resolve deep-seated local and regional sociopolitical problems eclipsed the possibility that there might be limits to that power. By asking military strategists to fight a war and build a nation simultaneously, senior U.S. policymakers demanded too much of military strategy, expecting it to deliver on overly ambitious political objectives. In the end, the Vietnam War exposed the limits of what American military power could achieve in the Cold War era.

Article

In December 1979, Soviet troops entered the small, poor, landlocked, Islamic nation of Afghanistan, assassinated the communist president, Hafizullah Amin, and installed a more compliant Afghan leader. For almost ten years, Soviet troops remained entrenched in Afghanistan before finally withdrawing in February 1989. During this period, the United States undertook a covert program to assist the anti-communist Afghan insurgents—the mujahideen—to resist the Soviet occupation. Beginning with President Jimmy Carter’s small-scale authorization in July 1979, the secret war became the largest in history under President Ronald Reagan, running up to $700 million per year. The Central Intelligence Agency (CIA) acted as the war’s quartermaster, arranging supplies of weapons for the mujahideen, which were funneled through Pakistan’s Inter-Services Intelligence directorate (ISI), in coordination with Saudi Arabia, China, Egypt, and others. No Americans were directly involved in the fighting, and the overall cost to the American taxpayer was in the region of $2 billion. The Afghan cost was much higher. Over a million Afghans were killed, a further two million wounded, and over six million refugees fled to neighboring Pakistan and Iran. For the Soviet Union, the ten-year war constituted its largest military action in the postwar era, and the protracted nature of the conflict and the failure of the Red Army to subdue the Afghans were partially responsible for the internal turmoil that contributed to the eventual breakup of the Soviet empire at the end of the 1980s. The defeat of the Soviet 40th Army in Afghanistan proved to be the final major superpower battle of the Cold War, but it also marked the beginning of a new era. The devastation and radicalization of Afghan society resulted in decades of continued conflict and warfare and the rise of militant Islamic fundamentalism that has shaped the post-Cold War world.

Article

Canada has sometimes been called the United States’ attic: a useful feature, but one easily forgotten. Of all countries, it has historically resembled the United States the most closely, in terms of culture, geography, economy, society, politics, ideology and, especially, history. A shared culture—literary, social, legal, and political—is a crucial factor in Canadian-American relations. Geography is at least as important. It provides the United States with strategic insulation to the north and enhances geographic isolation to the east and west. North-south economic links are inevitable and very large. Canada has been a major recipient of American investment, and for most of the time since 1920 has been the United States’ principal trading partner. Prosperous and self-sufficient, it has seldom required American aid. There have been no overtly hostile official encounters since the end of the War of 1812, partly because many Americans tended to believe that Canadians would join the republic; when that did not occur, the United States accepted an independent but friendly Canada as a permanent, useful, and desirable neighbor—North America’s attic. The insulation the attic provided was a common belief in the rule of law, both domestic and international; liberal democracy; a federal constitution; liberal capitalism; and liberal international trade regimes. That said, the United States, with its large population, huge economy, and military power, insulates Canada from hostile external forces. An attack on Canada from outside the continent is hard to imagine without a simultaneous attack on the United States. Successive American and Canadian governments have reaffirmed the political status quo while favoring mutually beneficial economic and military linkages—bilateral and multilateral. Relations have traditionally been grounded in a negotiating style that is evidence-based, proceeding issue by issue. A sober diplomatic and political context sometimes frames irritations and exclamations, but even these have usually been defined and limited by familiarity. For example, there has always been anti-Americanism in Canada. Most often it consists of sentiments derived from the United States itself, channeled by cultural similarities. No American idea, good or bad, from liberalism to populism, fails to find an echo in Canada. How loud or how soft the echo makes the difference.

Article

In 1835, Alexis de Tocqueville argued in Democracy in America that there were “two great nations in the world.” They had started from different historical points but seemed to be heading in the same direction. As expanding empires, they faced the challenges of defeating nature and constructing a civilization for the modern era. Although they adhered to different governmental systems, “each of them,” de Tocqueville declared, “seems marked out by the will of Heaven to sway the destinies of half the globe.” De Tocqueville’s words were prophetic. In the 19th century, Russian and American intellectuals and diplomats struggled to understand the roles that their countries should play in the new era of globalization and industrialization. Despite their differing understandings of how development should happen, both sides believed in their nation’s vital role in guiding the rest of the world. American adherents of liberal developmentalism often argued that a free flow of enterprise, trade, investment, information, and culture was the key to future growth. They held that the primary obligation of American foreign policy was to defend that freedom by pursuing an “open door” policy and free access to markets. They believed that the American model would work for everyone and that the United States had an obligation to share its system with the old and underdeveloped nations around it. A similar sense of mission developed in Russia. Russian diplomats had for centuries struggled to establish defensive buffers around the periphery of their empire. They had linked economic development to national security, and they had argued that their geographic expansion represented a “unification” of peoples as opposed to a conquering of them. In the 19th century, after the Napoleonic Wars and the failed Decembrist Revolution, tsarist policymakers fought to defend autocracy, orthodoxy, and nationalism from domestic and international critics. As in the United States, Imperial and later Soviet leaders envisioned themselves as the emissaries of the Enlightenment to the backward East and as protectors of tradition and order for the chaotic and revolutionary West. These visions of order clashed in the 20th century as the Soviet Union and the United States became superpowers. Conflicts began early, with the American intervention in the 1918–1921 Russian civil war. Tensions that had previously been based on differing geographic and strategic interests then assumed an ideological valence, as the fight between East and West became a struggle between the political economies of communism and capitalism. Foreign relations between the two countries experienced boom and bust cycles that took the world to the brink of nuclear holocaust and yet maintained a strategic balance that precluded the outbreak of global war for fifty years. This article will examine how that relationship evolved and how it shaped the modern world.

Article

Leilah Danielson

Peace activism in the United States between 1945 and the 2010s focused mostly on opposition to U.S. foreign policy, efforts to strengthen and foster international cooperation, and support for nuclear nonproliferation and arms control. The onset of the Cold War between the United States and the Soviet Union marginalized a reviving postwar American peace movement emerging from concerns about atomic and nuclear power and worldwide nationalist politics that everywhere seemed to foster conflict, not peace. Still, peace activism continued to evolve in dynamic ways and to influence domestic politics and international relations. Most significantly, peace activists pioneered the use of Gandhian nonviolence in the United States and provided critical assistance to the African American civil rights movement, led the postwar antinuclear campaign, played a major role in the movement against the war in Vietnam, helped to move the liberal establishment (briefly) toward a more dovish foreign policy in the early 1970s, and helped to shape the political culture of American radicalism. Despite these achievements, the peace movement never regained the political legitimacy and prestige it held in the years before World War II, and it struggled with internal divisions about ideology, priorities, and tactics. Histories of peace activism in the 20th century tended to emphasize organizational or biographical approaches that sometimes carried hagiographic overtones. More recently, historians have applied the methods of cultural history, examining the role of religion, gender, and race in structuring peace activism. The transnational and global turn in the historical discipline has also begun to make inroads in peace scholarship. These are promising new directions because they situate peace activism within larger historical and cultural developments and relate peace history to broader historiographical debates and trends.

Article

Since the founding of the United States of America, coinciding with the height of the Atlantic slave trade, U.S. officials have based their relations with West Africa primarily on economic interests. Initially, these interests were established on the backs of slaves, as the Southern plantation economy quickly vaulted the United States to prominence in the Atlantic world. After the U.S. abolition of the slave trade in 1808, however, American relations with West Africa focused on the establishment of the American colony of Liberia as a place of “return” for formerly enslaved persons. Following the turn to “legitimate commerce” in the Atlantic and the U.S. Civil War, the United States largely withdrew from large-scale interaction with West Africa. Liberia remained the notable exception, where prominent Pan-African leaders like Edward Blyden, W. E. B. DuBois, and Marcus Garvey helped foster cultural and intellectual ties between West Africa and the Diaspora in the early 1900s. These ties to Liberia were deepened in the 1920s when Firestone Rubber Corporation of Akron, Ohio, established a long-term lease to harvest rubber. World War II marked a significant increase in American presence and influence in West Africa. American attention remained focused on Liberia, where the war years saw the construction of infrastructure that would prove essential to Allied war efforts and to American security interests during the Cold War. After 1945, the United States competed with the Soviet Union in West Africa for influence and access to important economic and national security resources as African nations ejected colonial regimes across most of the continent. West African independence quickly demonstrated a turn from nationalism to ethnic nationalism, as civil wars engulfed several countries in the postcolonial, and particularly the post-Cold War, era. After a decade of withdrawal, American interest in West Africa revived with the need for alternative sources of petroleum and concerns about transnational terrorism following the attacks of September 11, 2001.

Article

Andrew J. Gawthorpe

From 1965 to 1973, the United States attempted to prevent the absorption of the non-Communist state of South Vietnam by Communist North Vietnam as part of its Cold War strategy of containment. In doing so, the United States had to battle both the North Vietnamese military and guerrillas indigenous to South Vietnam. The Johnson administration entered the war without a well-thought-out strategy for victory, and the United States quickly became bogged down in a bloody stalemate. A major Communist assault in 1968 known as the Tet Offensive convinced US leaders of the need to seek a negotiated solution. This task fell to the Nixon administration, which carried on peace talks while simultaneously seeking ways to escalate the conflict and force North Vietnam to make concessions. Eventually it was Washington that made major concessions, allowing North Vietnam to keep its forces in the South and leaving South Vietnam in an untenable position. US troops left in 1973 and Hanoi successfully invaded the South in 1975. The two Vietnams were formally unified in 1976. The war devastated much of Vietnam and came at a huge cost to the United States in terms of lives, resources, and political division at home. It gave birth to the largest mass movement against a war in US history, motivated by opposition both to conscription and to the damage that protesters perceived the war was doing to the United States. It also raised persistent questions about the wisdom of both military intervention and nation-building as tools of US foreign policy. The war has remained a touchstone for national debate and partisan division even as the United States and Vietnam moved to normalize diplomatic relations with the end of the Cold War.

Article

After World War II, Okinawa was placed under U.S. military rule and administratively separated from mainland Japan. This occupation lasted from 1945 to 1972, and in these decades Okinawa became the “Keystone of the Pacific,” a leading strategic site in U.S. military expansionism in Asia and the Pacific. U.S. rule during this Cold War period was characterized by violence and coercion, resulting in a staggering scale of sexual violence against Okinawan women by U.S. military personnel. At the same time, the occupation also facilitated numerous cultural encounters between the occupiers and the occupied, leading to a flourishing cross-cultural grassroots exchange. A movement to establish American-style domestic science (i.e., home economics) in the occupied territory became a particularly important feature of this exchange, one that mobilized an assortment of women—home economists, military wives, club women, university students, homemakers—from the United States, Okinawa, and mainland Japan. The postwar domestic science movement turned Okinawa into a vibrant theater of Cold War cultural performance where women of diverse backgrounds collaborated to promote modern homemaking and build friendship across racial and national divides. As these women took their commitment to domesticity and multiculturalism into the larger terrain of the Pacific, they articulated the complex intertwining that occurred among women, domesticity, the military, and empire.

Article

Doug Rossinow

The decade of the 1980s represented a turning point in American history—a crucial era, marked by political conservatism and an individualistic ethos. The 1980s also witnessed a dramatic series of developments in U.S. foreign relations, first an intensification of the Cold War with the Soviet Union and then a sudden relaxation of tensions and the effective end of the Cold War with an American victory. All of these developments were advanced and symbolized in the presidential administration of Ronald Reagan (1981–1989), a polarizing figure but a highly successful political leader. Reagan dominates memories of the 1980s as few other American leaders dominate the memory of their eras. Reagan and the political movement he led—Reaganism—are central to the history of the 1980s. Both their successes and their failures, which became widely acknowledged in the later years of the decade, should be noted. Reaganite conservatives won political victories by rolling back state power in many realms, most of all in terms of taxation and regulation. They also succeeded in putting America at the unquestioned pinnacle of the world order through a victory over the Soviet Union in the Cold War, although this was unforeseen by America’s Cold Warriors when the 1980s began. The failures of Reaganite conservatism include its handling of rising poverty levels, the HIV/AIDS crisis, and worsening racial tensions, all problems that Reaganites either did little to stem or actively contributed to. In foreign affairs, Reaganites pursued a “war on terror” of questionable success, and their approach to Third World arenas of conflict, including Central America, exacted a terrible human toll.

Article

Two images dominated popular portrayals of American women in the 1950s. One was the fictional June Cleaver, the female lead character in the popular television program “Leave It to Beaver,” which portrayed Cleaver as the stereotypical happy American housewife, the exemplar of postwar American domesticity. The other was Cleaver’s alleged real-life opposite, described in Betty Friedan’s The Feminine Mystique (1963) as miserable, bored, isolated, addicted to tranquilizers, and trapped in look-alike suburban tract houses, which Friedan termed “comfortable concentration camps.” Both stereotypes ignore significant proportions of the postwar female population and offer simplistic, partial views of domesticity, but both reveal the depth of the influence that lay behind the idea of domesticity, real or fictional. Aided and abetted by psychology, social science theory, advertising, popular media, government policy, law, and discriminatory private sector practices, domesticity was both a myth and a powerful ideology that shaped the trajectories of women’s lives.