Jason C. Parker
The decolonization of the European overseas empires had its intellectual roots early in the modern era, but its culmination occurred during the Cold War that loomed large in post-1945 international history. This culmination thus coincided with the American rise to superpower status and presented the United States with a dilemma. While the United States was philosophically sympathetic to the aspirations of anticolonial nationalist movements abroad, its vastly greater postwar global security burdens made it averse to the instability that decolonization might bring and that communists might exploit. This fear, and the need to share those burdens with European allies who were themselves still colonial landlords, led Washington to proceed cautiously. The three “waves” of the decolonization process—medium-sized in the late 1940s, large in the half-decade around 1960, and small in the mid-1970s—prompted the American use of a variety of tools and techniques to influence how it unfolded.
Prior to independence, this influence was usually channeled through the metropolitan authority then winding down its rule. After independence, Washington continued and often expanded the use of these tools, in most cases on a bilateral basis. In some theaters, such as Korea, Vietnam, and the Congo, certain of these tools, notably covert espionage or overt military operations, allowed Cold War dynamics to envelop, intensify, and ultimately subsume local decolonization struggles. In most theaters, other tools, such as traditional or public diplomacy or economic or technical development aid, kept the Cold War in the background as a local transition unfolded. In all cases, the overriding American imperative was to minimize instability and neutralize actors on the ground who might invite communist gains.
U.S. imperialism took a variety of forms in the early 20th century, ranging from colonies in Puerto Rico and the Philippines to protectorates in Cuba, Panama, and other countries in Latin America, and open door policies such as that in China. Formal colonies were ruled by U.S.-appointed colonial governors and supported by U.S. troops. Protectorates and open door policies promoted business expansion overseas through American oversight of foreign governments and, in the case of threats to economic and strategic interests, the deployment of U.S. marines. In all of these imperial forms, U.S. empire-building both reflected and shaped complex social, cultural, and political histories with ramifications for both foreign nations and America itself.
Leopoldo Nuti and Daniele Fiorentino
Relations between Italy and the United States have gone through different stages, from the early process of nation-building during the 18th and the 19th centuries, to the close diplomatic and political alignment of the Cold War and the first two decades of the 21st century. Throughout these two and a half centuries, relations between the two states occasionally experienced some difficult moments—from the tensions connected to the mass immigration of Italians to the United States at the end of the 19th century, to the diplomatic clash at the Versailles Peace Conference at the end of World War I, culminating with the declaration of war by the Fascist government in December 1941. By and large, however, Italy and the United States have mostly enjoyed a strong relationship based on close cultural, economic, and political ties.
Jennifer M. Miller
Over the past 150 years, the United States and Japan have developed one of the United States’ most significant international relationships, marked by a potent mix of cooperation and rivalry. After a devastating war, these two states built a lasting alliance that stands at the center of US diplomacy, security, and economic policy in the Pacific and beyond. Yet this relationship is not simply the product of economic or strategic calculations. Japan has repeatedly shaped American understandings of empire, hegemony, race, democracy, and globalization, because these two states have often developed in remarkable parallel with one another. From the edges of the international order in the 1850s and 1860s, both entered a period of intense state-building at home and imperial expansion abroad in the late 19th and early 20th centuries. These imperial ambitions violently collided in the 1940s in an epic contest to determine the Pacific geopolitical order. After its victory in World War II, the United States embarked on an unprecedented occupation designed to transform Japan into a stable and internationally cooperative democracy. The two countries also forged a diplomatic and security alliance that offered crucial logistical, political, and economic support to the United States’ Cold War quest to prevent the spread of communism. In the 1970s and 1980s, Japan’s rise as the globe’s second-largest economy caused significant tension in this relationship and forced Americans to confront the changing nature of national power and economic growth in a globalizing world. However, in recent decades, rising tensions in the Asia-Pacific have served to focus this alliance on the construction of a stable trans-Pacific economic and geopolitical order.
Thomas I. Faith
Chemical and biological weapons represent two distinct types of munitions that share some common policy implications. While chemical weapons and biological weapons are different in terms of their development, manufacture, use, and the methods necessary to defend against them, they are commonly united in matters of policy as “weapons of mass destruction,” along with nuclear and radiological weapons. Both chemical and biological weapons have the potential to cause mass casualties, require some technical expertise to produce, and can be employed effectively by both nation states and non-state actors. U.S. policies in the early 20th century were informed by preexisting taboos against poison weapons and the American Expeditionary Forces’ experiences during World War I. The United States promoted restrictions on the use of chemical and biological weapons through World War II, but increased research and development work at the outset of the Cold War. In response to domestic and international pressures during the Vietnam War, the United States drastically curtailed its chemical and biological weapons programs and began supporting international arms control efforts such as the Biological and Toxin Weapons Convention and the Chemical Weapons Convention. U.S. chemical and biological weapons policies continue to significantly influence U.S. policies in the Middle East and the fight against terrorism.
David P. Fields
The United States and the Kingdom of Joseon (Korea) established formal diplomatic relations after signing a “Treaty of Peace, Commerce, Amity, and Navigation” in 1882. Relations between the two states were not close, and the United States closed its legation in 1905 after Japan made Korea a protectorate in the wake of the Russo-Japanese War. No formal relations existed for the following forty-four years, but American interest in Korea grew following the 1907 Pyongyang Revival and the rapid growth of Christianity there. Activists in the Korean independence movement kept the issue of Korea alive in the United States, especially during World War I and World War II, and pressured the American government to support the re-emergence of an independent Korea. Their activism, as well as a distrust of the Soviet Union, was among the factors that spurred the United States to suggest the joint occupation of the Korean peninsula in 1945, which subsequently led to the creation of the Republic of Korea (ROK) in the American zone and the Democratic People’s Republic of Korea (DPRK) in the Soviet zone. The United States withdrew from the ROK in 1948 only to return in 1950 to thwart the DPRK’s attempt to reunite the peninsula by force during the Korean War. The war ended in stalemate, with an armistice agreement in 1953. In the same year the United States and the ROK signed a military alliance, and American forces have remained on the peninsula ever since. While the United States has enjoyed close political and security relations with the ROK, formal diplomatic relations have never been established between the United States and the DPRK, and the relationship between the two has been marked by increasing tensions over the latter’s nuclear program since the early 1990s.
The relationship between the United States and Saudi Arabia has shaped the history of both countries. Soon after the Saudi kingdom was founded in 1932, American geologists discovered enormous oil reserves near the Persian Gulf. Oil-driven development transformed Saudi society. Many Americans came to work in Saudi Arabia, while thousands of Saudis studied and traveled in the United States. During the mid-20th century, the American-owned oil company Aramco and the US government worked to strengthen the Saudi regime and empower conservative forces in the kingdom—not only to protect American oil interests, but also to suppress nationalist and leftist movements in Saudi Arabia and elsewhere in the Middle East. The partnership was complicated by disagreement over Israel, triggering an Arab oil embargo against the United States in 1973–1974. During the 1970s, Saudi Arabia became the world’s largest oil exporter, nationalized Aramco, and benefited from surging oil prices. In partnership with the United States, it used its new wealth at home to launch a huge economic development program, and abroad to subsidize political allies like the Afghan mujahideen. The United States led a massive military operation to expel Iraqi forces from Kuwait in 1990–1991, protecting the Saudi regime but angering Saudis who opposed their government’s close relationship with the United States. One result was the rise of Osama bin Laden’s al-Qaeda network and the 9/11 attacks, carried out by a largely Saudi group of hijackers. Despite public opposition on both sides, after 2001 the United States and Saudi Arabia continued their commercial relationship and their political partnership, originally directed against the Soviet Union and Nasser’s Egypt, and later increasingly aimed at Iran.
In the early 20th century, West Virginia coal miners and mine operators fought a series of bloody battles that raged for two decades and prompted national debates over workers’ rights. Miners in the southern part of the state lived in towns wholly owned by coal companies and attempted to join the United Mine Workers of America (UMWA) to negotiate better working conditions but most importantly to restore their civil liberties. Mine operators saw unionization as a threat to their businesses and rights and hired armed guards to patrol towns and prevent workers from organizing. The operators’ allies in local and state government used their authority to help break strikes by sending troops to strike districts, declaring martial law, and jailing union organizers in the name of law and order. Observers around the country were shocked at the levels of violence as well as the conditions that fueled the battles. The Mine Wars include the Paint Creek–Cabin Creek Strike of 1912–1913, the so-called 1920 Matewan Massacre, the 1920 Three Days Battle, and the 1921 Battle of Blair Mountain. In this struggle over unionism, the coal operators prevailed, and West Virginia miners continued to work in nonunion mines and live in company towns through the 1920s.
An ungainly word, it has proven tenacious. Since the early Cold War, “Wilsonianism” has been employed by historians and analysts of US foreign policy to denote two historically related but ideologically and operationally distinct approaches to world politics. One is the foreign policy of the term’s eponym, President Woodrow Wilson, during and after World War I—in particular his efforts to engage the United States and other powerful nations in the cooperative maintenance of order and peace through a League of Nations. The other is the tendency of later administrations and political elites to deem an assertive, interventionist, and frequently unilateralist foreign policy necessary to advance national interests and preserve domestic institutions. Both versions of Wilsonianism have exerted massive impacts on US and international politics and culture. Yet both remain difficult to assess or even define. As historical phenomena they are frequently conflated; as philosophical labels they are ideologically freighted. Perhaps the only consensus is that the term implies the US government’s active rather than passive role in the international order.
It is nevertheless important to distinguish Wilson’s “Wilsonianism” from certain doctrines and practices later attributed to him or traced to his influence. The major reasons are two. First, misconceptions surrounding the aims and outcomes of Wilson’s international policies continue to distort historical interpretation in multiple fields, including American political, cultural, and diplomatic history and the history of international relations. Second, these distortions encourage the conflation of Wilsonian internationalism with subsequent yet distinct developments in American foreign policy. The confused result promotes ideological over historical readings of the nation’s past, which in turn constrain critical and creative thinking about its present and future as a world power.
In the United States, the history of sexual assault in the first half of the 20th century involves multiple contradictions between the ordinary, almost invisible accounts of women of all colors who were raped by fathers, husbands, neighbors, boarders, bosses, hired hands, and other known individuals versus the sensational myths that involved rapacious black men, sly white slavers, libertine elites, and virginal white female victims. Much of the debate about sexual assault revolved around the “unwritten law” that justified “honorable” white men avenging the “defilement” of their women. In both the North and the South, white people defended lynching and the murder of presumed rapists as “honor killings.” In courtrooms, defense attorneys linked the unwritten law to insanity pleas, arguing that after hearing women tell about their assault, husbands and fathers experienced an irresistible compulsion to avenge the rape of their women. Over time, however, notorious court cases from New York and San Francisco to Indianapolis, Honolulu, and Scottsboro, Alabama, shifted the discourse away from the unwritten law and extralegal “justice” to a more complicated script that demonized unreliable women and absolved imperfect men. National coverage of these cases, made possible by wire services and the Hearst newspaper empire, spurred heated debates concerning the proper roles of men and women. Blockbuster movies like The Birth of a Nation and Gone with the Wind and Book of the Month Club selections such as John Steinbeck’s Of Mice and Men and Richard Wright’s Native Son joined the sensationalized media coverage of high-profile court cases to create new national stereotypes about sexual violence and its causes and culprits. During the 1930s, journalists, novelists, playwrights, and moviemakers increasingly emphasized the culpability of women who, according to this narrative, made themselves vulnerable to assault by stepping outside of their appropriate sphere and tempting men into harming them.
Melissa A. McEuen
The Second World War changed the United States for women, and women in turn transformed their nation. Over three hundred fifty thousand women volunteered for military service, while twenty times as many stepped into civilian jobs, including positions previously closed to them. More than seven million women who had not been wage earners before the war joined eleven million women already in the American work force. Between 1941 and 1945, an untold number moved away from their hometowns to take advantage of wartime opportunities, but many more remained in place, organizing home front initiatives to conserve resources, to build morale, to raise funds, and to fill jobs left by men who entered military service.
The U.S. government, together with the nation’s private sector, instructed women on many fronts and carefully scrutinized their responses to the wartime emergency. The foremost message to women—that their activities and sacrifices would be needed only “for the duration” of the war—was both a promise and an order, suggesting that the war and the opportunities it created would end simultaneously. Social mores were tested by the demands of war, allowing women to benefit from the shifts and make alterations of their own. Yet dominant gender norms provided ways to maintain social order amidst fast-paced change, and when some women challenged these norms, they faced harsh criticism. Race, class, sexuality, age, religion, education, and region of birth, among other factors, combined to limit opportunities for some women while expanding them for others.
However temporary and unprecedented the wartime crisis, American women would find that their individual and collective experiences from 1941 to 1945 prevented them from stepping back into a prewar social and economic structure. By stretching and reshaping gender norms and roles, World War II and the women who lived it laid solid foundations for the various civil rights movements that would sweep the United States and grip the American imagination in the second half of the 20th century.
After World War II, Okinawa was placed under U.S. military rule and administratively separated from mainland Japan. This occupation lasted from 1945 to 1972, and in these decades Okinawa became the “Keystone of the Pacific,” a leading strategic site in U.S. military expansionism in Asia and the Pacific. U.S. rule during this Cold War period was characterized by violence and coercion, resulting in an especially staggering scale of sexual violence against Okinawan women by U.S. military personnel. At the same time, the occupation also facilitated numerous cultural encounters between the occupiers and the occupied, leading to a flourishing cross-cultural grassroots exchange. A movement to establish American-style domestic science (i.e., home economics) in the occupied territory became a particularly important feature of this exchange, one that mobilized an assortment of women—home economists, military wives, club women, university students, homemakers—from the United States, Okinawa, and mainland Japan. The postwar domestic science movement turned Okinawa into a vibrant theater of Cold War cultural performance where women of diverse backgrounds collaborated to promote modern homemaking and build friendship across racial and national divides. As these women took their commitment to domesticity and multiculturalism into the larger terrain of the Pacific, they revealed the complex intertwining of women, domesticity, the military, and empire.
Dana M. Caldemeyer
Unlike the anti-unionism that runs through the ranks of employers, worker anti-unionism describes workers who oppose or work against unionization. Anti-union actions can be seen throughout the United States from the early industrial age forward and range from refusing to join the union or follow union orders to actively working against it, as in strikebreaking. Workers’ reasons for acting against the union, however, are far more complex, including the economic gains that come from remaining outside the union, moral opposition to unionism, and spite against the union. The variations in workers’ reasons for rejecting the union, then, provide insight into how workers define their place in society as well as their relationship with the union.
Zoning is a legal tool employed by local governments to regulate land development. It determines the use, intensity, and form of development in localities through enforcement of the zoning ordinance, which consists of a text and an accompanying map that divides the locality into zones. Zoning is an exercise of the police powers by local governments, typically authorized through state statutes. Components of what became part of the zoning process emerged piecemeal in U.S. cities during the 19th century in response to development activities deemed injurious to the health, safety, and welfare of the community. American zoning was influenced by and drew upon models already in place in German cities early in the 20th century. Following the First National Conference on City Planning and the Problems of Congestion, held in Washington, DC in 1909, the zoning movement spread throughout the United States. The first attempt to apply a version of the German zoning model to a U.S. city was in New York City in 1916. In the landmark U.S. Supreme Court case Village of Euclid v. Ambler Realty Co. (1926), zoning was ruled a constitutional exercise of the police power, a precedent-setting decision that defined the parameters of land use regulation for the remainder of the 20th century.
Zoning was explicitly intended to sanction regulation of real property use to serve the public interest, but frequently, it was used to facilitate social and economic segregation. This was most often accomplished by controlling the size and type of housing, where high-density housing (for lower-income residents) could be placed in relation to commercial and industrial uses, and in some cases through explicit use of racial zoning categories for zones. The U.S. Supreme Court ruled, in Buchanan v. Warley (1917), that a racial zoning plan of the city of Louisville, Kentucky violated the due process clause of the 14th Amendment. The decision, however, did not directly address the discriminatory aspects of the law. As a result, efforts to fashion legally acceptable racial zoning schemes persisted late into the 1920s. These were succeeded by the use of restrictive covenants to prohibit black (and other minority) occupancy in certain white neighborhoods (until declared unconstitutional in the late 1940s). More widespread was the use of highly differentiated residential zoning schemes and real estate steering that embedded racial and ethnic segregation into the residential fabric of American communities.
The Standard State Zoning Enabling Act (SSZEA) of 1924 facilitated zoning. Disseminated by the U.S. Department of Commerce, the SSZEA created a relatively uniform zoning process in U.S. cities, although depending upon their size and functions, there were definite differences in the complexity and scope of zoning schemes. Localities followed the basic form prescribed by the SSZEA largely to minimize the chance of their zoning ordinances being struck down by the courts. Nonetheless, from the 1920s through the 1970s, thousands of court cases tested aspects of zoning, but only a few reached the federal courts, and typically, zoning advocates prevailed.
In the 1950s and 1960s, critics charged that zoning had produced the fragmented city as an unintended consequence. This critique was a response to concerns that zoning created artificial separations among the various types of development in cities, and that this undermined their vitality. Zoning nevertheless remained a cornerstone of U.S. urban and suburban land regulation, and new techniques such as planned unit developments, overlay zones, and form-based codes introduced needed flexibility to reintegrate urban functions previously separated by conventional zoning approaches.