61–80 of 191 Results for: 20th Century: Post-1945

Article

Stephen Mandrgoc and David Dunaway

During its existence from 1926 to its formal decommissioning in 1985, US Highway 66, or Route 66, came to occupy a special place in the American imagination. For a half-century and more, it symbolized American individualism, travel, and the freedom of the open road amid the transformative rise of America’s automobile culture. Route 66 was an essential connection between the Midwest and the West for American commercial, military, and civilian transportation. It linked together small towns and cities across the nation as America’s “Main Street.” Following the path of older trails and railroads, Route 66 hosted travelers in many different eras: the adventurous motorist in his Ford Model A in the 1920s, the Arkies and Okies desperate for a new start in California in the 1930s, trucks carrying wartime soldiers and supplies in the 1940s, and postwar tourists and travelers from the 1950s onward. By its nature, it brought together the diverse cultures of different regions, introducing Americans to the “others” who were their regional neighbors and exposing travelers to new arts, music, foods, and traditions. It became firmly embedded in pop culture through songs, books, television, and advertisements for its attractions as America’s most famous road. Travel on Highway 66 steadily declined with the development of controlled-access interstate highways in the 1960s and 1970s. The towns and cities it connected, and the many businesses and attractions dependent on its traffic and tourism, protested the removal of the highway designation by the US Department of Transportation in 1985, but their efforts failed. Nonetheless, revivalists who treasured the old road worked to preserve the road sections and attractions that remained, founded a wide variety of organizations, and donated to museums and libraries to preserve Route 66 ephemera. In the early 21st century, Route 66 is an international icon of America, traveled by fans from all over the world.

Article

Kathryn Cramer Brownell

Hollywood has always been political. Since its early days, it has intersected with national, state, and local politics. The Jewish leaders of a new entertainment industry attempting to gain a footing in a society on whose outskirts it sat worked hard to advance the merits of their industry to a Christian political establishment. At the local and state level, film producers faced threats of censorship and potential regulation of the more democratic spaces their theaters provided for immigrant and working-class patrons. As Hollywood gained economic and cultural influence, the political establishment took note, attempting to shape silver screen productions and deploy Hollywood’s publicity innovations for its own purposes. Over the course of the 20th century, industry leaders forged political connections with politicians from both parties to promote their economic interests, and politically motivated actors, directors, writers, and producers across the ideological spectrum used their entertainment skills to advance ideas and messages on and off the silver screen. At times this collaboration generated enthusiasm for its ability to bring new citizens into the electoral process. At other times, however, it drew intense criticism, and fears abounded that entertainment would undermine the democratic process with a focus on style over substance. As Hollywood personalities entered the political realm—for personal, professional, and political gain—the industry slowly reshaped American political life, bringing entertainment, glamor, and emotion to the political process and transforming how Americans communicate with their elected officials and, indeed, how they view their political leaders.

Article

In its formulation of foreign policy, the United States takes account of many priorities and factors, including national security concerns, economic interests, and alliance relationships. An additional factor, whose significance has risen and fallen over time, is human rights, or more specifically violations of human rights. The extent to which the United States should consider such abuses or seek to moderate them has been and continues to be the subject of considerable debate.

Article

The Immigration Act of 1924 was in large part the result of a deep political and cultural divide in America between heavily immigrant cities and far less diverse small towns and rural areas. The 1924 legislation, together with growing residential segregation, midcentury federal urban policy, and postwar suburbanization, undermined scores of ethnic enclaves in American cities between 1925 and the 1960s. The deportation of Mexicans and their American children during the Great Depression, the incarceration of West Coast Japanese Americans during World War II, and the wartime and postwar shift of so many jobs to suburban and Sunbelt areas also reshaped many US cities in these years. The Immigration Act of 1965, which enabled the immigration of large numbers of people from Asia, Latin America, and, eventually, Africa, helped to revitalize many depressed urban areas and inner-ring suburbs. In cities and suburbs across the country, the response to the new immigration since 1965 has ranged from welcoming to hostile. The national debate over immigration in the early 21st century reflects both familiar and newer cultural, linguistic, religious, racial, and regional rifts. However, urban areas with a history of immigrant incorporation remain the most politically supportive of immigrants, just as they were a century ago.

Article

Post-1945 immigration to the United States differed dramatically from America’s earlier 20th- and 19th-century immigration patterns, most notably in the sharp rise in the number of immigrants from Asia. Beginning in the late 19th century, the U.S. government took steps to bar immigration from Asia. The establishment of the national origins quota system in the 1924 Immigration Act narrowed the entryway for eastern and central Europeans, making western Europe the dominant source of immigrants. These policies shaped the racial and ethnic profile of the American population before 1945. Signs of change began to appear during and after World War II. The recruitment of temporary agricultural workers from Mexico led to an influx of Mexicans, and the repeal of Asian exclusion laws opened the door for Asian immigrants. Responding to complex international politics during the Cold War, the United States also formulated a series of refugee policies, admitting refugees from Europe, the Western Hemisphere, and later Southeast Asia. The movement of people to the United States increased drastically after 1965, when immigration reform ended the national origins quota system. The intricate and intriguing history of U.S. immigration after 1945 thus demonstrates how the United States related to a fast-changing world, its less restrictive immigration policies increasing the fluidity of the American population, with a substantial impact on American identity and domestic policy.

Article

Laurie Arnold

Indian gaming, also called Native American casino gaming or tribal gaming, is tribal government gaming. It is government gaming built on sovereignty and consequently is a corollary to state gambling such as lotteries rather than to corporate gaming. While the types of games offered in casinos might differ in format from ancestral Indigenous games, gaming itself is a cultural tradition in many tribes, including those that operate casino gambling. Native American casino gaming is a $33.7 billion industry operated by nearly 250 distinct tribes in twenty-nine US states. The Indian Gaming Regulatory Act (IGRA) of 1988 provides the framework for tribal gaming. The most important case law in Indian gaming remains Seminole Tribe of Florida v. Butterworth, decided by the US Court of Appeals for the Fifth Circuit, and the US Supreme Court decision in California v. Cabazon Band of Mission Indians.

Article

The United States has engaged with Indigenous nations on a government-to-government basis via federal treaties representing substantial international commitments since the origins of the republic. The first treaties sent to the Senate for ratification under the Constitution of 1789 were treaties with Indigenous nations. Treaties with Indigenous nations provided the means by which approximately one billion acres of land entered the national domain of the United States prior to 1900, at an average price of seventy-five cents per acre; the United States confiscated or claimed another billion acres of Indigenous land without compensation. Despite subsequent efforts of American federal authorities to alter these arrangements, the weight of evidence indicates that the relationship remains primarily one of a nation-to-nation association. Integrating the history of federal relations with Indigenous nations into the history of American foreign relations sheds important new light on the fundamental linkages between these seemingly distinct state practices from the beginnings of the American republic.

Article

Perhaps the most important radical labor union in U.S. history, the Industrial Workers of the World (IWW) continues to attract workers, in and beyond the United States. The IWW was founded in 1905 in Chicago—at that time, the greatest industrial city in a country that had become the world’s mightiest economy. Due to the nature of industrial capitalism in what had already become a global economy, the IWW and its ideals quickly became a worldwide phenomenon. The Wobblies, as members were and still are affectionately known, never were as numerically large as mainstream unions, but their influence, particularly from 1905 into the 1920s, was enormous. The IWW captured the imaginations of countless rebellious workers with its fiery rhetoric, daring tactics, and commitment to revolutionary industrial unionism. The IWW pledged to replace the “bread and butter” craft unionism of the larger, more mainstream American Federation of Labor (AFL) with massive industrial unions strong enough to take on ever-larger corporations and, ultimately, to overthrow capitalism and replace it with a society based upon people rather than profit. In the United States, the union grew in numbers and reputation, before and during World War I, by organizing workers neglected by other unions—immigrant factory workers in the Northeast and Midwest, migratory farmworkers in the Great Plains, and mine, timber, and harvest workers out West. Unlike most other unions of that era, the IWW welcomed immigrants, women, and people of color; indeed, most U.S. institutions excluded African Americans and darker-skinned immigrants as well as women, making the IWW among the most radically inclusive institutions in the country and the world. Wobbly ideas, members, and publications soon spread beyond the United States—first to Mexico and Canada, then into the Caribbean and Latin America, and to Europe, southern Africa, and Australasia in rapid succession. The expansion of the IWW and its ideals across the world in under a decade is a testament to the passionate commitment of its members. It also speaks to the immense popularity of anticapitalist tendencies that had more in common with anarchism than with social democracy. However, the IWW’s revolutionary program and class-war rhetoric yielded more enemies than allies, including governments, which proved devastating during and after World War I, though the union soldiered on. Even in 2020, the ideals the IWW espoused continued to resonate among a small but growing and vibrant group of workers worldwide.

Article

Jamie L. Pietruska

The term “information economy” first came into widespread usage during the 1960s and 1970s to identify a major transformation in the postwar American economy in which manufacturing had been eclipsed by the production and management of information. However, the information economy first identified in the mid-20th century was one of many information economies that have been central to American industrialization, business, and capitalism for over two centuries. The emergence of information economies can be understood in two ways: as a continuous process in which information itself became a commodity, and as an uneven and contested—not inevitable—process in which economic life became dependent on various forms of information. The production, circulation, and commodification of information has historically been essential to the growth of American capitalism and to creating and perpetuating—and at times resisting—structural racial, gender, and class inequities in American economy and society. Yet information economies, while uneven and contested, also became more bureaucratized, quantified, and commodified from the 18th century to the 21st century. The history of information economies in the United States is also characterized by the importance of systems, networks, and infrastructures that link people, information, capital, commodities, markets, bureaucracies, technologies, ideas, expertise, laws, and ideologies. The materiality of information economies is historically inextricable from the production of knowledge about the economy, and the concepts of “information” and “economy” are themselves historical constructs that change over time. The history of information economies is not a teleological story of progress in which increasing bureaucratic rationality, efficiency, predictability, and profit inevitably led to the 21st-century age of Big Data. Nor is it the story of a single, coherent, uniform information economy. The creation of multiple information economies—at different scales in different regions—was a contingent, contested, often inequitable process that did not automatically democratize access to objective information.

Article

Mass transit has been part of the urban scene in the United States since the early 19th century. Regular steam ferry service began in New York City in the early 1810s, and horse-drawn omnibuses plied city streets starting in the late 1820s. Expanding networks of horse railways emerged by the mid-19th century. The electric streetcar became the dominant mass transit vehicle a half century later. During this era, mass transit had a significant impact on American urban development. Mass transit’s importance in the lives of most Americans started to decline with the growth of automobile ownership in the 1920s, except for a temporary rise in transit ridership during World War II. In the 1960s, congressional subsidies began to reinvigorate mass transit, and heavy-rail systems opened in several cities, followed by light-rail systems in several others in the following decades. Today, concerns about environmental sustainability and urban revitalization have stimulated renewed interest in the benefits of mass transit.

Article

By serving travelers and commerce, roads and streets unite people and foster economic growth. But as they develop, roads and streets also disrupt old patterns, upset balances of power, and isolate some as they serve others. The consequent disagreements leave historical records documenting social struggles that might otherwise be overlooked. For long-distance travel in America before the middle of the 20th century, roads were generally poor alternatives, resorted to when superior means of travel, such as river and coastal vessels, canal boats, or railroads, were unavailable. Most roads were unpaved, unmarked, and vulnerable to the effects of weather. Before the railroads, turnpikes and plank roads, though rare, could be much better for travelers willing to pay the toll. Even in towns, unpaved streets were common until the late 19th century and persisted into the 20th. In the late 19th century, rapid urban growth, rural free delivery of the mails, and finally the proliferation of electric railways and bicycling contributed to growing pressure for better roads and streets. After 1910, the spread of the automobile accelerated the trend, but only with great controversy, especially in cities. Partly in response to the controversy, advocates of the automobile organized to promote state and county motor highways funded substantially by gasoline taxes; such roads were intended primarily for motor vehicles. In the 1950s, massive federal funds accelerated the trend; by then, motor vehicles were the primary transportation mode for both long and short distances. The consequences have been controversial, and alternatives have been attracting growing interest.

Article

Thomas A. Reinstein

The United States has a rich history of intelligence in the conduct of foreign relations. Since the Revolutionary War, intelligence has been most relevant to U.S. foreign policy in two ways. Intelligence analysis helps to inform policy. Intelligence agencies also have carried out covert action—secret operations—to influence political, military, or economic conditions in foreign states. The American intelligence community has developed over a long period, and major changes to that community have often occurred because of contingent events rather than long-range planning. Throughout their history, American intelligence agencies have used intelligence gained from both human and technological sources to great effect. Often, U.S. intelligence agencies have been forced to rely on technological means of intelligence gathering for lack of human sources. Recent advances in cyberwarfare have made technology even more important to the American intelligence community. At the same time, the relationship between intelligence and national-security–related policymaking has often been dysfunctional. Indeed, though some American policymakers have used intelligence avidly, many others have used it haphazardly or not at all. Bureaucratic fights also have crippled the American intelligence community. Several high-profile intelligence failures tend to dominate the recent history of intelligence and U.S. foreign relations. Some of these failures were due to lack of intelligence or poor analytic tradecraft. Others came because policymakers failed to use the intelligence they had. In some cases, policymakers have also pressured intelligence officers to change their findings to better suit those policymakers’ goals. And presidents have often preferred to use covert action to carry out their preferred policies without paying attention to intelligence analysis. The result has been constant debate about the appropriate role of intelligence in U.S. foreign relations.

Article

Mary S. Barton and David M. Wight

The US government’s perception of and response to international terrorism have undergone momentous shifts since officials first focused on the issue in the early 20th century. The global rise of anarchist and communist violence provided the impetus for the first major US government programs aimed at combating international terrorism: restrictive immigration policies targeting perceived radicals. By the 1920s, the State Department had emerged as the primary government agency crafting US responses to international terrorism, generally combating communist terrorism through diplomacy and information-sharing partnerships with foreign governments. The 1979 Iranian hostage crisis marked the beginning of two key shifts in US antiterrorism policy: a heightened focus on combating Islamist terrorism and a willingness to deploy military force to this end. The terrorist attacks of September 11, 2001, led US officials to conceptualize international terrorism as a high-level national security problem, prompting US military invasions and occupations of Afghanistan and Iraq, a broader use of special forces, and unprecedented intelligence-gathering operations.

Article

Malcolm Byrne

Iran-Contra was a major political scandal in the late 1980s that nearly derailed a popular president and left American society deeply divided about its significance. Although the affair was initially portrayed as a rogue operation run by overzealous White House aides, subsequent evidence showed that the president himself was its driving force with the knowledge of his most senior advisers. Iran-Contra was a foreign policy scandal, but it also gave rise to a significant confrontation between the executive and legislative branches with constitutional implications for their respective roles, especially in foreign policy. The affair exposed significant limits on the ability of all three branches to ferret out and redress official wrongdoing. And the entire episode, a major congressional investigation concluded, was characterized by a remarkable degree of dishonesty and deception, reaching to the highest levels of government. For all these reasons, and in the absence of a clear legal or ethical conclusion (in contrast to Watergate), Iran-Contra left a scar on the American body politic that further eroded the public’s faith in government.

Article

Kelly J. Shannon

Historian James A. Bill famously described America’s relationship with Iran as a tragedy. “Few international relationships,” he wrote, “have had a more positive beginning than that which characterized Iranian-American contacts for more than a century.” The nations’ first diplomatic dealings in the 1850s resulted in a treaty of friendship, and although the U.S. government remained largely aloof from Iranian affairs until World War II, many Iranians viewed Americans and the United States positively by the early 20th century. The United States became more deeply involved with Iran during the Second World War, and the two nations were close allies during the Cold War. Yet they became enemies following the 1979 Iranian Revolution. How did this happen? The events that led to the Islamic Republic of Iran dubbing the United States the “Great Satan” in 1979 do indeed contain elements of tragedy. By the late 19th century, Iran—known to Americans as “Persia” until the 1930s—was caught in the middle of the imperial “Great Game” between Great Britain and Russia. Although no European power formally colonized Iran, Britain and Russia developed “spheres of influence” in the country and meddled constantly in Iran’s affairs. As Iranians struggled to create a modern, independent nation-state, they looked to disinterested third parties for help in breaking free from British and Russian control. Consequently, many Iranians came to see the United States as a desirable ally. Activities of individual Americans in Iran from the mid-19th century onward, ranging from Presbyterian missionaries who built hospitals and schools to economic experts who advised Iran’s government, as well as the United States’ own revolutionary and democratic history, fostered a positive view of the United States among Iranians. The two world wars drew the United States into more active involvement in the Middle East, and following both conflicts, the U.S. government defended Iran’s sovereignty against British and Soviet manipulation. The event that caused the United States to lose the admiration of many Iranians occurred in 1953, when the U.S. Central Intelligence Agency and the British Secret Intelligence Service staged a coup that overthrew Iran’s democratically elected prime minister, Mohammad Mossadegh, because he had nationalized Iran’s oil industry. The coup allowed Iran’s shah, Mohammad Reza Shah Pahlavi, to transform himself from a constitutional monarch into an absolute ruler. The 1953 coup, combined with subsequent decades of U.S. support for the Shah’s politically repressive regime, resulted in anti-American resentment that burst forth during the 1979 Iranian Revolution. The two nations have been enemies ever since. This article traces the origins and evolution of the U.S. relationship with Iran from the 19th through the early 21st centuries.

Article

Sophie Cooper

Irish and American histories are intertwined as a result of migration, mercantile and economic connections, and diplomatic pressures from governments and nonstate actors. The two fledgling nations were brought together by their shared histories of British colonialism, but America’s growth as an imperial power complicated any natural allegiances that were invoked across the centuries. Since the beginnings of that relationship in 1607, with the arrival of Irish migrants in America (both voluntary and forced) and the building of a transatlantic linen trade, the meaning of “Irish” has fluctuated in America, mirroring changes in both migration patterns and international politics. The 19th century saw Ireland enter into Anglo-American diplomacy on both sides of the Atlantic, while the 20th century saw Ireland emerge from Britain’s shadow with the establishment of separate diplomatic connections between the United States and Ireland. American recognition of the newly independent Irish Free State was vital for Irish politicians on the world stage; however, the Free State’s increasingly isolationist policies from the 1930s to the 1950s alienated its American allies. The final decade of the century, however, brought America and Ireland (including both Northern Ireland and the Republic of Ireland) closer than ever before. Throughout their histories, the Irish diasporas—both Protestant and Catholic—in America have played vital roles as pressure groups and fundraisers. The history of American–Irish relations therefore brings together governmental and nonstate organizations and unites political, diplomatic, social, cultural, and economic histories that are still relevant today.

Article

Justus D. Doenecke

For the United States, isolationism is best defined as avoidance of wars outside the Western Hemisphere, particularly in Europe; opposition to binding military alliances; and the unilateral freedom to act politically and commercially unrestrained by mandatory commitments to other nations. Until the controversy over American entry into the League of Nations, isolationism was never subject to debate. The United States could expand its territory, protect its commerce, and even fight foreign powers without violating its traditional tenets. Once President Woodrow Wilson sought membership in the League, however, Americans saw isolationism as a foreign policy option, not simply something taken for granted. A fundamental foreign policy tenet now became a faction, limited to a group of people branded as “isolationists.” Its high point came during the years 1934–1937, when Congress, noting the challenge of the totalitarian nations to the international status quo, passed the Neutrality Acts to insulate the country from global entanglements. Once World War II broke out in Europe, President Franklin D. Roosevelt increasingly sought American participation on the side of the Allies. Isolationists unsuccessfully fought FDR’s legislative proposals, beginning with repeal of the arms embargo and ending with the convoying of supplies to Britain. The America First Committee (1940–1941), however, so effectively mobilized anti-interventionist opinion as to make the president more cautious in his diplomacy. If the Japanese attack on Pearl Harbor permanently ended classic isolationism, by 1945 a “new isolationism” voiced suspicion of the United Nations, the Truman Doctrine, aid to Greece and Turkey, the Marshall Plan, the North Atlantic Treaty Organization, and U.S. participation in the Korean War. Yet, because the “new isolationists” increasingly advocated militant unilateral measures to confront Communist Russia and China, often doing so to advance the fortunes of the Republican Party, they exposed themselves to charges of inconsistency and generally faded away in the 1950s. Since the 1950s, many Americans have opposed various military involvements—including those in Vietnam, Iraq, and Afghanistan—but few envision returning to an era when the United States avoided all commitments.

Article

Olivia L. Sohns

Moral, political, and strategic factors have contributed to the emergence and durability of the U.S.-Israel alliance. It took decades for American support for Israel to evolve from “a moral stance” to treating Israel as a “strategic asset” to adopting a policy of “strategic cooperation.” The United States supported Israel’s creation in 1948 not only because of the lobbying efforts of American Jews but also due to humanitarian considerations stemming from the Holocaust. Beginning in the 1950s, Israel sought to portray itself as an ally of the United States on the grounds that America and Israel were fellow liberal democracies and shared a common Judeo-Christian cultural heritage. By the mid-1960s, Israel was considered a strategic proxy of American power in the Middle East during the Cold War, while the Soviet Union armed the radical Arab nationalist states and endorsed Palestinian “people’s wars of national liberation” against Israel. Over the subsequent decades, Israel repeatedly sought to demonstrate that it was allied with the United States in opposing instability in the region that might threaten U.S. interests. Israel also sought to portray itself as a liberal democracy despite its continued occupation of territories that it conquered in the Arab-Israeli War of 1967. After the terrorist attacks of September 11, 2001, and the rise of regional instability and radicalism in the Middle East following the 2003 U.S. invasion of Iraq and the Arab Spring of 2011, Israel’s expertise in counterterrorism and homeland security provided a further basis for U.S.-Israel military-strategic cooperation. Although American and Israeli interests are not identical, and there have been disagreements between the two countries regarding the best means to secure comprehensive Arab-Israeli and Israeli-Palestinian peace, the foundations of the relationship are strong enough to overcome crises that would imperil a less robust alliance.

Article

Racism and xenophobia, but also resilience and community building, characterized the return of thousands of Japanese Americans, or Nikkei, to the West Coast after World War II. Although the specific histories of different regions shaped the resettlement experiences of Japanese Americans, Los Angeles provides an instructive case study. For generations, the City of Angels has been home to one of the nation’s largest and most diverse Nikkei communities, and the ways in which Japanese Americans rebuilt their lives and institutions there resonate with the resettlement experience elsewhere. Before World War II, greater Los Angeles was home to a vibrant Japanese American population. First-generation immigrants, or Issei, and their American-born children, the Nisei, forged dynamic social, economic, cultural, and spiritual institutions in the face of various racial exclusions. World War II uprooted the community as Japanese Americans left behind their farms, businesses, and homes. In the best instances, they were able to entrust their property to neighbors or other sympathetic individuals. More often, the uncertainty of their future led Japanese Americans to sell off their property far below the market price. Upon the war’s end, thousands of Japanese Americans returned to Los Angeles, often to financial ruin. Upon their arrival in the Los Angeles area, Japanese Americans continued to face deep-seated prejudice, all the more accentuated by an overall dearth of housing. Without a place to live, they sought refuge in communal hostels set up in prewar institutions that had survived the war, such as a variety of Christian and Buddhist churches. Meanwhile, others found housing in temporary trailer camps set up by the War Relocation Authority (WRA), and later administered by the Federal Public Housing Authority (FPHA), in areas such as Burbank, Sun Valley, Hawthorne, Santa Monica, and Long Beach. Although some local religious groups and others welcomed the returnees, white homeowners, who viewed the settlement of Japanese Americans as a threat to their property values, often mobilized to protest the construction of these camps. The last of these camps closed in 1956, demonstrating the hardship some Japanese Americans still faced in integrating back into society. Even when the returnees were able to leave the camps, they still faced racially restrictive housing covenants and, when those practices were ruled unconstitutional, exclusionary lending. Although new suburban enclaves of Japanese Americans eventually developed in areas such as Gardena, West Los Angeles, and Pacoima by the 1960s, the pathway to those destinations was far from easy. Ultimately, the resettlement of Japanese Americans in Los Angeles after their mass incarceration during World War II took place within the intertwined contexts of lingering anti-Japanese racism, Cold War politics, and the suburbanization of Southern California.

Article

John Gennari

In the post-1945 period, jazz moved rapidly from one major avant-garde revolution (the birth of bebop) to another (the emergence of free jazz) while developing a profusion of subgenres (hard bop, progressive, modal, Third Stream, soul jazz) and a new idiomatic persona (cool or hip) that originated as a form of African American resistance but soon became a signature of transgression and authenticity across the modern arts and culture. Jazz’s long-standing affiliation with African American urban life and culture intensified through its central role in the Black Arts Movement of the 1960s. By the 1970s, jazz, now fully eclipsed in popular culture by rock ’n’ roll, turned to electric instruments and fractured into a multitude of hyphenated styles (jazz-funk, jazz-rock, fusion, Latin jazz). The move away from acoustic performance and traditional codes of blues and swing musicianship generated a neoclassical reaction in the 1980s that coincided with a mission to establish an orthodox jazz canon and honor the music’s history in elite cultural institutions. Post-1980s jazz has been characterized by tension between tradition and innovation, earnest preservation and intrepid exploration, Americanism and internationalism.