1-15 of 15 Results for:

  • 20th Century: Post-1945
  • Economic History

Article

Antimonopoly, meaning opposition to the exclusive or near-exclusive control of an industry or business by one or a very few businesses, played a relatively muted role in the history of the post-1945 era, certainly compared to some earlier periods in American history. However, the subject of antimonopoly is important because it sheds light on changing attitudes toward concentrated power, corporations, and the federal government in the United States after World War II. Paradoxically, as antimonopoly declined as a grass-roots force in American politics, the technical, expert-driven field of antitrust enjoyed a golden age. From the 1940s to the 1960s, antitrust operated on principles that were broadly in line with those that inspired its creation in the late 19th and early 20th century, acknowledging the special contribution small-business owners made to US democratic culture. In these years, antimonopoly remained sufficiently potent as a political force to sustain the careers of national-level politicians such as congressmen Wright Patman and Estes Kefauver and to inform the opinions of Supreme Court justices such as Hugo Black and William O. Douglas. Antimonopoly and consumer politics overlapped in this period. From the mid-1960s onward, Ralph Nader repeatedly tapped antimonopoly ideas in his writings and consumer activism, skillfully exploiting popular anxieties about concentrated economic power. At the same time, as part of the United States’ rise to global hegemony, officials in the federal government’s Antitrust Division exported antitrust overseas, building it into the political, economic, and legal architecture of the postwar world. Beginning in the 1940s, conservative lawyers and economists launched a counterattack against the conception of antitrust elaborated in the progressive era. By making consumer welfare—understood in terms of low prices and market efficiency—the determining factor in antitrust cases, they made a major intellectual and political contribution to the rightward thrust of US politics in the 1970s and 1980s. Robert Bork’s The Antitrust Paradox, published in 1978, popularized and signaled the ascendency of this new approach. In the 1980s and 1990s antimonopoly drifted to the margin of political debate. Fear of big government now loomed larger in US politics than the specter of monopoly or of corporate domination. In the late 20th century, Americans, more often than not, directed their antipathy toward concentrated power in its public, rather than its private, forms. This fundamental shift in the political landscape accounts in large part for the overall decline of antimonopoly—a venerable American political tradition—in the period 1945 to 2000.

Article

David Blanke

The relationship between the car and the city remains complex and involves numerous private and public forces, innovations in technology, global economic fluctuations, and shifting cultural attitudes that only rarely consider the efficiency of the automobile as a long-term solution to urban transit. The advantages of privacy, speed, ease of access, and personal enjoyment that led many to first embrace the automobile were soon shared and accentuated by transit planners as the surest means to realize the long-held ideals of urban beautification, efficiency, and accessible suburbanization. The remarkable gains in productivity provided by industrial capitalism brought these dreams within reach and individual car ownership became the norm for most American families by the middle of the 20th century. Ironically, the success in creating such a “car country” produced the conditions that again congested traffic, raised questions about the quality of urban (and now suburban) living, and further distanced the nation from alternative transit options. The “hidden costs” of postwar automotive dependency in the United States became more apparent in the late 1960s, leading to federal legislation compelling manufacturers and transit professionals to address the long-standing inefficiencies of the car. This most recent phase coincides with a broader reappraisal of life in the city and a growing recognition of the material limits to mass automobility.

Article

Daniel Pope

Nuclear power in the United States has had an uneven history and faces an uncertain future. Promising in the 1950s electricity “too cheap to meter,” nuclear power has failed to come close to that goal, although it has carved out approximately a 20 percent share of American electrical output. Two decades after World War II, General Electric and Westinghouse offered electric utilities completed “turnkey” plants at a fixed cost, hoping these “loss leaders” would create a demand for further projects. During the 1970s the industry boomed, but it also brought forth a large-scale protest movement. Since then, partly because of that movement and because of the drama of the 1979 Three Mile Island accident, nuclear power has plateaued, with only one reactor completed since 1995. Several factors account for the failed promise of nuclear energy. Civilian power has never fully shaken its military ancestry or its connotations of weaponry and warfare. American reactor designs borrowed from nuclear submarines. Concerns about weapons proliferation stymied industry hopes for breeder reactors that would produce plutonium as a byproduct. Federal regulatory agencies dealing with civilian nuclear energy also have military roles. Those connections have provided some advantages to the industry, but they have also generated fears. Not surprisingly, the “anti-nukes” movement of the 1970s and 1980s was closely bound to movements for peace and disarmament. The industry’s disappointments must also be understood in a wider energy context. Nuclear grew rapidly in the late 1960s and 1970s as domestic petroleum output shrank and environmental objections to coal came to the fore. At the same time, however, slowing economic growth and an emphasis on energy efficiency reduced demand for new power output. In the 21st century, new reactor designs and the perils of fossil-fuel-caused global warming have once again raised hopes for nuclear, but natural gas and renewables now compete favorably against new nuclear projects. Economic factors have been the main reason that nuclear has stalled in the last forty years. Highly capital intensive, nuclear projects have all too often taken too long to build and cost far more than initially forecast. The lack of standard plant designs, the need for expensive safety and security measures, and the inherent complexity of nuclear technology have all contributed to nuclear power’s inability to make its case on cost persuasively. Nevertheless, nuclear power may survive and even thrive if the nation commits to curtailing fossil fuel use or if, as the Trump administration proposes, it opts for subsidies to keep reactors operating.

Article

Frederick Rowe Davis

The history of DDT and pesticides in America is overshadowed by four broad myths. The first myth suggests that DDT was the first insecticide deployed widely by American farmers. The second indicates that DDT was the most toxic pesticide to wildlife and humans alike. The third myth assumes that Rachel Carson’s Silent Spring (1962) was an exposé of the problems of DDT rather than a broad indictment of American dependency on chemical insecticides. The fourth and final myth reassures Americans that the ban on DDT late in 1972 resolved the pesticide paradox in America. Over the course of the 20th century, agricultural chemists developed insecticides from plants with insecticidal properties (“botanical” insecticides) and from a range of chemicals, including heavy metals such as lead and arsenic, chlorinated hydrocarbons like DDT, and organophosphates like parathion. All of the synthetic insecticides carried profound unintended consequences for landscapes and wildlife alike. More recently, chemists have returned to nature and developed chemical analogs of the botanical insecticides, first with the synthetic pyrethroids and now with the neonicotinoids. Despite their recent introduction, neonicotinoids have become widely used in agriculture, and there are suspicions that these chemicals contribute to declines in bees and grassland birds.

Article

Judge Glock

Despite almost three decades of strong and stable growth after World War II, the US economy, like the economies of many developed nations, faced new headwinds and challenges after 1970. Although the United States eventually overcame many of them, and its economy continues to be one of the most dynamic in the world, it could not recover its mid-century economic miracle of rapid and broad-based economic growth. There are three major ways the US economy changed in this period. First, the US economy endured and eventually conquered the problem of high inflation, even as it instituted new policies that prioritized price stability over the so-called “Keynesian” goal of full employment. Although these new policies led to over two decades of moderate inflation and stable growth, the 2008 financial crisis challenged the post-Keynesian consensus and led to new demands for government intervention in downturns. Second, the government’s overall influence on the economy increased dramatically. Although the government deregulated several sectors in the 1970s and 1980s, such as transportation and banking, it also created new types of social and environmental regulation that were more pervasive. And although it occasionally cut spending, on the whole government spending increased substantially in this period, until it reached about 35 percent of the economy. Third, the US economy became more open to the world, and it imported more manufactured goods, even as it became more based on “intangible” products and on services rather than on manufacturing. These shifts created new economic winners and losers. Some institutions that thrived in the older economy, such as unions, which once comprised over a third of the workforce, became shadows of their former selves. The new service economy also created more gains for highly educated workers and for investors in quickly growing businesses, while blue-collar workers’ wages stagnated, at least in relative terms. Most of the trends that affected the US economy in this period were long-standing and continued over decades. Major national and international crises in this period, from the end of the Cold War, to the first Gulf War in 1991, to the September 11 attacks of 2001, seemed to have only a mild or transient impact on the economy. Two events that were of lasting importance were, first, the United States leaving the gold standard in 1971, which led to high inflation in the short term and more stable monetary policy over the long term; and second, the 2008 financial crisis, which seemed to permanently decrease American economic output even while it increased political battles about the involvement of government in the economy. The US economy at the beginning of the third decade of the 21st century was richer than it had ever been, and remained in many respects the envy of the world. But widening income gaps meant many Americans felt left behind in this new economy, and led some to worry that the stability and predictability of the old economy had been lost.

Article

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of American History. American food in the twentieth and twenty-first centuries is characterized by abundance. Unlike the hardscrabble existence of many earlier Americans, the “Golden Age of Agriculture” brought the bounty produced in fields across the United States to both consumers and producers. While the “Golden Age” technically ended as World War I began, larger quantities of relatively inexpensive food became the norm for most Americans as more fresh foods, rather than staple crops, made their way to urban centers and rising real wages made it easier to purchase these comestibles. The application of science and technology to food production from the field to the kitchen cabinet (or, even more crucially by the mid-1930s, the refrigerator) reflects the changing demographics and affluence of American society as much as it does the inventiveness of scientists and entrepreneurs. Perhaps the single most important symbol of overabundance in the United States is the postwar Green Revolution. The vast increase in agricultural production, based on improved agronomics, provoked both praise and criticism, as exemplified by Time magazine’s critique of Rachel Carson’s Silent Spring in September 1962 or, more recently, the politics of genetically modified foods. As at the turn of the twentieth century, food production, politics, and policy at the turn of the twenty-first century have become a proxy for larger ideological agendas and the fractured nature of class in the United States. Battles over the following issues speak to which Americans have access to affordable, nutritious food: organic versus conventional farming, antibiotic use in meat production, dissemination of food stamps, contraction of farm subsidies, the rapid growth of “dollar stores,” alternative diets (organic, vegetarian, vegan, paleo, etc.), and, perhaps most ubiquitous of all, the “obesity epidemic.” These arguments carry moral and ethical values, as each side deems some foods and diets virtuous and others corrupting. While Americans have long held a variety of food ideologies that meld health, politics, and morality, exemplified by Sylvester Graham and John Harvey Kellogg in the nineteenth and early twentieth centuries, among others, newer constructions of these ideologies reflect concerns over the environment, rural Americans, climate change, self-determination, and the role of government in individual lives. In other words, food can be used as a lens to understand larger issues in American society while at the same time allowing historians to explore the intimate details of everyday life.

Article

Jeffrey F. Taffet

In the first half of the 20th century, and more actively in the post–World War II period, the United States government used economic aid programs to advance its foreign policy interests. US policymakers generally believed that support for economic development in poorer countries would help create global stability, which would limit military threats and strengthen the global capitalist system. Aid was offered on a country-by-country basis to guide political development; its implementation reflected views about how humanity had advanced in richer countries and how it could and should similarly advance in poorer regions. Humanitarianism did play a role in driving US aid spending, but it was consistently secondary to political considerations. Overall, while funding varied over time, amounts spent were always substantial. Between 1946 and 2015, the United States offered almost $757 billion in economic assistance to countries around the world—$1.6 trillion in inflation-adjusted 2015 dollars. Assessing the impact of this spending is difficult; there has long been disagreement among scholars and politicians about how much economic growth, if any, resulted from aid spending and similar disputes about its utility in advancing US interests. Nevertheless, for most political leaders, even without solid evidence of successes, aid often seemed to be the best option for constructively engaging poorer countries and trying to create the kind of world in which the United States could be secure and prosperous.

Article

Laurie Arnold

Indian gaming, also called Native American casino gaming or tribal gaming, is tribal government gaming. It is government gaming built on sovereignty and is consequently a corollary to state gambling such as lotteries rather than to corporate gaming. While the types of games offered in casinos might differ in format from ancestral indigenous games, gaming itself is a cultural tradition in many tribes, including those that operate casino gambling. Native American casino gaming is a $33.7 billion industry operated by nearly 250 distinct tribes in twenty-nine US states. The Indian Gaming Regulatory Act (IGRA) of 1988 provides the framework for tribal gaming, and the most important case law in Indian gaming remains Seminole Tribe of Florida v. Butterworth, decided by the US Fifth Circuit Court of Appeals, and the US Supreme Court’s decision in California v. Cabazon Band of Mission Indians.

Article

Entrepreneurship has been a basic element of Latinx life in the United States since long before the nation’s founding, varying in scale and cutting across race, class, and gender to different degrees. Indigenous forms of commerce pre-dated Spanish contact in the Americas and continued thereafter. Beginning in the 16th century, the raising, trading, and production of cattle and cattle-related products became foundational to Spanish, Mexican, and later American Southwest society and culture. By the 19th century, Latinxs in US metropolitan areas began to establish enterprises in the form of storefronts, warehouses, and factories, as well as smaller ventures such as peddling. At times, they succeeded previous ethnic owners; in other moments, they established new businesses that shaped the everyday life and politics of their respective communities. Whatever the scale of their ventures, Latinx business owners continued to capitalize on the migration of Latinx people to the United States from Latin America and the Caribbean during the 20th century. These entrepreneurs entered business for different reasons, often responding to restricted or constrained labor options, though many sought the flexibility that entrepreneurship offered. Despite an increasing association between Latinx people and entrepreneurship, profits from Latinx ventures produced uneven results during the second half of the 20th century. For some, finance and business ownership has generated immense wealth and political influence. For others at the margins of society, it has remained a tool for achieving sustenance amid the variability of a racially stratified labor market. No monolithic account can wholly capture the vastness and complexity of Latinx economic activity. Latinx business and entrepreneurship remains a vital piece of the place-making and politics of the US Latinx population. This article provides an overview of major trends and pivotal moments in its rich history.

Article

Benjamin C. Waterhouse

Political lobbying has always played a key role in American governance, but the concept of paid influence peddling has been marked by a persistent tension throughout the country’s history. On the one hand, lobbying represents a democratic process by which citizens maintain open access to government. On the other, the outsized clout of certain groups engenders corruption and perpetuates inequality. The practice of lobbying itself has reflected broader social, political, and economic changes, particularly in the scope of state power and the scale of business organization. During the Gilded Age, associational activity flourished and lobbying became increasingly the province of organized trade associations. By the early 20th century, a wide range of political reforms worked to counter the political influence of corporations. Even after the Great Depression and New Deal recast the administrative and regulatory role of the federal government, business associations remained the primary vehicle through which corporations and their designated lobbyists influenced government policy. By the 1970s, corporate lobbyists had become more effective and better organized, and trade associations spurred a broad-based political mobilization of business. Business lobbying expanded in the latter decades of the 20th century; while the number of companies with a lobbying presence leveled off in the 1980s and 1990s, the number of lobbyists per company increased steadily and corporate lobbyists grew increasingly professionalized. A series of high-profile political scandals involving lobbyists in 2005 and 2006 sparked another effort at regulation. Yet despite popular disapproval of lobbying and distaste for politicians, efforts to substantially curtail the activities of lobbyists and trade associations did not achieve significant success.

Article

After World War II, the United States backed multinational private oil companies known as the “Seven Sisters”—five American companies (including Standard Oil of New Jersey and Texaco), one British (British Petroleum), and one Anglo-Dutch (Shell)—in their efforts to control Middle East oil and feed rising demand for oil products in the West. In 1960 oil-producing states in Latin America and the Middle East formed the Organization of the Petroleum Exporting Countries (OPEC) to protest what they regarded as the inequitable dominance of the private oil companies. Between 1969 and 1973 changing geopolitical and economic conditions shifted the balance of power from the Seven Sisters to OPEC. Following the first “oil shock” of 1973–1974, OPEC assumed control over the production and price of oil, ending the rule of the companies and humbling the United States, which suddenly found itself dependent upon OPEC for its energy security. Yet this dependence was complicated by a close relationship between the United States and major oil producers such as Saudi Arabia, which continued to adopt pro-US strategic positions even as they squeezed out the companies. Following the Iranian Revolution (1978–1979), the Iran–Iraq War (1980–1988), and the First Iraq War (1990–1991), the antagonism that colored US relations with OPEC evolved into a more comfortable, if wary, recognition of the new normal, where OPEC supplied the United States with crude oil while acknowledging the United States’ role in maintaining the security of the international energy system.

Article

Katherine R. Jewell

The term “Sunbelt” connotes a region defined by its environment. “Belt” suggests the broad swath of states stretching from the Atlantic coast across Texas and Oklahoma and the Southwest to southern California. “Sun” suggests its temperate—even hot—climate. Yet in contrast to the industrial northeastern and midwestern Rust Belt, or perhaps “Frost” Belt, the term’s emergence at the end of the 1960s evoked an optimistic, opportunistic brand. Free from snowy winters, cooled by air conditioning, and beckoning with Florida’s sandy beaches and California’s surf, the Sunbelt states did indeed draw more Americans in the 1950s and 1960s than the deindustrializing centers of the North and East. But the term “Sunbelt” also captures an emerging political culture that defies regional boundaries. The term originated more from a diagnosis of this political climate than from an environmental one, and it was associated with the new patterns of migration of the mid-20th century. It defined a new regional identity in political, economic, policy, demographic, social, and environmental terms. The Sunbelt received federal money on an unprecedented scale, particularly because of rising Cold War defense spending on research and military bases, and its urban centers grew in patterns unlike those of the old Northeast and Midwest, thanks to the policy innovations wrought by local boosters, business leaders, and politicians that defined the politics associated with the region after the 1970s. Yet since the term’s origin, scholars have debated whether the Sunbelt’s emergence reflects a new regional identity or something else.

Article

In the seventy years since the end of World War II (1939–1945), postindustrialization—the exodus of manufacturing and growth of finance and services—has radically transformed the economy of North American cities. Metropolitan areas are increasingly home to transnational firms that administer dispersed production networks that span the world. A few major global centers host large banks that coordinate flows of finance capital necessary not only for production, but also increasingly for education, infrastructure, municipal government, housing, and nearly every other aspect of life. In cities of the global north, fewer workers produce goods and more produce information, entertainment, and experiences. Women have steadily entered the paid workforce, where they often do the feminized work of caring for children and the ill, cleaning homes, and preparing meals. Like the Gilded Age city, the postindustrial city creates immense social divisions, injustices, and inequalities: penthouses worth millions and rampant homelessness, fifty-dollar burgers and an epidemic of food insecurity, and unparalleled wealth and long-standing structural unemployment all exist side by side. The key features of the postindustrial service economy are the increased concentration of wealth, the development of a privileged and celebrated workforce of professionals, and an economic system reliant on hyperexploited service workers whose availability is conditioned by race, immigration status, and gender.

Article

In the decade after 1965, radicals responded to the alienating features of America’s technocratic society by developing alternative cultures that emphasized authenticity, individualism, and community. The counterculture emerged from a handful of 1950s bohemian enclaves, most notably the Beat subcultures in the Bay Area and Greenwich Village. But new influences shaped an eclectic and decentralized counterculture after 1965, first in San Francisco’s Haight-Ashbury district, then in urban areas and college towns, and, by the 1970s, on communes and in myriad counter-institutions. The psychedelic drug cultures around Timothy Leary and Ken Kesey gave rise to a mystical bent in some branches of the counterculture and influenced counterculture style in countless ways: acid rock redefined popular music; tie dye, long hair, repurposed clothes, and hip argot established a new style; and sexual mores loosened. Yet the counterculture’s reactionary elements were strong. In many counterculture communities, gender roles mirrored those of mainstream society, and aggressive male sexuality inhibited feminist spins on the sexual revolution. Entrepreneurs and corporate America refashioned the counterculture aesthetic into a marketable commodity, ignoring the counterculture’s incisive critique of capitalism. Yet the counterculture became the basis of authentic “right livelihoods” for others. Meanwhile, the politics of the counterculture defy ready categorization. The popular imagination often conflates hippies with radical peace activists. But New Leftists frequently excoriated the counterculture for rejecting political engagement in favor of hedonistic escapism or libertarian individualism. Both views miss the most important political aspects of the counterculture, which centered on the embodiment of a decentralized anarchist bent, expressed in the formation of counter-institutions like underground newspapers, urban and rural communes, head shops, and food co-ops. As the counterculture faded after 1975, its legacies became apparent in the redefinition of the American family, the advent of the personal computer, an increasing ecological and culinary consciousness, and the marijuana legalization movement.

Article

The key pieces of antitrust legislation in the United States—the Sherman Antitrust Act of 1890 and the Clayton Act of 1914—contain broad language that has afforded the courts wide latitude in interpreting and enforcing the law. This article chronicles the judiciary’s shifting interpretations of antitrust law and policy over the past 125 years. It argues that jurists, law enforcement agencies, and private litigants have revised their approaches to antitrust to accommodate economic shocks, technological developments, and predominant economic wisdom. Over time an economic logic that prioritizes lowest consumer prices as a signal of allocative efficiency—known as the consumer welfare standard—has replaced the older political objectives of antitrust, such as protecting independent proprietors or small businesses, or reducing wealth transfers from consumers to producers. However, a new group of progressive activists has again called for revamping antitrust so as to revive enforcement against dominant firms, especially in digital markets, and to refocus attention on the political effects of antitrust law and policy. This shift suggests that antitrust may remain a contested field for scholarly and popular debate.