
Article

Antimonopoly, meaning opposition to the exclusive or near-exclusive control of an industry or business by a single firm or a very few firms, played a relatively muted role in the history of the post-1945 era, certainly compared to some earlier periods in American history. However, the subject of antimonopoly is important because it sheds light on changing attitudes toward concentrated power, corporations, and the federal government in the United States after World War II. Paradoxically, as antimonopoly declined as a grass-roots force in American politics, the technical, expert-driven field of antitrust enjoyed a golden age. From the 1940s to the 1960s, antitrust operated on principles that were broadly in line with those that inspired its creation in the late 19th and early 20th centuries, acknowledging the special contribution small-business owners made to US democratic culture. In these years, antimonopoly remained sufficiently potent as a political force to sustain the careers of national-level politicians such as Representative Wright Patman and Senator Estes Kefauver and to inform the opinions of Supreme Court justices such as Hugo Black and William O. Douglas. Antimonopoly and consumer politics overlapped in this period. From the mid-1960s onward, Ralph Nader repeatedly tapped antimonopoly ideas in his writings and consumer activism, skillfully exploiting popular anxieties about concentrated economic power. At the same time, as part of the United States’ rise to global hegemony, officials in the federal government’s Antitrust Division exported antitrust overseas, building it into the political, economic, and legal architecture of the postwar world. Beginning in the 1940s, conservative lawyers and economists launched a counterattack against the conception of antitrust elaborated in the Progressive Era.
By making consumer welfare—understood in terms of low prices and market efficiency—the determining factor in antitrust cases, they made a major intellectual and political contribution to the rightward thrust of US politics in the 1970s and 1980s. Robert Bork’s The Antitrust Paradox, published in 1978, popularized and signaled the ascendancy of this new approach. In the 1980s and 1990s antimonopoly drifted to the margins of political debate. Fear of big government now loomed larger in US politics than the specter of monopoly or of corporate domination. In the late 20th century, Americans, more often than not, directed their antipathy toward concentrated power in its public, rather than its private, forms. This fundamental shift in the political landscape accounts in large part for the overall decline of antimonopoly—a venerable American political tradition—in the period 1945 to 2000.

Article

Daniel Pope

Nuclear power in the United States has had an uneven history and faces an uncertain future. Promising in the 1950s electricity “too cheap to meter,” nuclear power has failed to come close to that goal, although it has carved out approximately a 20 percent share of American electrical output. Two decades after World War II, General Electric and Westinghouse offered electric utilities completed “turnkey” plants at a fixed cost, hoping these “loss leaders” would create a demand for further projects. During the 1970s the industry boomed, but it also brought forth a large-scale protest movement. Since then, partly because of that movement and because of the drama of the 1979 Three Mile Island accident, nuclear power has plateaued, with only one reactor completed since 1995. Several factors account for the failed promise of nuclear energy. Civilian power has never fully shaken its military ancestry or its connotations of weaponry and warfare. American reactor designs borrowed from nuclear submarines. Concerns about weapons proliferation stymied industry hopes for breeder reactors that would produce plutonium as a byproduct. Federal regulatory agencies dealing with civilian nuclear energy also have military roles. Those connections have provided some advantages to the industry, but they have also generated fears. Not surprisingly, the “anti-nukes” movement of the 1970s and 1980s was closely bound to movements for peace and disarmament. The industry’s disappointments must also be understood in a wider energy context. Nuclear grew rapidly in the late 1960s and 1970s as domestic petroleum output shrank and environmental objections to coal came to the fore. At the same time, however, slowing economic growth and an emphasis on energy efficiency reduced demand for new power output. 
In the 21st century, new reactor designs and the perils of fossil-fuel-caused global warming have once again raised hopes for nuclear, but natural gas and renewables now compete favorably against new nuclear projects. Economic factors have been the main reason that nuclear has stalled in the last forty years. Highly capital intensive, nuclear projects have all too often taken too long to build and cost far more than initially forecast. The lack of standard plant designs, the need for expensive safety and security measures, and the inherent complexity of nuclear technology have all contributed to nuclear power’s inability to make its case on cost persuasively. Nevertheless, nuclear power may survive and even thrive if the nation commits to curtailing fossil fuel use or if, as the Trump administration proposes, it opts for subsidies to keep reactors operating.

Article

Christoph Nitschke and Mark Rose

U.S. history is marked by frequent and often devastating financial crises. They have coincided with business cycle downturns, but they have been rooted in the political design of markets. Financial crises have also drawn from changes in the underpinning cultures, knowledge systems, and ideologies of marketplace transactions. The United States’ political and economic development spawned, guided, and modified general factors in crisis causation. Broadly viewed, the reasons for financial crises have been recurrent in their form but historically specific in their configuration: causation has always revolved around relatively sudden reversals of investor perceptions of commercial growth, stock market gains, monetary availability, currency stability, and political predictability. The United States’ 19th-century financial crises, which happened in rapid succession, are best described as disturbances tied to market making, nation building, and empire creation. Ongoing changes in America’s financial system aided rapid national growth through the efficient distribution of credit to a spatially and organizationally changing economy. But complex political processes—whether Western expansion, the development of incorporation laws, or the nation’s foreign relations—also underlay the easy availability of credit. The relationship between systemic instability and ideas and ideals of economic growth, politically enacted, was then mirrored in the 20th century. Following the “Golden Age” of crash-free capitalism in the two decades after the Second World War, the recurrence of financial crises in American history coincided with the dominance of the market in statecraft. Banking and other crises were a product of political economy. The Global Financial Crisis of 2007–2008 not only once again changed the regulatory environment in an attempt to correct past mistakes, but also considerably broadened the discursive situation of financial crises as academic topics.

Article

Laurie Arnold

Indian gaming, also called Native American casino gaming or tribal gaming, is tribal government gaming. It is government gaming built on sovereignty and consequently is a corollary to state gambling such as lotteries rather than a corollary to corporate gaming. While the types of games offered in casinos might differ in format from ancestral indigenous games, gaming itself is a cultural tradition in many tribes, including those who operate casino gambling. Native American casino gaming is a $33.7 billion industry operated by nearly 250 distinct tribes in twenty-nine states in the United States. The Indian Gaming Regulatory Act (IGRA) of 1988 provides the framework for tribal gaming, and the most important case law in Indian gaming remains Seminole Tribe of Florida v. Butterworth, decided by the US Fifth Circuit Court of Appeals, and the US Supreme Court decision in California v. Cabazon Band of Mission Indians.

Article

Benjamin C. Waterhouse

Political lobbying has always played a key role in American governance, but the concept of paid influence peddling has been marked by a persistent tension throughout the country’s history. On the one hand, lobbying represents a democratic process by which citizens maintain open access to government. On the other, the outsized clout of certain groups engenders corruption and perpetuates inequality. The practice of lobbying itself has reflected broader social, political, and economic changes, particularly in the scope of state power and the scale of business organization. During the Gilded Age, associational activity flourished and lobbying became increasingly the province of organized trade associations. By the early 20th century, a wide range of political reforms worked to counter the political influence of corporations. Even after the Great Depression and New Deal recast the administrative and regulatory role of the federal government, business associations remained the primary vehicle through which corporations and their designated lobbyists influenced government policy. By the 1970s, corporate lobbyists had become more effective and better organized, and trade associations spurred a broad-based political mobilization of business. Business lobbying expanded in the latter decades of the 20th century; while the number of companies with a lobbying presence leveled off in the 1980s and 1990s, the number of lobbyists per company increased steadily and corporate lobbyists grew increasingly professionalized. A series of high-profile political scandals involving lobbyists in 2005 and 2006 sparked another effort at regulation. Yet despite popular disapproval of lobbying and distaste for politicians, efforts to substantially curtail the activities of lobbyists and trade associations did not achieve significant success.

Article

Historians of colonial British North America have largely relegated piracy to the marginalia of the broad historical narrative from settlement to revolution. However, piracy and unregulated privateering played a pivotal role in the development of every English community along the eastern seaboard from the Carolinas to New England. Although many pirates originated in the British North American colonies and represented a diverse social spectrum, they were not supported and protected in these port communities by some underclass or proto-proletariat but by the highest echelons of colonial society, especially by colonial governors, merchants, and even ministers. Sea marauding in its multiple forms helped shape the economic, legal, political, religious, and cultural worlds of colonial America. The illicit market that brought longed-for bullion, slaves, and luxury goods integrated British North American communities with the Caribbean, West Africa, and the Pacific and Indian Oceans throughout the 17th century. Attempts to curb the support of sea marauding at the turn of the 18th century exposed sometimes violent divisions between local merchant interests and royal officials currying favor back in England, leading to debates over the protection of English liberties across the Atlantic. When the North American colonies finally closed their ports to English pirates during the years following the Treaty of Utrecht (1713), it sparked a brief yet dramatic turn of events in which English marauders preyed upon shipping belonging to their former “nests.” During the 18th century, colonial communities began to actively support a more regulated form of privateering against agreed-upon enemies that would become a hallmark of patriot maritime warfare during the American Revolution.

Article

From the founding of the American republic through the 19th century, the nation’s environmental policy mostly centered on promoting American settlers’ conquest of the frontier. Early federal interventions, whether railroad and canal subsidies or land grant acts, led to rapid transformations of the natural environment that inspired a conservation movement by the end of the 19th century. Led by activists and policymakers, this movement sought to protect America’s resources now jeopardized by expansive industrial infrastructure. During the Gilded Age, the federal government established the world’s first national parks, and in the Progressive Era, politicians such as President Theodore Roosevelt called for the federal government to play a central role in ensuring the efficient utilization of the nation’s ecological bounty. By the early 1900s, conservationists established new government agencies, such as the U.S. Forest Service and the Bureau of Reclamation, to regulate the consumption of trees, water, and other valuable natural assets. Wise-use was the watchword of the day, with environmental managers in DC’s bureaucracy focused mainly on protecting the economic value latent in America’s ecosystems. However, other groups, such as the Wilderness Society, proved successful at redirecting policy prescriptions toward preserving beautiful and wild spaces, not just conserving resources central to capitalist enterprise. In the 1960s and 1970s, suburban and urban environmental activists attracted federal regulators’ attention to contaminated soil and water under their feet. The era of ecology had arrived, and the federal government now had broad powers through the Environmental Protection Agency (EPA) to manage ecosystems that stretched across the continent. But from the 1980s to the 2010s, the federal government’s authority to regulate the environment waxed and waned as economic crises, often exacerbated by oil shortages, brought environmental agencies under fire. 
The Rooseveltian logic of the Progressive Era, which said that America’s economic growth depended on federal oversight of the environment, came under assault from neoliberal disciples of Ronald Reagan, who argued that environmental regulations were in fact the root cause of economic stagnation in America, not a remedy for it. What the country needed, according to the reformers of the New Right, was unregulated expansion into new frontiers. By the 2010s, the contours of these new frontiers were clear: deep-water oil drilling, Bakken shale exploration, and tar-sand excavation in Alberta, Canada. In many ways, the frontier conquest doctrine of colonial Americans found new life in deregulatory U.S. environmental policy pitched by conservatives in the wake of the Reagan Revolution. Never wholly dominant, this ethos carried on into the era of Donald Trump’s presidency.

Article

American Populism of the 1880s and 1890s marked the political high-water mark of the social movements of farmers, wage earners, women, and other sectors of society in the years after the Civil War. These movements forged the People’s Party, also known as the Populist Party, which campaigned against corporate power and economic inequality and was one of the most successful third parties in US history. Populist candidates won gubernatorial elections in nine states and gained some forty-five seats in the US Congress, including six seats in the Senate, and in 1892 the Populist presidential candidate, James B. Weaver of Iowa, received over a million votes, more than 8 percent of the total. The Populist Party was not a conventional political party but a coalition of organizations, including the Farmers’ Alliances, the Knights of Labor, and other reform movements, in what the Populists described as a “congress of industrial orders.” These organizations gave the People’s Party its strength and shaped its character as a party of working people with a vision of egalitarian cooperation and solidarity comparable to the labor, farmer-labor, and social-democratic parties in Europe and elsewhere that took shape in the same decades. Despite their egalitarian claims, however, the Populists had at best a mixed attitude towards the struggles for racial equality, and at worst accommodated Indian dispossession, Chinese exclusion, and Jim Crow segregation. In terms of its legacy, veterans of the Populist movement and many of its policy proposals would shape progressive and labor-farmer politics deep into the 20th century, partly by way of the Socialist Party, but mainly by way of the progressive or liberal wings of the Democratic and Republican Parties. At the same time, the adjective “populist” has come to describe a wide variety of political phenomena, including right-wing and nationalist movements, that have no particular connection to late 19th-century Populism.

Article

Christy Ford Chapin

The history of US finance—spanning from the republic’s founding through the 2007–2008 financial crisis—exhibits two primary themes. The first theme is that Americans have frequently expressed suspicion of financiers and bankers. This abiding distrust has generated ferocious political debates through which voters either have opposed government policies that empower financial interests or have advocated proposals to steer financial institutions toward serving the public. A second, related theme that emerges from this history is that government policy—both state and federal—has shaped and reshaped financial markets. This feature follows the pattern of American capitalism, which, rather than appearing as laissez-faire market competition, materializes as interactions between government and private enterprise that structure each economic sector in a distinctive manner. International comparison illustrates this premise. Because state and federal policies produced a highly splintered commercial banking sector that discouraged the development of large, consolidated banks, American big business has frequently had to rely on securities financing. This shareholder model creates a different corporate form than a commercial-bank model. In Germany, for example, large banks often provide firms with financing as well as business consulting and management strategy services. In this commercial-bank model, German business executives cede some autonomy to bankers but also have more ability to engage in long-term planning than do American executives, who tend to cater to short-term stock market demands. Under the banner of the public–private financial system, two subthemes appear: fragmented institutional arrangements and welfare programming. Because of government policy, the United States, compared to other western nations, has an unusually fragmented financial system.
Adding to this complexity, many financial institutions can be either state or federally chartered; meanwhile, the commercial banking sector has traditionally hosted thousands of banks, ranging from urban, money-center institutions to small unit banks. Space constraints exclude examination of numerous additional organizations, such as venture capital firms, hedge funds, securities brokers, mutual funds, real estate investment trusts, and mortgage brokers. The US regulatory framework reflects this fragmentation, as a bevy of federal and state agencies supervise the financial sector. Since policymakers passed deregulatory measures during the 1980s and 1990s, the sector has moved toward consolidation and universal banking, which permits a large assortment of financial services to coexist under one institutional umbrella. Nevertheless, the US financial sector continues to be more fragmented than those of other industrialized countries. The public–private financial system has also delivered many government benefits, revealing that the American welfare state is perhaps more robust than scholars often claim. Welfare programming through financial policy tends to be “hidden,” frequently because significant portions of benefits provision reside “off the books,” either as government-sponsored enterprises that are nominally private or as government guarantees in the place of direct spending. Yet these programs have heavily affected both their beneficiaries and the nation’s economy. The government, for example, has directed significant resources toward the construction and maintenance of a massive farm credit system. Moreover, policymakers established mortgage insurance and residential financing programs, creating an economy and consumer culture that revolve around home ownership. While both agricultural and mortgage programs have helped low-income beneficiaries, they have dispensed more aid to middle-class and corporate recipients.
These programs, along with the institutional configuration of the banking and credit system, demonstrate just how important US financial policy has been to the nation’s unfolding history.

Article

The key pieces of antitrust legislation in the United States—the Sherman Antitrust Act of 1890 and the Clayton Act of 1914—contain broad language that has afforded the courts wide latitude in interpreting and enforcing the law. This article chronicles the judiciary’s shifting interpretations of antitrust law and policy over the past 125 years. It argues that jurists, law enforcement agencies, and private litigants have revised their approaches to antitrust to accommodate economic shocks, technological developments, and predominant economic wisdom. Over time an economic logic that prioritizes lowest consumer prices as a signal of allocative efficiency—known as the consumer welfare standard—has replaced the older political objectives of antitrust, such as protecting independent proprietors or small businesses, or reducing wealth transfers from consumers to producers. However, a new group of progressive activists has again called for revamping antitrust so as to revive enforcement against dominant firms, especially in digital markets, and to refocus attention on the political effects of antitrust law and policy. This shift suggests that antitrust may remain a contested field for scholarly and popular debate.