Since the early 1800s railroads have served as a critical element of the transportation infrastructure in the United States and have generated profound changes in technology, finance, business-government relations, and labor policy. By the 1850s railroads, at least in the northern states, had evolved into the nation’s first big businesses, replete with managerial hierarchies that in many respects resembled the structure of the US Army. After the Civil War ended, the railroad network grew rapidly, with lines extending into the Midwest and ultimately, with the completion of the first transcontinental railroad in 1869, to the Pacific Coast. The last third of the 19th century was characterized by increased militancy among railroad workers, as well as by the growing danger that railroading posed to employees and passengers. Intense competition among railroad companies led to rate wars and discriminatory pricing. The presence of rebates and long-haul/short-haul price differentials led to the federal regulation of the railroads in 1887. The Progressive Era generated additional regulation that reduced profitability and discouraged additional investment in the railroads. As a result, the carriers were often unprepared for the traffic demands associated with World War I, leading to government operation of the railroads between 1917 and 1920. Highway competition during the 1920s and the economic crises of the 1930s provided further challenges for the railroads. The nation’s railroads performed well during World War II but declined steadily in the years that followed. High labor costs, excessive regulatory oversight, and the loss of freight and passenger traffic to cars, trucks, and airplanes ensured that by the 1960s many once-profitable companies were on the verge of bankruptcy. A wave of mergers failed to halt the downward slide. The bankruptcy of Penn Central in 1970 increased public awareness of the dire circumstances and led to calls for regulatory reform. The 1980 Staggers Act abolished most of the restrictions on operations and pricing, thus revitalizing the railroads.
Despite almost three decades of strong and stable growth after World War II, the US economy, like the economies of many developed nations, faced new headwinds and challenges after 1970. Although the United States eventually overcame many of them, and continues to be one of the most dynamic economies in the world, it could not recover its mid-century economic miracle of rapid and broad-based economic growth. The US economy changed in three major ways in this period. First, the US economy endured and eventually conquered the problem of high inflation, even as it instituted new policies that prioritized price stability over the so-called “Keynesian” goal of full employment. Although these new policies led to over two decades of moderate inflation and stable growth, the 2008 financial crisis challenged the post-Keynesian consensus and led to new demands for government intervention in downturns. Second, the government’s overall influence on the economy increased dramatically. Although the government deregulated several sectors in the 1970s and 1980s, such as transportation and banking, it also created new types of social and environmental regulation that were more pervasive. And although it occasionally cut spending, on the whole government spending increased substantially in this period, until it reached about 35 percent of the economy. Third, the US economy became more open to the world, and it imported more manufactured goods, even as it became more based on “intangible” products and on services rather than on manufacturing. These shifts created new economic winners and losers. Some institutions that thrived in the older economy, such as unions, which once comprised over a third of the workforce, became shadows of their former selves. The new service economy also created more gains for highly educated workers and for investors in quickly growing businesses, while blue-collar workers’ wages stagnated, at least in relative terms.
Most of the trends that affected the US economy in this period were long-standing and continued over decades. Major national and international crises in this period, from the end of the Cold War, to the first Gulf War in 1991, to the September 11 attacks of 2001, seemed to have only a mild or transient impact on the economy. Two events that were of lasting importance were, first, the United States leaving the gold standard in 1971, which led to high inflation in the short term and more stable monetary policy over the long term; and second, the 2008 financial crisis, which seemed to permanently decrease American economic output even while it increased political battles about the involvement of government in the economy. The US economy at the beginning of the third decade of the 21st century was richer than it had ever been, and remained in many respects the envy of the world. But widening income gaps meant many Americans felt left behind in this new economy, and led some to worry that the stability and predictability of the old economy had been lost.
Christoph Nitschke and Mark Rose
U.S. history is full of frequent and often devastating financial crises. They have coincided with business cycle downturns, but they have been rooted in the political design of markets. Financial crises have also drawn from changes in the underpinning cultures, knowledge systems, and ideologies of marketplace transactions. The United States’ political and economic development spawned, guided, and modified general factors in crisis causation. Broadly viewed, the reasons for financial crises have been recurrent in their form but historically specific in their configuration: causation has always revolved around relatively sudden reversals of investor perceptions of commercial growth, stock market gains, monetary availability, currency stability, and political predictability. The United States’ 19th-century financial crises, which happened in rapid succession, are best described as disturbances tied to market making, nation building, and empire creation. Ongoing changes in America’s financial system aided rapid national growth through the efficient distribution of credit to a spatially and organizationally changing economy. But complex political processes—whether Western expansion, the development of incorporation laws, or the nation’s foreign relations—also underlay the easy availability of credit. The relationship between systemic instability and ideas and ideals of economic growth, politically enacted, was then mirrored in the 20th century. Following the “Golden Age” of crash-free capitalism in the two decades after the Second World War, the recurrence of financial crises in American history coincided with the dominance of the market in statecraft. Banking and other crises were a product of political economy. The Global Financial Crisis of 2007–2008 not only once again changed the regulatory environment in an attempt to correct past mistakes, but also considerably broadened the discursive situation of financial crises as academic topics.
Benjamin C. Waterhouse
Political lobbying has always played a key role in American governance, but the concept of paid influence peddling has been marked by a persistent tension throughout the country’s history. On the one hand, lobbying represents a democratic process by which citizens maintain open access to government. On the other, the outsized clout of certain groups engenders corruption and perpetuates inequality. The practice of lobbying itself has reflected broader social, political, and economic changes, particularly in the scope of state power and the scale of business organization. During the Gilded Age, associational activity flourished and lobbying became increasingly the province of organized trade associations. By the early 20th century, a wide range of political reforms worked to counter the political influence of corporations. Even after the Great Depression and New Deal recast the administrative and regulatory role of the federal government, business associations remained the primary vehicle through which corporations and their designated lobbyists influenced government policy. By the 1970s, corporate lobbyists had become more effective and better organized, and trade associations spurred a broad-based political mobilization of business. Business lobbying expanded in the latter decades of the 20th century; while the number of companies with a lobbying presence leveled off in the 1980s and 1990s, the number of lobbyists per company increased steadily and corporate lobbyists grew increasingly professionalized. A series of high-profile political scandals involving lobbyists in 2005 and 2006 sparked another effort at regulation. Yet despite popular disapproval of lobbying and distaste for politicians, efforts to substantially curtail the activities of lobbyists and trade associations did not achieve significant success.
Richard N. L. Andrews
Between 1964 and 2017, the United States adopted the concept of environmental policy as a new focus for a broad range of previously disparate policy issues affecting human interactions with the natural environment. These policies addressed environmental health, pollution, and toxic exposure; management of ecosystems, resources, and use of the public lands; environmental aspects of urbanization, agricultural practices, and energy use; and negotiation of international agreements to address global environmental problems. In doing so, the United States nationalized many responsibilities that had previously been considered primarily state or local matters. It changed the United States’ approach to federalism by authorizing new powers for the federal government to set national minimum environmental standards and regulatory frameworks, with the states mandated to participate in their implementation and compliance. Finally, it explicitly formalized administrative procedures for federal environmental decision-making with stricter requirements for scientific and economic justification rather than merely administrative discretion. In addition, it greatly increased public access to information and opportunities for input, as well as for judicial review, thus allowing citizen advocates for environmental protection and appreciative uses equal legitimacy with commodity producers to voice their preferences for use of public environmental resources. These policies initially reflected widespread public demand and broad bipartisan support. Over several decades, however, they became flashpoints, first, between business interests and environmental advocacy groups and, subsequently, between increasingly ideological and partisan agendas concerning the role of the federal government.
Beginning in the 1980s, the long-standing Progressive ideal of the “public interest” was increasingly supplanted by a narrative of “government overreach,” and the 1990s witnessed campaigns to delegitimize the underlying evidence justifying environmental policies by labeling it “junk science” or a “hoax.” From the 1980s forward, the stated priorities of environmental policy vacillated repeatedly between presidential administrations and Congresses supporting continuation and expansion of environmental protection and preservation policies versus those seeking to weaken or even reverse protections in favor of private-property rights and more damaging uses of resources. Yet despite these apparent shifts, the basic environmental laws and policies enacted during the 1970s remained largely in place: political gridlock, in effect, maintained the status quo, with the addition of a very few innovations such as “cap and trade” policies. One reason was that environmental policies retained considerable latent public support: in electoral campaigns, they were often overshadowed by economic and other issues, but they still aroused widespread support in their defense when threatened. Another reason was that decisions by the courts also continued to reaffirm many existing policies and to reject attempts to dismantle them. With the election of Donald Trump in 2016, along with conservative majorities in both houses of Congress, US environmental policy came under the most hostile and wide-ranging attack since its origins. More than almost any other issue, the incoming president targeted environmental policy for rhetorical attacks and budget cuts, and sought to eradicate the executive policies of his predecessor, weaken or rescind protective regulations, and undermine the regulatory and even the scientific capacity of the federal environmental agencies. In the early 21st century, it is as yet unclear how much of his agenda will actually be accomplished, or whether, as in past attempts, much of it will ultimately be blocked by Congress, the courts, public backlash, and business and state government interests seeking stable policy expectations rather than disruptive deregulation.
“Corporate social responsibility” is a term that first began to circulate widely in the late 1960s and early 1970s. Though it may seem to be a straightforward concept, the phrase can imply a range of activities, from minority hiring initiatives and environmentally sound operations, to funding local nonprofits and cultural institutions. The idea appeared to have developed amid increasing demands made of corporations by a number of different groups, such as the consumer movement. However, American business managers engaged in many of these practices well before that phrase was coined. As far back as the early 19th century, merchants and business owners envisioned a larger societal role. However, broader political, social, and economic developments, from the rise of Gilded Age corporations to the onset of the Cold War, significantly influenced understandings of business social responsibility. Likewise, different managers and corporations have had different motives for embracing social responsibility initiatives. Some embraced social responsibility rhetoric as a public relations tool. Others saw the concept as a way to prevent government regulation. Still others undertook social responsibility efforts because they fit well with their own socially progressive ethos. Though the terms and understandings of a business’s social responsibilities have shifted over time, the basic idea has been a perennial feature of commercial life in the United States.
The United States underwent massive economic change in the four decades following the end of the American Civil War in 1865. A vibrant industrial economy catapulted the nation to a world leader in mining and manufacturing; the agricultural sector overcame organizational and technological challenges to increase productivity; and innovations in financial, accounting, and marketing methods laid the foundation for a powerful economy that would dominate the globe in the 20th century. The emergence of this economy, however, did not come without challenges. Workers in both the industrial and agricultural sectors offered an alternative path for the American economy in the form of labor strikes and populist reforms; their attempts to disrupt the growing concentration of wealth and power played out both at the polls and on the factory floor. Movements that sought to regulate the growth of large industrial firms and railroads failed to produce much meaningful policy, even as they raised major critiques of the emerging economic order. In the end, a form of industrial capitalism emerged that used large corporate structures, relatively weak unions, and limited government interventions to build a dynamic, but unbalanced, economic order in the United States.