United States Financial History
Christy Ford Chapin
The history of US finance—spanning from the republic’s founding through the 2007–2008 financial crisis—exhibits two primary themes. The first theme is that Americans have frequently expressed suspicion of financiers and bankers. This abiding distrust has generated ferocious political debates through which voters either have opposed government policies that empower financial interests or have advocated proposals to steer financial institutions toward serving the public. A second, related theme that emerges from this history is that government policy—both state and federal—has shaped and reshaped financial markets. This pattern is characteristic of American capitalism, which rarely operates as laissez-faire market competition; instead, interactions between government and private enterprise structure each economic sector in a distinctive manner. International comparison illustrates this premise. Because state and federal policies produced a highly splintered commercial banking sector that discouraged the development of large, consolidated banks, American big business has frequently had to rely on securities financing. This shareholder model creates a different corporate form than a commercial-bank model. In Germany, for example, large banks often provide firms with financing as well as business consulting and management strategy services. In this commercial-bank model, German business executives cede some autonomy to bankers but also have more ability to engage in long-term planning than do American executives, who tend to cater to short-term stock market demands. Within this public–private financial system, two subthemes appear: fragmented institutional arrangements and welfare programming. Because of government policy, the United States, compared to other western nations, has an unusually fragmented financial system.
Adding to this complexity, some of these institutions can be either state or federally chartered; meanwhile, the commercial banking sector has traditionally hosted thousands of banks, ranging from urban, money-center institutions to small unit banks. Space constraints exclude examination of numerous additional organizations, such as venture capital firms, hedge funds, securities brokers, mutual funds, real estate investment trusts, and mortgage brokers. The US regulatory framework reflects this fragmentation, as a bevy of federal and state agencies supervise the financial sector. Since policymakers passed deregulatory measures during the 1980s and 1990s, the sector has moved toward consolidation and universal banking, which permits a large assortment of financial services to coexist under one institutional umbrella. Nevertheless, the US financial sector remains more fragmented than that of other industrialized countries. The public–private financial system has also delivered many government benefits, revealing that the American welfare state is perhaps more robust than scholars often claim. Welfare programming through financial policy tends to be “hidden,” frequently because significant portions of benefits provision reside “off the books,” either as government-sponsored enterprises that are nominally private or as government guarantees in the place of direct spending. Yet these programs have heavily affected both their beneficiaries and the nation’s economy. The government, for example, has directed significant resources toward the construction and maintenance of a massive farm credit system. Moreover, policymakers established mortgage insurance and residential financing programs, creating an economy and consumer culture that revolve around home ownership. While both agricultural and mortgage programs have helped low-income beneficiaries, they have dispensed more aid to middle-class and corporate recipients.
These programs, along with the institutional configuration of the banking and credit system, demonstrate just how important US financial policy has been to the nation’s unfolding history.
The Rise of the Sunbelt South
Katherine R. Jewell
The term “Sunbelt” connotes a region defined by its environment. “Belt” suggests the broad swath of states stretching from the Atlantic coast across Texas and Oklahoma and the Southwest to southern California. “Sun” suggests its temperate—even hot—climate. Yet in contrast to the industrial northeastern and midwestern Rust Belt, or perhaps “Frost Belt,” the term’s emergence at the end of the 1960s evoked an optimistic, opportunistic brand. Free from snowy winters, cooled by air conditioning, and beckoning with Florida’s sandy beaches and California’s surf, the Sunbelt states drew more American migrants in the 1950s and 1960s than did the deindustrializing centers of the North and East. But the term “Sunbelt” also captures an emerging political culture that defies regional boundaries. The term originated more from a diagnosis of this political climate than of an environmental one, associated with the new patterns of migration in the mid-20th century. The term defined a new regional identity: politically, economically, in policy, demographically, and socially, as well as environmentally. The Sunbelt received federal money in an unprecedented manner, particularly because of rising Cold War defense spending on research and military bases, and its urban centers grew in patterns unlike those in the old Northeast and Midwest, thanks to the policy innovations wrought by local boosters, business leaders, and politicians, which defined politics associated with the region after the 1970s. Yet since the term’s coinage, scholars have debated whether the Sunbelt’s emergence reflects a new regional identity or something else.
Financial Crises in American History
Christoph Nitschke and Mark Rose
U.S. history is full of frequent and often devastating financial crises. They have coincided with business cycle downturns, but they have been rooted in the political design of markets. Financial crises have also drawn on changes in the underpinning cultures, knowledge systems, and ideologies of marketplace transactions. The United States’ political and economic development spawned, guided, and modified general factors in crisis causation. Broadly viewed, the reasons for financial crises have been recurrent in their form but historically specific in their configuration: causation has always revolved around relatively sudden reversals of investor perceptions of commercial growth, stock market gains, monetary availability, currency stability, and political predictability. The United States’ 19th-century financial crises, which happened in rapid succession, are best described as disturbances tied to market making, nation building, and empire creation. Ongoing changes in America’s financial system aided rapid national growth through the efficient distribution of credit to a spatially and organizationally changing economy. But complex political processes—whether Western expansion, the development of incorporation laws, or the nation’s foreign relations—also underlay the easy availability of credit. The relationship between systemic instability and politically enacted ideas and ideals of economic growth was then mirrored in the 20th century. Following the “Golden Age” of crash-free capitalism in the two decades after the Second World War, the recurrence of financial crises in American history coincided with the dominance of the market in statecraft. Banking and other crises were a product of political economy. The Global Financial Crisis of 2007–2008 not only once again changed the regulatory environment in an attempt to correct past mistakes, but also considerably broadened the discussion of financial crises as academic topics.
Foreign Trade Policy from the Revolution to World War I
Economic nationalism tended to dominate U.S. foreign trade policy throughout the long 19th century, from the end of the American Revolution to the beginning of World War I, owing to a pervasive American sense of economic and geopolitical insecurity and American fear of hostile powers, especially the British but also the French and Spanish and even the Barbary States. Following the U.S. Civil War, leading U.S. protectionist politicians sought to curtail European trade policies and to create a U.S.-dominated customs union in the Western Hemisphere. American proponents of trade liberalization increasingly found themselves outnumbered in the halls of Congress, as the “American System” of economic nationalism grew in popularity alongside the perceived need for foreign markets. Protectionist advocates in the United States viewed the American System as a panacea that not only promised to provide the federal government with revenue but also to artificially insulate American infant industries from undue foreign-market competition through high protective tariffs and subsidies, and to retaliate against real and perceived threats to U.S. trade. Throughout this period, the United States itself underwent a great struggle over foreign trade policy. By the late 19th century, the era’s boom-and-bust global economic system led to a growing perception that the United States needed more access to foreign markets as an outlet for the country’s surplus goods and capital. But whether the United States would obtain foreign market access through free trade or through protectionism led to a great debate over the proper course of U.S. foreign trade policy. By the time that the United States acquired a colonial empire from the Spanish in 1898, this same debate over U.S. foreign trade policy had effectively merged into debates over the course of U.S. imperial expansion. The country’s more expansionist-minded economic nationalists came out on top. 
The overwhelming 1896 victory of William McKinley—the Republican Party’s “Napoleon of Protection”—marked the beginning of a substantial expansion of U.S. foreign trade through a mixture of protectionism and imperialism in the years leading up to World War I.
Women in Early American Economy
Jane T. Merritt
From the planter societies and subsistence settlements of the 17th century to the global markets of the late 18th century, white, black, and Indian women participated extensively in the early American economy. As the colonial world gave way to an independent nation and household economies yielded to cross-Atlantic commercial networks, women played an important role as consumers and producers. Was there, however, a growing gendered divide in the American economy by the turn of the 19th century? Were there more restrictions on women’s business activities, property ownership, work lives, consumer demands, or productive skills? Possibly, we ask the wrong questions when exploring women’s history. By posing questions that compare the past with present conditions, we miss the more nuanced and shifting patterns that made up the variety of women’s lives. Whether rural or urban, rich or poor, free or enslaved, women’s legal and marital status dictated some basic parameters of how they operated within the early American economy. But despite these boundaries, or perhaps because of them, women created new strategies to meet the economic needs of households, families, and themselves. As entrepreneurs they brought in lodgers or operated small businesses that generated extra income. As producers they finagled the materials necessary to create items for home use and to sell at market. As consumers, women, whether free or enslaved, demanded goods from merchants and negotiated prices that fit their budgets. As laborers, these same women translated myriad skills into wages or exchanged labor for goods. In all these capacities, women calculated, accumulated, and survived in the early American economy.
Food in 20th-Century American Cities
Changing foodways, the consumption and production of food, access to food, and debates over food shaped the nature of American cities in the 20th century. As American cities transformed from centers of industrialization at the start of the century to post-industrial societies at its end, food cultures in urban America shifted in response to the ever-changing urban environment. Cities remained centers of food culture, diversity, and food reform despite these shifts. Growing populations and waves of immigration changed the nature of food cultures throughout the United States in the 20th century. These changes were significant, all contributing to an evolving sense of American food culture. For urban denizens, however, food choice and availability were dictated and shaped by a variety of powerful social factors, including class, race, ethnicity, gender, and laboring status. While cities possessed an abundance of food and a variety of locations in which to consume it, fresh food often remained difficult for the urban poor to obtain as the 20th century ended. As markets expanded from 1900 to 1950, regional geography became a less important factor in determining what types of foods were available. In the second half of the 20th century, even global geography became less important to food choices. Citrus fruit from the West Coast was readily available in northeastern markets near the start of the century, and off-season fruits and vegetables from South America filled shelves in grocery stores by the end of the 20th century. Urban Americans became further disconnected from their food sources, but this dislocation spurred counter-movements that embraced ideas of local, seasonal foods and a rethinking of the city’s relationship with its food sources.
Food in 19th-Century American Cities
Cindy R. Lobel
Over the course of the 19th century, American cities developed from small seaports and trading posts to large metropolises. Not surprisingly, foodways and other areas of daily life changed accordingly. In 1800, the dietary habits of urban Americans were similar to those of the colonial period. Food provisioning was very local. Farmers, hunters, fishermen, and dairymen from a few miles away brought food by rowboats, ferryboats, and horse carts to centralized public markets within established cities. Dietary options were seasonal as well as regional. Few public dining options existed outside of taverns, which offered lodging as well as food. Most Americans, even in urban areas, ate their meals at home, which in many cases were attached to their workshops, countinghouses, and offices. These patterns changed significantly over the course of the 19th century, thanks largely to demographic changes and technological developments. By the turn of the 20th century, urban Americans relied on a food-supply system that was highly centralized and in the throes of industrialization. Cities developed complex restaurant sectors, and majority-immigrant populations dramatically shaped and reshaped cosmopolitan food cultures. Furthermore, with growing populations, lax regulation, and corrupt political practices in many cities, issues arose periodically concerning the safety of the food supply. In sum, the roots of today’s urban food systems were laid down over the course of the 19th century.
Domestic Workers in U.S. History
Domestic work was, until 1940, the largest category of women’s paid labor. Despite the number of women who performed domestic labor for pay, the wages and working conditions were often poor. Workers labored long hours for low pay and were largely left out of state labor regulations. The association of domestic work with women’s traditional household labor, defined as a “labor of love” rather than as real work, and its centrality to southern slavery, have contributed to its low status. As a result, domestic work has long been structured by class, racial, and gendered hierarchies. Nevertheless, domestic workers have time and again done their best to resist these conditions. Although traditional collective bargaining techniques did not always translate to the domestic labor market, workers found various collective and individual methods to insist on higher wages and demand occupational respect, ranging from quitting to “pan-toting” to forming unions.