Article
Credit Reporting and the History of Commercial Surveillance in America
Josh Lauer
The first credit reporting organizations emerged in the United States during the 19th century to address problems of risk and uncertainty in an expanding market economy. Early credit reporting agencies assisted merchant lenders by collecting and centralizing information about the business activities and reputations of unknown borrowers throughout the country. These agencies quickly evolved into commercial surveillance networks, amassing huge archives of personal information about American citizens and developing credit rating systems to rank them. Shortly after the Civil War, separate credit reporting organizations devoted to monitoring consumers, rather than businesspeople, also began to emerge to assist credit-granting retailers. By the early 20th century, hundreds of local credit bureaus dissected the personal affairs of American consumers, forming the genesis of a national consumer credit surveillance infrastructure.
The history of American credit reporting reveals fundamental links between the development of modern capitalism and contemporary surveillance society. These connections became increasingly apparent during the late 20th century as technological advances in computing and networked communication fueled the growth of new information industries, raising concerns about privacy and discrimination. These connections and concerns, however, are not new. They can be traced to 19th-century credit reporting organizations, which turned personal information into a commodity and converted individual biographies into impersonal financial profiles and risk metrics. As these disembodied identities and metrics became authoritative representations of one’s reputation and worth, they exerted real effects on one’s economic life chances and social legitimacy. While drawing attention to capitalism’s historical twin, surveillance, the history of credit reporting illuminates the origins of surveillance-based business models that became ascendant during the 21st century.
Article
The Tuskegee Syphilis Study
Susan M. Reverby
Between 1932 and 1972, the US Public Health Service (PHS) ran the Tuskegee Study of Untreated Syphilis in the Male Negro in Macon County, Alabama, to learn more about the effects of untreated syphilis on African Americans, and to see if the standard heavy metal treatments advocated at the time were efficacious in the disease’s late latent stage. Syphilis is a sexually transmitted infection that can also be passed from a mother to her fetus. It is contagious in its first two stages, but usually not in its third, late latent stage. Syphilis can be, although it is not always, fatal, and usually causes serious cardiovascular or neurological damage. To study the disease, the PHS recruited 624 African American men: 439 who were diagnosed with the latent stage of the disease and 185 without the disease who were to act as controls in the experiment. However, the men were not told they were participating in a medical experiment, nor were they asked to give their consent to be used as subjects for medical research. Instead, the PHS led the men to believe that they were being treated for their syphilis through the provision of aspirins, iron tonics, vitamins, and diagnostic spinal taps, labeled a “special treatment” for the colloquial term “bad blood.” Indeed, even when penicillin became widely available as a cure for syphilis in the early 1950s, the researchers continued the study and tried, though not always successfully, to keep the men from treatment.
Although a number of health professionals raised objections to the study over the years, and thirteen articles about it were published in various medical journals, the study continued unobstructed until 1972, when a journalist exposed its full implications and a national uproar ensued. The widespread media coverage resulted in a successful lawsuit, federally paid health care for the remaining men and their syphilis-positive wives and children, Congressional hearings, a federal report, and changes to the legislation concerning informed consent for medical research. The government officially closed the study in 1972. In 1996, a Legacy Committee requested a formal apology from the federal government, which took place at the White House on May 16, 1997.
Rumors have surrounded the study since its public exposure, especially the beliefs that the government gave healthy men syphilis in order to conduct the research, rather than recruiting men who already had the disease, and that all of the men in the study were left untreated decade after decade. In its public life, the study often serves as a metaphor for mistrust of medical care and government research, memorialized in popular culture through music, plays, poems, and films.
Article
The Information Economy
Jamie L. Pietruska
The term “information economy” first came into widespread usage during the 1960s and 1970s to identify a major transformation in the postwar American economy in which manufacturing had been eclipsed by the production and management of information. However, the information economy first identified in the mid-20th century was one of many information economies that have been central to American industrialization, business, and capitalism for over two centuries. The emergence of information economies can be understood in two ways: as a continuous process in which information itself became a commodity, and as an uneven and contested—not inevitable—process in which economic life became dependent on various forms of information. The production, circulation, and commodification of information have historically been essential to the growth of American capitalism and to creating and perpetuating—and at times resisting—structural racial, gender, and class inequities in American economy and society. Yet information economies, while uneven and contested, also became more bureaucratized, quantified, and commodified from the 18th century to the 21st century.
The history of information economies in the United States is also characterized by the importance of systems, networks, and infrastructures that link people, information, capital, commodities, markets, bureaucracies, technologies, ideas, expertise, laws, and ideologies. The materiality of information economies is historically inextricable from the production of knowledge about the economy, and the concepts of “information” and “economy” are themselves historical constructs that change over time. The history of information economies is not a teleological story of progress in which increasing bureaucratic rationality, efficiency, predictability, and profit inevitably led to the 21st-century age of Big Data. Nor is it the story of a single, coherent, uniform information economy. The creation of multiple information economies—at different scales in different regions—was a contingent, contested, often inequitable process that did not automatically democratize access to objective information.
Article
Chemical and Biological Weapons Policy
Thomas I. Faith
Chemical and biological weapons represent two distinct types of munitions that share some common policy implications. While chemical weapons and biological weapons are different in terms of their development, manufacture, use, and the methods necessary to defend against them, they are commonly united in matters of policy as “weapons of mass destruction,” along with nuclear and radiological weapons. Both chemical and biological weapons have the potential to cause mass casualties, require some technical expertise to produce, and can be employed effectively by both nation-states and non-state actors. U.S. policies in the early 20th century were informed by preexisting taboos against poison weapons and the American Expeditionary Forces’ experiences during World War I. The United States promoted restrictions on the use of chemical and biological weapons through World War II, but increased research and development work at the outset of the Cold War. In response to domestic and international pressures during the Vietnam War, the United States drastically curtailed its chemical and biological weapons programs and began supporting international arms control efforts such as the Biological and Toxin Weapons Convention and the Chemical Weapons Convention. U.S. chemical and biological weapons policies significantly influence U.S. policies in the Middle East and the fight against terrorism.
Article
American Radio and Technological Transformation from Invention to Broadcasting, 1900–1945
Michael A. Krysko
Radio debuted as a wireless alternative to telegraphy in the late 19th century. At its inception, wireless technology could transmit only telegraphic signals and was incapable of broadcasting actual voices. During the 1920s, however, it transformed into a medium primarily identified with entertainment and informational broadcasting. The commercialization of American broadcasting, which included the establishment of national networks and reliance on advertising to generate revenue, became the so-called American system of broadcasting. This transformation demonstrates how technology is shaped by the dynamic forces of the society in which it is embedded. Broadcasting’s aural attributes also engaged listeners in a way that distinguished it from other forms of mass media. Cognitive processes triggered by the disembodied voices and sounds emanating from radio’s loudspeakers illustrate how listeners, grounded in particular social, cultural, economic, and political contexts, made sense of and understood the content with which they were engaged. Through the 1940s, difficulties in expanding the international radio presence of the United States further highlight the significance of surrounding contexts in shaping the technology and in promoting (or discouraging) listener engagement with programming content.
Article
Social Science and US Foreign Affairs
Joy Rohde
Since the social sciences began to emerge as scholarly disciplines in the last quarter of the 19th century, they have frequently offered authoritative intellectual frameworks that have justified, and even shaped, a variety of U.S. foreign policy efforts. They played an important role in U.S. imperial expansion in the late 19th and early 20th centuries. Scholars devised racialized theories of social evolution that legitimated the confinement and assimilation of Native Americans and endorsed civilizing schemes in the Philippines, Cuba, and elsewhere. As attention shifted to Europe during and after World War I, social scientists working at the behest of Woodrow Wilson attempted to engineer a “scientific peace” at Versailles. The desire to render global politics the domain of objective, neutral experts intensified during World War II and the Cold War. After 1945, the social sciences became increasingly central players in foreign affairs, offering intellectual frameworks—like modernization theory—and bureaucratic tools—like systems analysis—that shaped U.S. interventions in developing nations, guided nuclear strategy, and justified the increasing use of the U.S. military around the world.
Throughout these eras, social scientists often reinforced American exceptionalism—the notion that the United States stands at the pinnacle of social and political development, and as such has a duty to spread liberty and democracy around the globe. The scholarly embrace of conventional political values was not the result of state coercion or financial co-optation; by and large social scientists and policymakers shared common American values. But other social scientists used their knowledge and intellectual authority to critique American foreign policy. The history of the relationship between social science and foreign relations offers important insights into the changing politics and ethics of expertise in American public policy.
Article
Infrastructure: Mass Transit in 19th- and 20th-Century Urban America
Jay Young
Mass transit has been part of the urban scene in the United States since the early 19th century. Regular steam ferry service began in New York City in the early 1810s, and horse-drawn omnibuses plied city streets starting in the late 1820s. Expanding networks of horse railways emerged by the mid-19th century, and the electric streetcar became the dominant mass transit vehicle a half century later. During this era, mass transit had a significant impact on American urban development. Mass transit’s importance in the lives of most Americans started to decline with the growth of automobile ownership in the 1920s, except for a temporary rise in ridership during World War II. In the 1960s, congressional subsidies began to reinvigorate mass transit, and heavy-rail systems opened in several cities, followed by light-rail systems in several others over the following decades. Today, concerns about environmental sustainability and urban revitalization have stimulated renewed interest in the benefits of mass transit.