Article
Credit Reporting and the History of Commercial Surveillance in America
Josh Lauer
The first credit reporting organizations emerged in the United States during the 19th century to address problems of risk and uncertainty in an expanding market economy. Early credit reporting agencies assisted merchant lenders by collecting and centralizing information about the business activities and reputations of unknown borrowers throughout the country. These agencies quickly evolved into commercial surveillance networks, amassing huge archives of personal information about American citizens and developing credit rating systems to rank them. Shortly after the Civil War, separate credit reporting organizations devoted to monitoring consumers, rather than businesspeople, also began to emerge to assist credit-granting retailers. By the early 20th century, hundreds of local credit bureaus dissected the personal affairs of American consumers, forming the genesis of a national consumer credit surveillance infrastructure.
The history of American credit reporting reveals fundamental links between the development of modern capitalism and contemporary surveillance society. These connections became increasingly apparent during the late 20th century as technological advances in computing and networked communication fueled the growth of new information industries, raising concerns about privacy and discrimination. These connections and concerns, however, are not new. They can be traced to 19th-century credit reporting organizations, which turned personal information into a commodity and converted individual biographies into impersonal financial profiles and risk metrics. As these disembodied identities and metrics became authoritative representations of one’s reputation and worth, they exerted real effects on one’s economic life chances and social legitimacy. While drawing attention to capitalism’s historical twin, surveillance, the history of credit reporting illuminates the origins of surveillance-based business models that became ascendant during the 21st century.
Article
The Tuskegee Syphilis Study
Susan M. Reverby
Between 1932 and 1972, the US Public Health Service (PHS) ran the Tuskegee Study of Untreated Syphilis in the Negro Male in Macon County, Alabama, to learn more about the effects of untreated syphilis on African Americans and to see whether the standard heavy metal treatments advocated at the time were efficacious in the disease’s late latent stage. Syphilis is a sexually transmitted infection that can also be passed from a mother to her fetus. It is contagious in its first two stages, but usually not in its third, late latent stage. Syphilis can be, although it is not always, fatal, and usually causes serious cardiovascular or neurological damage. To study the disease, the PHS recruited 624 African American men: 439 diagnosed with the latent stage of the disease and 185 without the disease who were to act as controls in the experiment. However, the men were not told they were participating in a medical experiment, nor were they asked to give their consent to be used as subjects of medical research. Instead, the PHS led the men to believe that they were being treated for their syphilis through the provision of aspirin, iron tonics, vitamins, and diagnostic spinal taps, labeled a “special treatment” for what was colloquially called “bad blood.” Indeed, even when penicillin became widely available by the early 1950s as a cure for syphilis, the researchers continued the study and tried to keep the men from obtaining treatment, though not always successfully.
Although a number of health professionals raised objections to the study over the years, and thirteen articles about it were published in various medical journals, it continued unobstructed until 1972, when a journalist exposed the full implications of the study and a national uproar ensued. The widespread media coverage resulted in a successful lawsuit, federally funded health care for the remaining men and their syphilis-positive wives and children, Congressional hearings, a federal report, and changes to the legislation governing informed consent for medical research. The government officially closed the study in 1972. In 1996, a Legacy Committee requested a formal apology from the federal government, which took place at the White House on May 16, 1997.
Rumors have surrounded the study since its public exposure, especially the beliefs that the government gave healthy men syphilis in order to conduct the research, rather than recruiting men who already had the disease, and that all the men in the study were left untreated decade after decade. In its public life, the study often serves as a metaphor for mistrust of medical care and government research, memorialized in popular culture through music, plays, poems, and films.
Article
The Information Economy
Jamie L. Pietruska
The term “information economy” first came into widespread usage during the 1960s and 1970s to identify a major transformation in the postwar American economy in which manufacturing had been eclipsed by the production and management of information. However, the information economy first identified in the mid-20th century was one of many information economies that have been central to American industrialization, business, and capitalism for over two centuries. The emergence of information economies can be understood in two ways: as a continuous process in which information itself became a commodity, and as an uneven and contested—not inevitable—process in which economic life became dependent on various forms of information. The production, circulation, and commodification of information have historically been essential to the growth of American capitalism and to creating and perpetuating—and at times resisting—structural racial, gender, and class inequities in American economy and society. Yet information economies, while uneven and contested, also became more bureaucratized, quantified, and commodified from the 18th century to the 21st century.
The history of information economies in the United States is also characterized by the importance of systems, networks, and infrastructures that link people, information, capital, commodities, markets, bureaucracies, technologies, ideas, expertise, laws, and ideologies. The materiality of information economies is historically inextricable from the production of knowledge about the economy, and the concepts of “information” and “economy” are themselves historical constructs that change over time. The history of information economies is not a teleological story of progress in which increasing bureaucratic rationality, efficiency, predictability, and profit inevitably led to the 21st-century age of Big Data. Nor is it the story of a single, coherent, uniform information economy. The creation of multiple information economies—at different scales in different regions—was a contingent, contested, often inequitable process that did not automatically democratize access to objective information.
Article
Chemical and Biological Weapons Policy
Thomas I. Faith
Chemical and biological weapons represent two distinct types of munitions that share some common policy implications. While chemical weapons and biological weapons differ in their development, manufacture, use, and the methods necessary to defend against them, they are commonly united in matters of policy as “weapons of mass destruction,” along with nuclear and radiological weapons. Both chemical and biological weapons have the potential to cause mass casualties, require some technical expertise to produce, and can be employed effectively by both nation-states and non-state actors. U.S. policies in the early 20th century were informed by preexisting taboos against poison weapons and the American Expeditionary Forces’ experiences during World War I. The United States promoted restrictions on the use of chemical and biological weapons through World War II, but increased research and development work at the outset of the Cold War. In response to domestic and international pressures during the Vietnam War, the United States drastically curtailed its chemical and biological weapons programs and began supporting international arms control efforts such as the Biological and Toxin Weapons Convention and the Chemical Weapons Convention. U.S. chemical and biological weapons policies significantly influence U.S. policies in the Middle East and the fight against terrorism.
Article
Civilian Nuclear Power
Daniel Pope
Nuclear power in the United States has had an uneven history and faces an uncertain future. Promising in the 1950s electricity “too cheap to meter,” nuclear power has failed to come close to that goal, although it has carved out approximately a 20 percent share of American electrical output. Two decades after World War II, General Electric and Westinghouse offered electric utilities completed “turnkey” plants at a fixed cost, hoping these “loss leaders” would create a demand for further projects. During the 1970s the industry boomed, but it also brought forth a large-scale protest movement. Since then, partly because of that movement and because of the drama of the 1979 Three Mile Island accident, nuclear power has plateaued, with only one reactor completed since 1995.
Several factors account for the failed promise of nuclear energy. Civilian power has never fully shaken its military ancestry or its connotations of weaponry and warfare. American reactor designs borrowed from nuclear submarines. Concerns about weapons proliferation stymied industry hopes for breeder reactors that would produce plutonium as a byproduct. Federal regulatory agencies dealing with civilian nuclear energy also have military roles. Those connections have provided some advantages to the industry, but they have also generated fears. Not surprisingly, the “anti-nukes” movement of the 1970s and 1980s was closely bound to movements for peace and disarmament.
The industry’s disappointments must also be understood in a wider energy context. Nuclear grew rapidly in the late 1960s and 1970s as domestic petroleum output shrank and environmental objections to coal came to the fore. At the same time, however, slowing economic growth and an emphasis on energy efficiency reduced demand for new power output. In the 21st century, new reactor designs and the perils of fossil-fuel-caused global warming have once again raised hopes for nuclear, but natural gas and renewables now compete favorably against new nuclear projects.
Economic factors have been the main reason that nuclear has stalled in the last forty years. Highly capital intensive, nuclear projects have all too often taken too long to build and cost far more than initially forecast. The lack of standard plant designs, the need for expensive safety and security measures, and the inherent complexity of nuclear technology have all contributed to nuclear power’s inability to make its case on cost persuasively. Nevertheless, nuclear power may survive and even thrive if the nation commits to curtailing fossil fuel use or if, as the Trump administration proposes, it opts for subsidies to keep reactors operating.
Article
The Environment in the Atomic Age
Rachel Rothschild
The development of nuclear technology had a profound influence on the global environment following the Second World War, with ramifications for scientific research, the modern environmental movement, and conceptualizations of pollution more broadly. Government sponsorship of studies on nuclear fallout and waste dramatically reconfigured the field of ecology, leading to the widespread adoption of the ecosystem concept and new understandings of food webs as well as biogeochemical cycles. These scientific endeavors of the atomic age came to play a key role in the formation of environmental research to address a variety of pollution problems in industrialized countries. Concern about invisible radiation served as a foundation for new ways of thinking about chemical risks for activists like Rachel Carson and Barry Commoner as well as many scientists, government officials, and the broader public. Their reservations were not unwarranted, as nuclear weapons and waste resulted in radioactive contamination of the environment around nuclear-testing sites and especially fuel-production facilities. Scholars date the start of the “Anthropocene” period, during which human activity began to have substantial effects on the environment, variously from the beginning of human farming roughly 8,000 years ago to the emergence of industrialism in the 19th century. But all agree that the advent of nuclear weapons and power has dramatically changed the potential for environmental alterations. Our ongoing attempts to harness the benefits of the atomic age while lessening its negative impacts will need to confront the substantial environmental and public-health issues that have plagued nuclear technology since its inception.
Article
Technology and the Environment
Timothy James LeCain
Technology and environmental history are both relatively young disciplines among Americanists, and during their early years they developed as distinctly different and even antithetical fields, at least in topical terms. Historians of technology initially focused on human-made and presumably “unnatural” technologies, whereas environmental historians focused on nonhuman and presumably “natural” environments. However, in more recent decades, both disciplines have moved beyond this oppositional framing. Historians of technology increasingly came to view anthropogenic artifacts such as cities, domesticated animals, and machines as extensions of the natural world rather than its antithesis. Even the British and American Industrial Revolutions constituted not a distancing of humans from nature, as some scholars have suggested, but rather a deepening entanglement with the material environment. At the same time, many environmental historians were moving beyond the field’s initial emphasis on the ideal of an American and often Western “wilderness” to embrace a concept of the environment as including humans and productive work. Nonetheless, many environmental historians continued to emphasize the independent agency of the nonhuman environment of organisms and things. This insistence that not everything could be reduced to human culture remained the field’s most distinctive feature.
Since the turn of the millennium, the two fields have increasingly come together in a variety of synthetic approaches, including Actor Network Theory, envirotechnical analysis, and neomaterialist theory. As the influence of the cultural turn has waned, the environmental historians’ emphasis on the independent agency of the nonhuman has come to the fore, gaining wider influence as it is applied to the dynamic “nature” or “wildness” that some scholars argue exists within both the technological and the natural environment. The foundational distinctions between the history of technology and environmental history may now be giving way to more materially rooted attempts to understand how a dynamic hybrid environment helps to create human history in all of its dimensions—cultural, social, and biological.
Article
Nuclear Arms Control in US Foreign Policy
Jonathan Hunt
The development of military arms harnessing nuclear energy for mass destruction has inspired continual efforts to control them. Since 1945, the United States, the Soviet Union, the United Kingdom, France, the People’s Republic of China (PRC), Israel, India, Pakistan, North Korea, and South Africa have acquired control over these powerful weapons, though Pretoria dismantled its small cache in 1989 and Russia inherited the Soviet arsenal in 1996. Throughout this period, Washington sought to limit its nuclear forces in tandem with those of Moscow, prevent new states from fielding them, discourage their military use, and even permit their eventual abolition.
Scholars disagree about what explains the United States’ distinct approach to nuclear arms control. The history of U.S. nuclear policy treats intellectual theories and cultural attitudes alongside technical advances and strategic implications. The central debate is one of structure versus agency: whether the weapons’ sheer power, or historical actors’ attitudes toward that power, drove nuclear arms control. Among those who emphasize political responsibility, there are two further disagreements: (1) the relative influence of domestic protest, culture, and politics; and (2) whether U.S. nuclear arms control aimed first at securing the peace by regulating global nuclear forces or at bolstering American influence in the world.
The intensity of nuclear arms control efforts tended to rise or fall with the likelihood of nuclear war. Harry Truman’s faith in the country’s monopoly on nuclear weapons caused him to sabotage early initiatives, while Dwight Eisenhower’s belief in nuclear deterrence led in a similar direction. Fears of a U.S.-Soviet thermonuclear exchange mounted in the late 1950s, stoked by atmospheric nuclear testing and widespread radioactive fallout, which stirred protest movements and diplomatic initiatives. The spread of nuclear weapons to new states motivated U.S. presidents (John Kennedy in the vanguard) to mount a concerted campaign against “proliferation,” climaxing with the 1968 Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Richard Nixon was an exception. His reasons for signing the Strategic Arms Limitation Treaty (SALT I) and the Anti-Ballistic Missile (ABM) Treaty with Moscow in 1972 were strategic: to buttress the country’s geopolitical position as U.S. armed forces withdrew from Southeast Asia. The rise of protest movements and Soviet economic difficulties after Ronald Reagan entered the Oval Office brought about two more landmark U.S.-Soviet accords—the 1987 Intermediate-Range Nuclear Forces (INF) Treaty and the 1991 Strategic Arms Reduction Treaty (START)—the first occasions on which the superpowers eliminated nuclear weapons through treaty. The country’s attention swung to proliferation after the Soviet collapse in December 1991, as failed states, regional disputes, and non-state actors grew more prominent. Although controversies over the nuclear programs of Iraq, North Korea, and Iran have since erupted, Washington and Moscow continued to reduce their arsenals and refine their nuclear doctrines even as President Barack Obama proclaimed his support for a nuclear-free world.
Article
Universities in America since 1945
Christopher P. Loss
Until World War II, American universities were widely regarded as good but not great centers of research and learning. This changed completely under the press of wartime, when the federal government pumped billions into military research, anchored by the development of the atomic bomb and radar, and into the education of returning veterans under the GI Bill of 1944. The abandonment of decentralized federal–academic relations marked the single most important development in the history of the modern American university. While it is true that the government had helped to coordinate and fund the university system prior to the war—most notably the country’s network of public land-grant colleges and universities—government involvement after the war became much more hands-on, eventually leading to direct financial support for and legislative interventions on behalf of core institutional activities, not only at the public land grants but at the nation’s mix of private institutions as well. However, the reliance on public subsidies and legislative and judicial interventions of one kind or another ended up being a double-edged sword. State action made possible the expansion in research and in student access that became the hallmarks of the post-1945 American university, but it also created a rising tide of expectations for continued support that has proven difficult to meet in fiscally stringent times and in the face of ongoing political fights over the government’s proper role in supporting the sector.
Article
The Space Race and American Foreign Relations
Teasel Muir-Harmony
The Soviet Union’s successful launch of the first artificial satellite, Sputnik 1, on October 4, 1957, captured global attention and achieved the initial victory in what would soon become known as the space race. This impressive technological feat and its broader implications for Soviet missile capability rattled the confidence of the American public and challenged the credibility of U.S. leadership abroad. With the U.S.S.R.’s launch of Sputnik, and then of the first human spaceflight in 1961, U.S. policymakers feared that the public and political leaders around the world would view communism as a viable and even more dynamic alternative to capitalism, tilting the global balance of power away from the United States and toward the Soviet Union.
Reactions to Sputnik confirmed what members of the U.S. National Security Council had predicted: the image of scientific and technological superiority had very real, far-reaching geopolitical consequences. By signaling Soviet technological and military prowess, Sputnik solidified the link between space exploration and national prestige, setting a course for nationally funded space exploration for years to come. For over a decade, both the Soviet Union and the United States funneled significant financial and personnel resources into achieving impressive firsts in space, as part of a larger effort to win alliances in the Cold War contest for global influence.
From a U.S. vantage point, the space race culminated in the first Moon landing in July 1969. In 1961, President John F. Kennedy proposed Project Apollo, a lunar exploration program, as a tactic for restoring U.S. prestige in the wake of Soviet cosmonaut Yuri Gagarin’s spaceflight and the failure of the Bay of Pigs invasion. To achieve Kennedy’s goal of sending a man to the Moon and returning him safely to Earth by the end of the decade, the United States mobilized a workforce in the hundreds of thousands. Project Apollo became the most expensive government-funded civilian engineering program in U.S. history, at one point consuming more than 4 percent of the federal budget. The United States’ substantial investment in winning the space race reveals the significant status of soft power in American foreign policy strategy during the Cold War.