

Article

Daniel Pope

Nuclear power in the United States has had an uneven history and faces an uncertain future. Having promised in the 1950s electricity “too cheap to meter,” nuclear power has failed to come close to that goal, although it has carved out approximately a 20 percent share of American electrical output. Two decades after World War II, General Electric and Westinghouse offered electric utilities completed “turnkey” plants at a fixed cost, hoping these “loss leaders” would create demand for further projects. During the 1970s the industry boomed, but it also brought forth a large-scale protest movement. Since then, partly because of that movement and because of the drama of the 1979 Three Mile Island accident, nuclear power has plateaued, with only one reactor completed since 1995. Several factors account for the failed promise of nuclear energy. Civilian power has never fully shaken its military ancestry or its connotations of weaponry and warfare. American reactor designs borrowed from nuclear submarines. Concerns about weapons proliferation stymied industry hopes for breeder reactors that would produce plutonium as a byproduct. Federal regulatory agencies dealing with civilian nuclear energy also have military roles. Those connections have provided some advantages to the industry, but they have also generated fears. Not surprisingly, the “anti-nukes” movement of the 1970s and 1980s was closely bound to movements for peace and disarmament. The industry’s disappointments must also be understood in a wider energy context. Nuclear power grew rapidly in the late 1960s and 1970s as domestic petroleum output shrank and environmental objections to coal came to the fore. At the same time, however, slowing economic growth and an emphasis on energy efficiency reduced demand for new power output. In the 21st century, new reactor designs and the perils of fossil-fuel-caused global warming have once again raised hopes for nuclear power, but natural gas and renewables now compete favorably against new nuclear projects. Economic factors have been the main reason that nuclear power has stalled over the last forty years. Highly capital intensive, nuclear projects have all too often taken too long to build and cost far more than initially forecast. The lack of standard plant designs, the need for expensive safety and security measures, and the inherent complexity of nuclear technology have all kept nuclear power from making a persuasive case on cost. Nevertheless, nuclear power may survive and even thrive if the nation commits to curtailing fossil fuel use or if, as the Trump administration proposes, it opts for subsidies to keep reactors operating.

Article

Sai Felicia Krishna-Hensel

Throughout history, technology has played a significant role in international relations (IR). Technological development is an important factor underlying much of humanity’s social, economic, and political development, as well as interstate and interregional relationships. From the earliest tool industries of the Paleolithic and Neolithic periods to the present day, technology has been an integral component of the transformative processes that resulted in the organization, expansion, and establishment of distinctive societies. The presence or absence of equal access to technology has often determined the nature of relationships between societies and civilizations. Technology increases the options available to policymakers in their pursuit of the goals of the state, but it also complicates their decision making. The question of whether, and how much, technological change has influenced IR has been the subject of considerable debate. Scholars are divided on the emphasis that should be placed on technological progress as an independent variable in the study of relations between states and as a factor in analyzing power configurations in the international system. Among the scientific and technological revolutions believed to have contributed to the changing nature of power and of relations between states are the revolutions in transportation and communication, the industrial revolution, the nuclear revolution, and the contemporary information revolution. Future research should focus on how these technological changes will influence debates on power, deterrence, diplomacy, and other instruments of IR.

Article

The immediate aftermath of a great urban earthquake is a dramatic and terrible event, comparable to a massive terrorist attack. Yet the shocking impact soon fades from the public mind and receives surprisingly little attention from historians, unlike wars and human atrocities. In 1923, the Great Kanto earthquake and its subsequent fires demolished most of Tokyo and Yokohama and killed around 140,000 Japanese: a level of devastation and fatalities comparable with the atomic bombings of Hiroshima and Nagasaki in 1945. But the latter event has infinitely more resonance in public consciousness and historical studies than the former. Indeed, most people would be challenged to name a single earthquake with an indisputable historical impact, including even the most famous of all earthquakes: the San Francisco earthquake and fire of 1906. In truth, however, great earthquakes, from ancient times—as recorded by Greek and biblical writers—to the present day, have had major cultural, economic, and political consequences—often a combination of all three—some of which were beneficial. Thus, the current prime minister of India owes his election in 2014 to an earthquake that devastated part of his home state of Gujarat in 2001 and led to the state’s striking economic growth. The martial law imposed on Tokyo and Yokohama after the 1923 earthquake gave new authority to the Japanese army, which eventually took over the Japanese government and led Japan to war with China and the world. The destruction of San Francisco in 1906 produced a boom in rebuilding and in the financial and technological development of the surrounding area on the San Andreas Fault, including what became Silicon Valley. A great earthquake in Venezuela in 1812 was the principal cause of the temporary defeat of its independence leader Simon Bolivar by the Spanish colonial regime, but his subsequent exile led to his permanent freeing of Bolivia, Colombia, Ecuador, Peru, and Venezuela from Spanish rule. The catastrophic Lisbon earthquake of 1755—as well known in the early 19th century as the 1945 atomic bombings are today—was a pivotal factor in the freeing of Enlightenment science from Catholic religious orthodoxy, as epitomized by Voltaire’s satirical novel Candide, written in response to the earthquake. Even the minor earthquakes in Britain in 1750, the so-called Year of Earthquakes, produced the earliest scientific understanding of earthquakes, published by the Royal Society: the beginning of seismology. The long-term impact of a great earthquake depends on its epicenter, magnitude, and timing—and also on human factors: the political, social, intellectual, religious, and cultural resources specific to a region’s history. Each earthquake-struck society offers its own particular lesson, and yet, taken together, such earth-shattering events have important shared consequences for the history of the world.

Article

Frank C. Zagare

Perfect deterrence theory and classical deterrence theory are two theoretical frameworks that have divergent empirical implications and dissimilar policy recommendations. In perfect deterrence theory, threat credibility plays a central role in the operation of both direct and extended deterrence relationships. But credible threats are neither necessary nor sufficient for deterrence to prevail, and under certain conditions, the presence of a credible threat may actually undermine deterrence. In perfect deterrence theory, the cost of conflict and status quo evaluations are also important strategic variables. Classical deterrence theorists tend to fixate on the former and ignore the latter. This theoretical oversight precludes a nuanced understanding of the dynamics of deterrence.
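The abstract’s central claim, that credible threats are neither necessary nor sufficient for deterrence, can be made concrete with a minimal expected-utility sketch. The toy model below is a hypothetical illustration, not a reproduction of Zagare’s formal games, and every parameter name and payoff number in it is assumed for the example: a challenger compares its value for the status quo with the expected value of challenging, given the probability that the defender carries out its retaliatory threat (its credibility).

```python
# Minimal sketch (assumed payoffs, not Zagare's model) of a one-shot
# deterrence encounter: a challenger either accepts the status quo or
# challenges; a challenged defender resists with probability p_resist.

def challenger_expected_value(p_resist, conflict, concession):
    """Expected value to the challenger of upsetting the status quo.

    p_resist   -- probability the defender carries out its threat (credibility)
    conflict   -- challenger's payoff if the defender resists (cost of conflict)
    concession -- challenger's payoff if the defender backs down
    """
    return p_resist * conflict + (1 - p_resist) * concession

def deterrence_holds(p_resist, status_quo, conflict, concession):
    """Deterrence prevails when the challenger prefers the status quo."""
    return status_quo >= challenger_expected_value(p_resist, conflict, concession)

# (a) Fully credible threat, yet a challenger that values the status quo
#     very little still prefers even certain conflict: credibility is not
#     sufficient for deterrence.
print(deterrence_holds(p_resist=1.0, status_quo=-5, conflict=-2, concession=4))  # False

# (b) A completely incredible threat, but a challenger satisfied with the
#     status quo does not challenge anyway: credibility is not necessary.
print(deterrence_holds(p_resist=0.0, status_quo=5, conflict=-2, concession=4))   # True
```

The two runs mirror the abstract’s point: in (a) a fully credible threat fails to deter a deeply dissatisfied challenger, while in (b) deterrence holds even without a credible threat, which is why the cost of conflict and status quo evaluations matter alongside credibility.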