Murphy’s Law says that anything that can go wrong will go wrong. In 1958, as the Cold War’s nuclear-arms race was accelerating, researchers at the think tank RAND worried that something—the ultimate thing—could go wrong with a nuclear weapon.
By that time, at least a dozen nuclear-weapon mishaps had occurred, including accidental drops, jettisons, and crashes. In each case, technical and human safeguards kept the nuclear material from detonating. But the researchers saw ways those safeguards could fail or be deliberately defeated. Thus the question: Could Murphy’s Law go nuclear?
The researchers’ report, “On the Risk of an Accidental or Unauthorized Nuclear Detonation,” was declassified in 2000 and is now on the Internet. It is an interesting example of how to think about the risk of something happening when it has not happened before.
Normally, risks are associated with odds, and odds are based on past observations. For example, during the 1950s, the U.S. Air Force’s B-52 bomber had a number of accidents. Dividing that number by the total B-52 flight-hours gave odds of one accident per 25,000 flight-hours.
Lacking any record of nuclear-detonation accidents, the researchers could not calculate odds the same way. Zero accidents divided by any number of flight-hours is zero, implying no risk at all.
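To see the problem concretely, here is a minimal sketch in Python. The flight-hour and accident counts are illustrative numbers of my own choosing, not figures from the report:

```python
# Naive frequentist rate estimate: observed accidents per flight-hour.
# All numbers below are illustrative, not from the RAND report.

def accident_rate(accidents: int, flight_hours: float) -> float:
    """Estimate the accident rate as observed accidents per flight-hour."""
    return accidents / flight_hours

# B-52 case: some accidents over many flight-hours.
b52_rate = accident_rate(accidents=4, flight_hours=100_000)
print(f"B-52: one accident per {1 / b52_rate:,.0f} flight-hours")  # 25,000

# Accidental-detonation case: zero observed events.
detonation_rate = accident_rate(accidents=0, flight_hours=100_000)
print(f"Detonations: estimated rate = {detonation_rate}")  # 0.0 -- surely too low
```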
The RAND researchers argued the actual risk was not zero. They cited numerous plausible scenarios in which technical flaws, human errors, sabotage, or some combination of these factors could cause a nuclear detonation. The bad scenarios were all highly unlikely, but no one knew how unlikely. In contrast, it was certain that the likelihood of an accident was increasing with the number of nuclear weapons.
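That last point is simple arithmetic. If, purely for illustration, each weapon independently carries some small accident probability p per period, the chance of at least one accident among n weapons is 1 - (1 - p)^n, which climbs steadily with n:

```python
# Probability of at least one accident among n weapons, assuming each
# independently has a small per-period accident probability p.
# The value of p here is invented; the point is the growth with n.

def p_at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: P(at least one accident) = {p_at_least_one(1e-4, n):.4f}")
# n=   100: 0.0100
# n= 1,000: 0.0952
# n=10,000: 0.6323
```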
The researchers also saw increasing risk in a key trend of the time: keeping more bombers on continuous ground alert, or continuously airborne, with nuclear weapons ready to strike. This trend would greatly increase the number of flight-hours in which an accident could occur, as well as the opportunities for other human mistakes.
Finally, the researchers delved deeply into the possibility that an insider could deliberately override safeguards in an act of nuclear sabotage. Precedents existed in non-nuclear sabotage, including acts by military personnel with mental disorders. Against this backdrop, the researchers noted that many then-current nuclear weapons could be detonated single-handedly by someone with the right access and knowledge.
In response to these scenarios, the researchers recommended new efforts to develop technical and process safeguards that would further reduce risk without sacrificing readiness. For example, they suggested a lock for nuclear weapons whose combination would be transmitted only with the order to use the weapon.
The researchers also praised the idea of an acceleration switch, then under development, that would prevent a weapon from detonating while being handled on the ground. To illustrate its value, they cited training incidents that would have caused a nuclear detonation had they occurred in the field.
Unlike many research reports, this one influenced the highest levels of decision-making. As told in Sharon Bertsch McGrayne’s The Theory That Would Not Die, the Commander of the U.S. Air Force’s Strategic Air Command, General Curtis LeMay, ordered new safeguards for nuclear weapons because of the report.
(McGrayne’s book is a popular account of the historical uses of Bayesian probability, a technique that incorporates degrees of subjective belief in addition to direct observations. The Bayesian approach can be useful when there aren’t enough observations to analyze or when the observations have uncertainties. Some of the statistical analyses in the RAND report used a Bayesian approach, which was unusual for the time.)
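As a rough sketch of the Bayesian idea (mine, not the report’s actual analysis): with a uniform Beta(1, 1) prior on the per-opportunity detonation probability and zero events observed in N opportunities, the posterior mean is 1 / (N + 2), small but never zero:

```python
# Bayesian estimate of an event probability after zero observed events.
# A sketch of the general idea, not the RAND report's actual analysis.
# With a uniform Beta(1, 1) prior and k events in N trials, the posterior
# is Beta(1 + k, 1 + N - k), whose mean is (1 + k) / (2 + N).

def posterior_mean(events: int, trials: int) -> float:
    return (1 + events) / (2 + trials)

for trials in (0, 100, 10_000):
    est = posterior_mean(events=0, trials=trials)
    print(f"{trials:>6} trials, 0 events: estimated probability = {est:.6f}")
```

Unlike the frequentist zero, this estimate stays positive and shrinks as accident-free experience accumulates, matching the researchers’ point that “no detonations yet” is evidence of low risk, not of zero risk.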
Since the early 1960s, the United States has continued to improve nuclear-weapons safeguards, prompted not just by research reports but also by close calls. For example, in 1961 an air accident plunged two hydrogen bombs into a North Carolina field. One of the recovered bombs had only a single safeguard out of six remaining to prevent a nuclear detonation. Other accidents also avoided detonation but spilled dangerous nuclear material.
Compared to early nuclear weapons, modern nuclear weapons have far stronger safeguards. They include a more sophisticated version of the combination lock suggested in the RAND report, physically requiring two people to unlock; arming components designed to fail under adverse conditions such as a crash, thus making them “fail safe”; and special types of conventional explosives and containment devices to prevent leakage of nuclear materials in an accident.
In addition to having safer weapons, the United States now has far fewer nuclear weapons deployed, on lower levels of alert, than during the height of the Cold War. So the RAND researchers (Fred Charles Ikle, Gerald J. Aronson, and Albert Madansky) would be pleased.
I am pleased too. Reading their report reminded me of the time I toured a decommissioned Titan II nuclear-missile silo in Arizona. Although it was a relatively low-tech artifact of the 1960s, I was impressed with how well considered its design and operating procedures were. It felt like those involved were up to the enormous responsibility attached to their jobs. That included everyone from the thinkers at RAND to the systems designers to the hands-on crews.
May they all continue their success, in the United States and wherever else Murphy’s Law and nuclear weapons could meet.