Since shortly after the first and, thus far, only use of atomic weapons, in 1945, scientists, policy analysts, and government officials have sought to identify measures to inhibit the further acquisition and use of the enormous destructive potential of nuclear technology. In the late 1960s, a similar group of stakeholders initiated efforts to prevent the biological sciences from being used to develop weapons whose destructive effects against humans, animals, and plants could, in some circumstances, rival those of nuclear weapons. Today, questions are being raised about how to manage the potential threat posed by information technology, whose growth and spread some believe may position cyber weapons alongside nuclear and biological weapons in the elite club of technologies capable of unleashing massive harm.
These technologies differ in their legal status and characteristics. But they also have one critically important similarity: each is what has come to be called dual-use. Over the years, this concept has been defined in various ways. The European Commission (EC), for example, defines dual-use goods as “items, including software and technology, which can be used for both civil and military purposes.” The U.S. government’s Code of Federal Regulations takes a similar approach, describing “items that can be used both in military and other strategic uses . . . and commercial applications.” These definitions focus on the inherent characteristics of the technology and are consistent with how the term dual-use is used in discussions of nuclear technology.
Other definitions, however, focus more on what one analyst has called externalities, such as the context in which the technology is used, or the users themselves. This is reflected in the 2004 National Academy of Sciences (NAS) report, Biotechnology Research in an Age of Terrorism, which describes the dual-use dilemma in biology as arising “when the same technologies can be used legitimately for human betterment and misused for bioterrorism.” An externally driven approach is also evident in the work of analysts at the Center for International and Security Studies at Maryland, whose proposal for oversight of dual-use biotechnology research extends to research that is intended for beneficial purposes but can also cause harm, either inadvertently or as a result of deliberate malfeasance. This definition is even broader than the NAS’s in that it encompasses not just deliberate misuse of dual-use technology but accidents and other unintended outcomes.
Whatever definition one uses, military measures such as deterrence, defense, and reprisal are clearly of limited value in preventing dual-use technologies that are widely available around the globe, such as biological and information technology, from being used for hostile purposes. In both of these areas, the difficulty of identifying the source of an attack, a problem known as attribution, renders deterrence and reprisal much less effective. The technology also favors offense over defense; that is, a biological or cyberattack is generally easier to carry out than to defend against. The situation is different in the nuclear area, where the technology is not as broadly disseminated and where measures such as deterrence, defense, and reprisal have for almost seventy years played a major role in preventing the use of nuclear weapons. They have also helped convince at least some countries that their security does not require them to use their civilian nuclear technology to develop a nuclear weapons capability.