Both corporate leaders and military commanders turn to ethical principle sets when they search for guidance concerning moral decision making and best practice. In this article, after reviewing several such sets intended to guide the responsible development and deployment of artificial intelligence and autonomous systems in the civilian domain, we propose a series of 11 positive ethical principles to be embedded in the design of autonomous and intelligent technologies used by armed forces. In addition to guiding their research and development, these principles can enhance the capability of the armed forces to make ethical decisions in conflict and operations. We examine the general limitations of principle sets, refuting the charge that such ethical theorizing is a misguided enterprise and critically addressing the proposed ban on military applications of artificial intelligence and autonomous weapon systems.

Regulating the use of artificial intelligence (AI) to make it safe and compliant with ethical standards has recently become a public concern and a global priority. One of the controversies most intensely debated by technology ethicists is the military application of AI, which includes, but is not limited to, autonomous weapon systems (AWS), i.e., machines designed to independently search for, select, and engage targets.¹ A universal ban on these machines has been advocated by those who believe that the conditions necessary to use AWS ethically either are impossible to devise or cannot realistically be met in practice.² Articulating an alternative proposal, we argue that the conditions for ethically using military applications of AI can be conceptually specified as clearly as those relevant to similar nonmilitary technologies; that the decisional processes (including the public discussion) and the research efforts (including the transfer from civilian to military industry) necessary to meet such conditions in practice would be hindered by a pre-emptive ban on AWS; and that any such unconditional prohibition would encourage the very deregulation and uncontrolled proliferation that it was supposed to prevent. To keep this prophecy from fulfilling itself, we recommend that each instance of design, development, and deployment of AWS be internationally regulated by legal and ethical standards.

Compared to an indiscriminate ban, this approach would be more politically efficacious and authentically moral. What distinctively characterizes our proposal is the suggestion that any normative framework for AWS should parallel the codes of practice already established to regulate civilian technologies, such as commercial drones and autonomous cars. That is because, aside from obvious specificities, military and civilian applications of AI face comparable ethical challenges and must align with fundamentally analogous shared values and societal expectations. While several ethical standards have already been implemented in the domain of civilian uses of AI, governments have yet to agree on a shared ethical framework to regulate military AI. The first step in this direction is represented by the five self-regulation principles with which the US Department of Defense (DOD) commits to ensuring that military AIs incorporate ethical characteristics: Responsible (informed by “appropriate levels of judgment and care”); Equitable (minimizing “unintended bias in AI capabilities”); Traceable (using “transparent and auditable methodologies, data sources, and design procedure and documentation”); Reliable (fulfilling rigorous “safety, security, and effectiveness” standards); and Governable (designed so that unintended consequences can be detected and avoided, and so that deployed systems exhibiting unintended behavior can be disengaged or deactivated).