Drone swarms in urban warfare that target the human eye or punch a tiny hole in the skull, or weapons that track and kill humans in motion at a border, all without human control?
Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans.
Given their military advantages, is stopping such weapons beyond reach?
The artificial intelligence (AI) and robotics communities face an important ethical decision: whether to support or oppose the development of lethal autonomous weapons systems (LAWS). LAWS might include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.
The UN has held three major meetings in Geneva under the auspices of the Convention on Certain Conventional Weapons (CCW) to discuss the possibility of a treaty banning autonomous weapons. There is at present broad agreement on the need for "meaningful human control" over the selection of targets and decisions to apply deadly force. Much work remains to be done on refining the necessary definitions and identifying exactly what should or should not be included in any proposed treaty.
Stuart Russell received his B.A. with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986.
He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences and holder of the Smith-Zadeh Chair in Engineering. He is also an Adjunct Professor of Neurological Surgery at UC San Francisco and Vice-Chair of the World Economic Forum's Council on AI and Robotics.
He has published over 150 papers on a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and global seismic monitoring.
His books include "The Use of Knowledge in Analogy and Induction", "Do the Right Thing: Studies in Limited Rationality" (with Eric Wefald), and "Artificial Intelligence: A Modern Approach" (with Peter Norvig).
April 6, 2016
University of California, Berkeley