Artificial intelligence expert Mark Bishop says a ban on weapons that can deploy and destroy without human intervention is vital. He is a professor of cognitive computing at Goldsmiths, University of London, and chairs the Society for the Study of Artificial Intelligence and Simulation of Behaviour.
What is the Campaign to Stop Killer Robots?
MB: It is a confederation of non-governmental organizations and pressure groups lobbying for a ban on producing and deploying fully autonomous weapon systems, where the ability of a human to both choose the precise target and intervene in the final decision to attack is removed.
How close are we to this?
MB: Examples already exist. Some, such as the Phalanx gun system, used on the majority of U.S. Navy ships to detect and automatically engage incoming threats, have been around for some time. Another is the Israeli Harpy “fire-and-forget” unmanned aerial vehicle, which will seek out and destroy radar installations.
What’s driving the technology’s development?
MB: Current Western military strategy focuses more on drones than on traditional forces, but remote-controlled drones are vulnerable to hijacking. Fully autonomous systems are virtually immune to this. They also lower costs, which means manufacturers can sell more of them, so there is a commercial imperative to develop autonomous systems and for governments to deploy them.
What are the dangers?
MB: There are reasons to doubt whether autonomous systems can appropriately judge the need to engage, react to threats proportionately, or reliably discriminate between combatants and civilians. Also, when you get complex software systems interacting, there is huge potential for unforeseen consequences.
Full Article: Slate.com