A Dangerous Future of Killer Robots

November 26th, 2012

Via: Washington Post:

The use of drones to kill suspected terrorists is controversial, but so long as a human being decides whether to fire the missile, it is not a radical shift in how humanity wages war. Since the first archer fired the first arrow, warriors have been inventing ways to strike their enemies while removing themselves from harm’s way.

Soon, however, military robots will be able to pick out human targets on the battlefield and decide on their own whether to go for the kill. An Air Force report predicted two years ago that “by 2030 machine capabilities will have increased to the point that humans will have become the weakest component in a wide array of systems.” A 2011 Defense Department road map for ground-based weapons states: “There is an ongoing push to increase autonomy, with a current goal of ‘supervised autonomy,’ but with an ultimate goal of full autonomy.”

The Pentagon still requires autonomous weapons to have a “man in the loop” — the robot or drone can train its sights on a target, but a human operator must decide whether to fire. But full autonomy with no human controller would have clear advantages. A computer can process information and engage a weapon infinitely faster than a human soldier. As other nations develop this capacity, the United States will feel compelled to stay ahead. A robotic arms race seems inevitable unless nations collectively decide to avoid one.
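To make the "supervised autonomy" distinction concrete, here is a minimal Python sketch. The names (SupervisedWeapon, operator_decision) are hypothetical and do not come from the Air Force report or any Pentagon directive; the point is only the shape of the control flow: the software may acquire and track a target on its own, but weapon release is gated on an explicit human decision.

    from enum import Enum, auto


    class Authorization(Enum):
        """Possible operator decisions for an engagement request."""
        APPROVED = auto()
        DENIED = auto()


    def operator_decision(target_id: str) -> Authorization:
        """Placeholder for the human step: a real system would route the
        request to a trained operator, not a console prompt."""
        answer = input(f"Authorize engagement of target {target_id}? [y/N] ")
        return Authorization.APPROVED if answer.strip().lower() == "y" else Authorization.DENIED


    class SupervisedWeapon:
        """Sketch of 'supervised autonomy': the system detects and tracks
        targets autonomously, but firing requires human approval."""

        def track(self, target_id: str) -> None:
            # Autonomous part: identification and tracking happen without a human.
            print(f"Tracking {target_id}")

        def engage(self, target_id: str) -> bool:
            # The man-in-the-loop gate: no approval, no weapon release.
            if operator_decision(target_id) is Authorization.APPROVED:
                print(f"Engaging {target_id}")
                return True
            print(f"Engagement of {target_id} withheld")
            return False

Full autonomy, in this sketch, would amount to removing the operator_decision gate entirely, which is the "ultimate goal" the road map describes.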

Don’t Worry: Pentagon: A Human Will Always Decide When a Robot Kills You

One Response to “A Dangerous Future of Killer Robots”

  1. dale says:

    -a computer can process information and engage infinitely faster-

    imagine a parameter modification across the entire fleet one day; one very bad day.
