Robot Warriors Will Get a Guide to Ethics

May 19th, 2009

Oh sure. You'd better clip a clothes peg on your nose for this one.

Via: MSNBC:

Smart missiles, rolling robots, and flying drones, all currently controlled by humans, are being used on the battlefield more every day. But what happens when humans are taken out of the loop, and robots are left to make decisions on their own, like whom to kill or what to bomb?

Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an “ethical governor,” a package of software and hardware that tells robots when and what to fire. His book on the subject, “Governing Lethal Behavior in Autonomous Robots,” comes out this month.

He argues that not only can robots be programmed to behave more ethically on the battlefield, they may actually respond better than human soldiers.

“Ultimately these systems could have more information to make wiser decisions than a human could make,” said Arkin. “Some robots are already stronger, faster and smarter than humans. We want to do better than people, to ultimately save more lives.”

6 Responses to “Robot Warriors Will Get a Guide to Ethics”

  1. realitydesign says:

    First off, I’m disgusted by the fact that articles like this normalize
    war just in their content, as if war is so common and so ongoing that we
    need to talk about it constantly.

    Secondly, can someone explain to me how war, ethics, and ’saving more
    lives’ all fit together? So let me get this straight: war is about saving
    lives?

    Yeah.

  2. Ann says:

    Well, in the broadest sense of the term, ethics are merely a code of conduct. In that sense, ‘kill anything that moves’ could be considered a code of ethics.

    I am of course being blazingly sarcastic here.

  3. anothernut says:

    @realitydesign: So let me get this straight, war is about saving lives?

    As far as I know, that’s always been the ultimate “reason” given by generals, emperors, and presidents: “if we don’t take their resources, we’ll die”; “if we don’t kill them first, they’ll kill us”, etc. And it’s still working very well.

    As for war being a constant, welcome to the US of A. And as I’ve posted elsewhere, I think the worst thing about the Obama presidency is that it has convinced millions of Americans that since both he and Bush agree on military aggression as the only “solution” to our “concerns” in the Middle East, and since he and Bush are SOOOOO very different (one “so far right”, the other “so far left”), there really must be no alternative to perpetual war/occupation/destabilization/etc. The most disheartening thing, for me at least, is seeing so many “democrats” and “progressives” talk about Afghanistan as if it’s a given that we should be there, and as if there’s really no problem with us staying “as long as it takes”.

    And of course, there’s extremely little, if any, discussion about our gargantuan dependence on oil. “What’s that got to do with Afghanistan?!” LOL

  4. Zuma says:

    Asimov’s Laws of Robotics

    The 1940 Laws of Robotics

    First Law:
    A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

    Second Law:
    A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

    Third Law:
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
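
    The three laws above form a strict priority ordering: each law yields to the ones before it. As a purely illustrative sketch (not any real control system, and certainly not Arkin’s actual “ethical governor”; all field names here are hypothetical), the ordering can be expressed as a sequence of vetoes checked from highest priority to lowest:

    ```python
    # Toy sketch: Asimov's Three Laws as a priority-ordered veto check.
    # A proposed action is described by boolean flags; laws are tested in
    # order, so a higher law always overrides a lower one.

    def evaluate(action):
        """Return True if the action is permitted under the Three Laws."""
        # First Law: no injuring a human, and no inaction that allows harm.
        if action["injures_human"] or action["allows_human_harm"]:
            return False
        # Second Law: obey human orders, except where obedience would
        # conflict with the First Law.
        if action["disobeys_order"] and not action["order_would_harm_human"]:
            return False
        # Third Law: self-preservation, subordinate to the first two laws,
        # so self-endangerment is allowed when a higher law requires it.
        if action["endangers_self"] and not action["required_by_higher_law"]:
            return False
        return True
    ```

    The point of the sketch is only the ordering: an order to harm a human fails at the First Law before the Second Law’s duty to obey is ever reached.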

  5. Peregrino says:

    Here is a quote from a life of Gandhi that I happen to be reading at the moment that addresses the issue of lethal ethics: “‘…the most brutal and degraded acts of the previous fight become the norm.’ This is the cruel truth of all violence.” Edward Albee’s “Who’s Afraid of Virginia Woolf?” charts the similar progression of emotional violence. Still, if it’s only robots mangling robots, what’s the worry? Unless, of course, the robots go nuclear. Then the destruction of the earth, and one might presume, the solar system, might become collateral damage in a spat between machines.

  6. DrFix says:

    I believe it was an article several months back on The Register’s web site that talked about DARPA and the projects they were working on. You had to laugh to keep from crying, because all the shit you’d thought was “crazy” for any sane person to be cooking up is exactly what they’re doing. That these lunatics were, with straight faces, going about the business of making themselves extinct was just too much. And in deference to Zuma and Asimov’s laws of robotics: you first have to make your robots “obey” and recognize those laws. And judging from the way these lunatics behave, I doubt any such code will ever be introduced. Take a look at the clowns who lie to and abuse us daily in the name of protecting us, and you can pretty much forget there being any LAWS that these devices will recognize other than the law of the JUNGLE!
