
This Has Gotten Way Out Of Hand: Army Robots Will Require A 'Warrior Code'


How many times do I have to emphasize that I am not kidding about a robot apocalypse? Did the Terminator series teach us nothing besides that Arnold Schwarzenegger should run for governor? Now a recent report prepared for the US Navy suggests that robots participating in battle be programmed with a 'Warrior Code' to help prevent the destruction of the entire human potato-sack race.

"There is a common misconception that robots will do only what we have programmed them to do," Patrick Lin, the chief compiler of the report, said. "Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person." The reality, Dr Lin said, was that modern programs included millions of lines of code and were written by teams of programmers, none of whom knew the entire program.

It's been suggested we use Isaac Asimov's Three Laws of Robotics as a starting point for the 'Warrior Code' (a quick sketch of how the laws stack up follows the list). Isaac's Laws were as follows:

1 A robot may not injure a human being or, through inaction, allow a human being to come to harm

2 A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law

3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
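
For anybody keeping score at home, the laws are a strict pecking order: each one only counts when it doesn't step on the laws above it. Here's a quick toy sketch of that precedence in Python, and to be clear, it's mine, not the Navy's, and the Action fields are made up for illustration:

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        would_harm_human: bool = False   # what the First Law cares about
        ordered_by_human: bool = False   # what the Second Law cares about
        risks_own_chassis: bool = False  # what the Third Law cares about

    def allowed_by_three_laws(action: Action) -> bool:
        # First Law: never harm a human; this veto beats everything below it.
        if action.would_harm_human:
            return False
        # Second Law: obey human orders -- we only get here once we know the
        # order doesn't conflict with the First Law.
        if action.ordered_by_human:
            return True
        # Third Law: self-preservation, but only when the higher laws haven't
        # already settled the question.
        if action.risks_own_chassis:
            return False
        return True

    print(allowed_by_three_laws(Action("fetch coffee", ordered_by_human=True)))  # True
    print(allowed_by_three_laws(Action("PEW PEW", would_harm_human=True,
                                       ordered_by_human=True)))                  # False

Note the ordering: the obey-the-order check never even runs if the First Law has already said no, which is the entire point.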

Being the Geekologie Writer, I got a sneak peek at the Warrior Code in progress, and I've got to say, not good:

1 There is no warrior code

2 PEW PEW

3 PEW PEW

Military's killer robots must learn warrior code [timesonline]
and
Experts Warn of 'Terminator'-Style Military-Robot Rebellion [foxnews]

Thanks to Bryan, Chris, timgrab, T6000 (what are you doing here!?), Matt, Sprite and Thumperchica, who are all smart enough to know this is life or death, but not smart enough to know I just stole their identities. Hello, credit cards!

Comments:
  • Steve Belzer

    Asimov's code seems like it would work for robots that perform functions that are not inherently threatening.  However, once a thinking robot is given permission to kill SOME people, I don't think that the ideas of "friend" and "foe" are concrete enough to guarantee that an AI controlled weapons system will consistently be able to distinguish between them.
