Should all governments prohibit the use of killer robots?

Harvard Law School and Human Rights Watch published a report calling for a ban on "autonomous weapons" before it's too late. Don't laugh: this isn't science fiction anymore.
Today's high-tech warfare is mostly waged via remote-controlled machines. The U.S. military currently fields MADSS, a 1,400-pound rover that carries gear and fires a machine gun. It also has the Protector, a 1,000-pound rover that scans for bombs and fires a bazooka.
But militaries are already experimenting with automated systems. The Israeli "Iron Dome" system detects and shoots down incoming rockets. The "Phalanx CIWS" system used by U.S. naval combat ships does the same with a swiveling Gatling gun, and the truck-mounted C-RAM system does it on land.
Human Rights Watch, an organization that advocates for the fair treatment of people, says it's only a matter of time before automation is used for attack. And that's where the ethical problems appear. Consider the difference between remote control and automation.
In theory, a human pilot could be prosecuted for murdering innocent people. But a "fully autonomous" machine is programmed to make decisions on its own, and you can't jail a human for a robot's self-determined actions. These weapons have the potential to commit criminal acts (unlawful acts that would constitute a crime if done with intent) for which no one could be held responsible. Yet because these robots would be designed to kill, someone should be held legally and morally accountable for unlawful killings and other harms the weapons cause.
The 120 nations party to the Convention on Conventional Weapons (CCW) will decide at the convention's annual meeting on 13 November 2015 whether and how to continue the talks, which began with a first experts meeting in May 2014.