The Morality of Robotic War
WASHINGTON — Earlier this month, “Avengers: Age of Ultron” was released in theaters across the United States, featuring Marvel Comics superheroes battling evil robots powered by artificial intelligence and hellbent on destroying humanity.
Sentient military machines remain in the realm of science fiction, but some autonomous weapons are already technically possible today. And the role of autonomy in military systems is likely to grow as armies take advantage of the same basic technology used in self-driving cars and household robots. Autonomous weapons are not the same as drones. A drone is remotely piloted by a person, who makes the decision to fire its weapons. In contrast, an autonomous weapon is one that, once activated, can select and engage targets on its own.
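The distinction can be made concrete in a few lines of code. The sketch below is purely illustrative, with invented types and function names that correspond to no real system: in the drone loop, a person approves every shot; in the autonomous loop, once the system is activated, it selects and engages matching targets with no further human input.

```python
# Illustrative sketch only: the Track type, labels and functions are
# invented, and no real weapon system is being modeled here.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    classification: str  # e.g. "radar_emitter", "vehicle", "unknown"

def human_approves(track: Track) -> bool:
    # A remote pilot reviews the track and makes the fire decision.
    answer = input(f"Engage track {track.track_id} ({track.classification})? [y/N] ")
    return answer.strip().lower() == "y"

def drone_engagement(tracks: list[Track]) -> list[int]:
    # A drone is remotely piloted: a person decides each engagement.
    return [t.track_id for t in tracks if human_approves(t)]

def autonomous_engagement(tracks: list[Track]) -> list[int]:
    # An autonomous weapon, once activated, selects and engages
    # targets on its own: no further human input in the loop.
    return [t.track_id for t in tracks if t.classification == "radar_emitter"]
```

The ethical weight of the distinction sits in that one difference: whether a human decision stands between target selection and engagement.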
In mid-April, 90 countries and dozens of nongovernmental organizations met in Geneva to discuss the challenges raised by lethal autonomous weapons, and a consortium of more than 50 NGOs called for a pre-emptive ban.
Advocates of a ban on autonomous weapons often claim that the technology today isn’t good enough to discriminate reliably between civilian and military targets, and therefore can’t comply with the laws of war. In some situations, that’s true. In others, it’s less clear. Over 30 countries already have automated defensive systems to shoot down rockets and missiles. They are supervised by humans but, once activated, select and engage targets without further human input. These systems work quite effectively and have been used without controversy for decades.
Autonomous weapons should not be banned based on the state of the technology today, but governments must start working now to ensure that militaries use autonomous technology in a safe and responsible manner that retains human judgment and accountability in the use of force.
Greater autonomy could even reduce casualties in war, if used in the right way. The same types of sensors and information-processing that will enable a self-driving car to avoid hitting pedestrians could also potentially enable a robotic weapon to avoid civilians on a battlefield. It is entirely plausible that future sensors could distinguish accurately between a person holding a rake and a person holding a rifle, and do it better than humans.
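A minimal sketch of the kind of conservative decision rule such a sensor could enforce, assuming a perception model that returns a label and a confidence score; the labels, the threshold and the stand-in model below are all invented for illustration:

```python
# Toy illustration: the labels, threshold and stand-in "model" are invented.
# The point is the decision rule, not the perception system behind it.
ENGAGE_THRESHOLD = 0.99  # demand near-certainty before calling anyone armed

def classify(detection: dict) -> tuple[str, float]:
    # Stand-in for a perception model; returns (label, confidence).
    return detection["label"], detection["confidence"]

def may_engage(detection: dict) -> bool:
    label, confidence = classify(detection)
    # Err on the side of restraint: anything short of a
    # high-confidence "rifle" call means no engagement.
    return label == "rifle" and confidence >= ENGAGE_THRESHOLD

print(may_engage({"label": "rake", "confidence": 0.97}))    # False
print(may_engage({"label": "rifle", "confidence": 0.80}))   # False: too uncertain
print(may_engage({"label": "rifle", "confidence": 0.995}))  # True
```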
Precision-guided weapons, or “smart bombs,” have already dramatically reduced civilian casualties in war. Air-to-ground bombs that had a 50 percent chance of landing within a half-mile of the target during World War II are now accurate within five feet. This guidance technology has been so instrumental in saving civilian lives that Human Rights Watch has suggested that using unguided weapons in populated areas is a war crime.
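Those accuracy figures translate into hit probabilities under the standard circular-error-probable (CEP) model, which assumes miss distances are circular-normal; the chance of landing within radius R of the aim point is then 1 - 2^(-(R/CEP)^2). A back-of-the-envelope sketch using the article's own numbers:

```python
# Back-of-the-envelope sketch using the standard circular-error-probable
# (CEP) model: with circular-normal misses, the probability of landing
# within radius R of the aim point is 1 - 2 ** (-(R / CEP) ** 2).

def p_within(radius_ft: float, cep_ft: float) -> float:
    return 1 - 2 ** (-(radius_ft / cep_ft) ** 2)

WW2_CEP_FT = 0.5 * 5280   # 50 percent within half a mile (the WWII figure)
MODERN_CEP_FT = 5.0       # accurate within five feet today

# Chance of landing within 100 feet of the aim point:
print(f"WWII bomb:   {p_within(100, WW2_CEP_FT):.2%}")    # ~0.10%
print(f"Guided bomb: {p_within(100, MODERN_CEP_FT):.2%}") # ~100.00%
```

Going from a half-mile CEP to a five-foot CEP turns a roughly one-in-a-thousand chance of landing within 100 feet of the target into a near-certainty, which is why far fewer bombs, and far less collateral damage, are needed per target.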
Even if autonomous weapons could accurately discriminate and engage targets, however, they would still raise issues of safety, responsibility and control. Autopilot systems already help airplane pilots navigate dangerous weather conditions and reduce human error. But the same features that make them reliable — the fact that they follow their programming precisely every time — can make them brittle when used outside of their intended operating environment. The same goes for autonomous systems operating in environments outside the bounds of their programming.
Militaries are also unlikely to share their algorithms with competitors, raising the prospect of unanticipated encounters between autonomous systems on the battlefield. Like the stock market “flash crash” in 2010 that was exacerbated by automated high-frequency trading algorithms, a “flash war” sparked by unintended interactions between autonomous systems could occur one day.
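The dynamic is simple enough to show in a toy simulation. Everything below is invented, including the coupled respond-in-kind-plus-margin rule; the point is only that two systems, each following its own locally reasonable programming, can drive each other into a spiral that neither side intended.

```python
# Toy feedback loop: two automated systems, each programmed to respond
# in kind (plus a margin) to the other's last action. Neither is hostile;
# the escalation comes entirely from their interaction. All numbers invented.

def flash_escalation(trigger: float = 1.0, margin: float = 1.5, steps: int = 8):
    b_response = trigger  # a single stray event trips system B
    for step in range(steps):
        a_response = b_response * margin  # A matches B, with a safety margin
        b_response = a_response * margin  # B matches A, with a safety margin
        print(f"step {step}: A={a_response:7.1f}  B={b_response:7.1f}")

flash_escalation()
# Each system behaves exactly as designed, yet within eight rounds the
# "responses" are hundreds of times the size of the initial trigger.
```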
Most important, we must avoid a situation where the spread of autonomous weapons leads civilian leaders and militaries to ethically dissociate themselves from the costs of war. Humans should be making decisions about taking human lives.
If an autonomous weapon struck the wrong target, could the person who launched it deny responsibility, pointing out that she launched the autonomous weapon, but did not choose that specific target?
Recent debates over drones have raised similar questions. A drone operator is just as accountable for his or her actions as an airplane pilot or a tank driver — and numerous sources suggest drone pilots do not exhibit moral detachment from war. The concern is that autonomous weapons could lead to a situation where people no longer feel morally responsible for the use of force.
Today, if a fighter pilot launches a missile, she knows that she is responsible for it. The missile, once released, often can’t be recalled, and homing missiles “lock on” to targets autonomously. If the result is deemed an illegal use of force, accountability turns on whether what happened was an unpredictable technical malfunction, or something that the pilot should have foreseen.
Autonomous weapons could complicate this in two ways: either because the weapon is so complex that its operator doesn’t know how it will behave or because the human operators feel that it is the weapon, not themselves, doing the killing.
We must recognize that autonomous systems are machines that people operate, not independent moral agents.
Weapons themselves do not comply with the laws of war. Weapons are used by people in ways that comply with, or violate, the laws of war. This means that as weapons incorporate greater autonomy, the human operator still has a responsibility to ensure that the actions he or she is taking are lawful.
Ensuring responsibility, not just accountability, is the real challenge. Accountability is a problem when there is an accident and it is unclear who is to blame. The real problem is that a responsibility vacuum might emerge, where people are being killed by autonomous weapons but no person is clearly responsible for the killing.
This could occur if bad training or poor system design led the operator to misunderstand the weapon. It also could happen if the system itself exhibits surprising behavior in a real-world environment that even the designers couldn’t have anticipated. In that case, it would be hard to blame the engineer. It could be that those who tested the weapons failed to anticipate that situation. Or it could be that the real world is simply too complex to foresee every problem.
In 2003, the United States Patriot air-defense system, which incorporates a high degree of autonomy, shot down two friendly aircraft. Ultimately, no one was held responsible. Partly, this was because the friendly-fire casualties resulted from poor operator training, complex system design, and a real-world environment that wasn’t anticipated.
Stopping weapons from having more intelligence will not solve these problems. It’s how we use the technology that matters.
Humans must ultimately bear moral responsibility and face the horror of war squarely — not outsource it to machines. And people must be able to remain in control of a weapon and manage its behavior. We cannot have weapons that are intrinsically uncontrollable or wildly unpredictable. After you fire a bullet, you can’t take it back, but its trajectory is predictable. The key is to ensure that future weapons that behave like self-steering bullets do not run amok. They should have enough autonomy to complete the approved mission — and nothing more. And armies must ensure that operators are confident they are taking lawful action, given what they know about the weapon’s capabilities, the target and the context.
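One way to picture a weapon with only enough autonomy to complete the approved mission is as a hard envelope of human-set limits checked before every engagement. The sketch below is hypothetical, with invented field names and checks, but it shows the idea: anything outside the approved target class, geofence, time window or engagement cap is refused.

```python
# Hypothetical sketch of an "approved mission" envelope. All fields and
# checks are invented; the point is that every engagement is tested
# against limits a human set, and anything outside them is refused.
from dataclasses import dataclass

@dataclass
class MissionEnvelope:
    approved_target_class: str        # e.g. "radar_emitter"
    area: tuple[float, float, float]  # (lat, lon, radius_km) geofence
    expires_at: float                 # mission time limit (epoch seconds)
    max_engagements: int              # hard cap on weapons released

def within_area(pos: tuple[float, float], area: tuple[float, float, float]) -> bool:
    lat, lon, radius_km = area
    # Crude flat-earth distance (~111 km per degree); fine for illustration.
    dx = (pos[0] - lat) * 111.0
    dy = (pos[1] - lon) * 111.0
    return (dx * dx + dy * dy) ** 0.5 <= radius_km

def engagement_permitted(env: MissionEnvelope, target_class: str,
                         pos: tuple[float, float], now: float,
                         engagements_so_far: int) -> bool:
    return (target_class == env.approved_target_class
            and within_area(pos, env.area)
            and now < env.expires_at
            and engagements_so_far < env.max_engagements)

env = MissionEnvelope("radar_emitter", (33.30, 44.40, 25.0), 1_700_000_000.0, 2)
print(engagement_permitted(env, "vehicle", (33.30, 44.40), 1_699_999_000.0, 0))  # False: wrong class
```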
Weapons with greater autonomy could mean more accuracy and fewer civilian casualties. The appropriate response is not to forgo potentially useful technology, but instead to understand where human judgment is still required, regardless of how advanced the technology becomes.
Michael C. Horowitz, an associate professor of political science at the University of Pennsylvania, and Paul Scharre, a senior fellow at the Center for a New American Security, direct the center’s Ethical Autonomy Project.