By J. Michael Cole
It’s almost impossible
nowadays to attend a law-enforcement or defense show that does not feature
unmanned vehicles, from aerial surveillance drones to bomb disposal robots, as
the main attraction. This is part of a trend that has developed over the years
where tasks that were traditionally handled in situ are now performed remotely, thus minimizing the risk of casualties while extending the duration of operations.
While military forces,
police/intelligence agencies and interior ministries have set their sights on drones for missions spanning the full
spectrum from terrain mapping to targeted killings, today’s unmanned vehicles
remain reliant on human controllers who are often based hundreds, and sometimes
thousands of kilometers away from the theater of operations. Consequently,
although drones substantially increase operational effectiveness (and, in the case of targeted killings, add to the emotional distance between perpetrator and target), they remain primarily an extension of, and are regulated by, human decision making.
All that could be about to
change, with reports that the U.S. military (and presumably others) has been making steady progress developing drones that operate with
little, if any, human oversight. For the time being, developers in the U.S.
military insist that when it comes to lethal operations, the new generation of
drones will remain under human supervision. Nevertheless, unmanned vehicles
will no longer be the “dumb” drones in use today; instead, they will have the
ability to “reason” and will be far more autonomous, with humans acting more as
supervisors than controllers.
Scientists and military
officers are already envisaging scenarios in which a manned combat platform is
accompanied by a number of “sentient” drones conducting tasks ranging from
radar jamming to target acquisition and damage assessment, with humans
retaining the prerogative of launching bombs and missiles.
It’s only a matter of time,
however, before the defense industry starts arguing that autonomous drones
should be given the “right” to use deadly force without human intervention. In
fact, Ronald Arkin of Georgia Tech contends that such an evolution is inevitable. In his
view, sentient drones could act more ethically and humanely, without their
judgment being clouded by human emotion (though he concedes that unmanned
systems will never be perfectly ethical). Arkin is not alone in thinking that
“automated killing” has a future, if the guidelines established in the U.S. Air
Force’s Unmanned Aircraft Systems Flight Plan
2009-2047 are any indication.
In an age where printers and
copy machines continue to jam, the idea that drones could start making life-and-death decisions should be cause for
concern. Once that door is opened, the risk that we are on a slippery ethical
slope with potentially devastating results seems all too real. One need not
envision the nightmare scenario of an out-of-control Skynet of Terminator fame to see where things could
go wrong.
In this day and age,
battlefield scenarios are less and less the meeting of two conventional forces
in open terrain and instead increasingly take the form of combatants engaging in close-quarters firefights in dense urban areas. This is especially true of
conflicts pitting modern military forces — the very same forces that are most
likely to deploy sentient drones — against a weaker opponent, such as NATO in
Afghanistan, the U.S. in Iraq, or Israel in Lebanon, Gaza, and the West Bank.
Israeli counterterrorism
probably provides the best examples of the ethical problems that would arise
from the use of sentient drones with a license to kill. While it is true that
domestic politics and the thirst for vengeance are both factors in the decision
to attack a “terrorist” target, in general the Israel Defense Forces (IDF) must
continually apply the principle of proportionality, weighing the operational benefits of launching an attack in an urban area against the attendant risk of civilian collateral damage. The IDF
has faced severe criticism over the years for what human rights organizations
and others have called “disproportionate” attacks against Palestinians and
Lebanese. In many instances, such criticism was justified.
That said, what often goes
unreported are the occasions when the Israeli government didn’t launch an
attack because of the high risks of collateral damage, or because a target’s
family was present in the building when the attack was to take place. As Daniel
Byman writes in a recent book on Israeli counterterrorism, “Israel
spends an average of ten hours planning the operation and twenty seconds on the
question of whether to kill or not.”
Those twenty seconds make all
the difference, and it’s difficult to imagine how a robot could make such a
call. Unarguably, there will be times when hatred will exacerbate pressures to
use deadly violence (e.g., the 1982 Sabra and Shatila massacre
that
was carried out while the IDF looked on). But equally there are times when
human compassion, or the ability to think strategically, imposes restraints on
the desirability of using force. Unless artificial intelligence reaches a point
where it can replicate, if not transcend, human cognition and emotion, machines will not be able to act on ethical considerations or to weigh the consequences of their actions in strategic terms.
How, for example, would a
drone decide whether to attack a Hezbollah rocket launch site or depot in Southern Lebanon located near a hospital or schools? How, without human intelligence,
will it be able to determine whether civilians remain in the building, or
recognize that schoolchildren are about to leave the classroom and play in the
yard? Although humans were ultimately responsible, the downing of Iran Air Flight 655 in 1988 by the U.S.
Navy is nevertheless proof that only humans still have the ability to avoid
certain types of disaster. The A300 civilian aircraft, with 290 people on
board, was shot down by the U.S. Navy’s USS Vincennes after operators mistook it for an
Iranian F-14 aircraft and warnings to change course went unheeded. Without
doubt, today’s more advanced technology would have ensured the Vincennes made
visual contact with the airliner, which wasn’t the case back in 1988. Had such
contact been made, U.S. naval officers would very likely have called off the
attack. Absent human agency, whether a fully independent drone would make a
similar call would be contingent on the quality of its software — a not so
comforting thought.
And the problems don’t just
end there. It’s already become clear that states regard the use of unmanned
vehicles as somewhat more acceptable than human intrusions. From Chinese UAVs
conducting surveillance near the border with India to U.S. drones launching Hellfire missiles at suspected terrorists in places like
Pakistan, Afghanistan or Yemen, states regard such activity as less intrusive
than, say, U.S. special forces taking offensive action on their soil. Once
drones start acting on their own and become commonplace, the level of
acceptability will likely increase, further absolving their users of responsibility.
Finally, by removing human
agency altogether from the act of killing, the restraints on the use of force
risk being further weakened. Technological advances over the centuries have
consistently increased the physical and emotional distance between an attacker and his
target, resulting in ever-higher levels of destructiveness. As far back as the 1991 Gulf War, critics argued that the “videogame” and
“electronic narrative” aspect of fixing a target in the crosshairs of an
aircraft flying at 30,000 feet before dropping a precision-guided bomb had made
killing easier, at least for the perpetrator and the public. Things were taken
to a greater extreme with the introduction of attack drones, with U.S. Air
Force pilots not even having to be in Afghanistan to launch attacks against extremist groups there, drawing
accusations that the U.S. conducts an “antiseptic” war.
Still, at some point, a human
has to make a decision whether to kill or not. It’s hard to imagine that we
could ever be confident enough to allow technology to cross that thin red line.