But while the technology continues to advance, said the panelists, humans have failed to elucidate and codify the moral and ethical questions that arise from the use of autonomous systems.
“This technology is moving forward, and people don’t yet know where it’s headed,” Scharre said. “There are further down the road going to be some tough questions about rules of engagement for these armed systems when they are on their own. Well, those rules have not yet been written.”
In the context of the military, there are two primary views with regard to autonomous weapons and the laws of war, Scharre explained. One view is that we should make it explicit that machines are not moral agents; that weighing the moral and ethical considerations involved in lethal decisions is a distinctly human responsibility, and we cannot offload that burden to a robot. The competing view is that we should focus on the effects of war and on avoiding civilian harm, and that if autonomous weapons can improve our results, we are obligated to use them.
“How do we move forward incrementally down the path toward future autonomy,” Scharre said, “finding beneficial ways of using this technology going forward that might make war more precise, more humane, without losing our humanity in the process?”
It is incumbent on humans to find the answer, he said.
For better or worse, the panelists noted, war and business are often the driving forces behind technological innovation. Those breakthroughs, however, don’t remain solely on the battlefield or in the marketplace. These advancements also make their way into domestic security spheres — particularly in intelligence and law enforcement — and so the moral and ethical questions associated with AI technology are relevant well beyond its initial applications.
Some law enforcement agencies, for example, are using facial recognition software, powered by machine learning algorithms, that many of us have become accustomed to on our smartphones and laptops. For police officers, facial recognition technology can greatly speed up the suspect identification process — what might have taken days, weeks, or even longer can now be done in a matter of seconds.
According to panelist Major Douglas Burig, director of the Pennsylvania State Police’s Bureau of Criminal Investigation, facial recognition is just one of many investigative tools; although it is incredibly useful, there are limits — legal and practical — to what it can do and how it can be used.
“This is one of the caveats of facial recognition — it is not discriminating enough to be considered identification,” Burig said. “It’s not fingerprints, it’s not DNA. If we were to use that in evidence, if we were to use that as probable cause for a search warrant, it would become ‘fruits of the poisonous tree’ and we would have no foundation, so we don’t do that with this technology.”
Still, the panelists said, autonomous technology can encroach on individual rights in new and distinct ways.
“The minute we call (artificial intelligence) magic or deem it a savior and appeal to these higher powers, we are abdicating control and responsibility for our own actions and our own responsibilities as people in a democratic society,” said panelist Marc Canellas, a juris doctor candidate at NYU School of Law with prior experience as a technology staffer in Congress; an aerospace and cognitive engineer studying human decision making and human-machine interaction; and a voting member of the IEEE-USA Artificial Intelligence and Autonomous Systems Policy Committee.
Many of the panelists emphasized that point: some autonomous systems will make our lives better, safer, and more efficient, but we must continue to accept human responsibility for the actions and outcomes of AI-driven technology and take steps to safeguard against privacy and human rights abuses.
The meaning of humanity