
Security center explores the 'Autonomous Future' through interdisciplinary lens

Experts from across Penn State and the U.S. discussed the interconnections of artificial intelligence and security as part of a recent CSRE symposium

Paul Scharre, Senior Fellow and director of the Technology and National Security Program at the Center for a New American Security and author of "Army of None: Autonomous Weapons and the Future of War." Credit: Andrew Gabriel / Penn State. Creative Commons

UNIVERSITY PARK, Pa. — From self-parking cars on the roads to virtual assistants in our homes, we are becoming more and more accustomed to — and comfortable with — artificial intelligence in our daily lives. But with the rise of AI comes the need to address underlying ethical and philosophical questions: What tasks should be passed on to machines, what tasks should be reserved for humans, and what might be the consequences, intended or not, of our increasingly automated world?

These questions are even more pressing in the realms of national defense and global security, in which autonomous systems lead to wide-ranging privacy concerns and have begun to encroach on life-or-death decisions. In an effort to continue the conversation around these challenging and timely topics, the Penn State Center for Security Research and Education (CSRE), in collaboration with the Penn State Journal of Law and International Affairs, hosted its spring 2019 symposium, “Security and the Autonomous Future,” a two-day event that brought together an interdisciplinary array of experts from Penn State and across the country.

“The symposium really exemplified our mission of bringing together experts from diverse fields to address security-related issues in a holistic manner,” said CSRE Director James W. Houck. “It capped off an exciting academic year in which we were able to showcase the interdisciplinary impact of our programs.”

Security and the Autonomous Future

Vice Admiral (Ret.) James W. Houck, director of CSRE and a Distinguished Scholar in Residence at Penn State Law and the School of International Affairs, gives closing remarks at the symposium. Credit: Andrew Gabriel / Penn State. Creative Commons

Given CSRE’s mission, all of its programs include a broad spectrum of security-related disciplines and experts. But the spring symposium in particular was the perfect opportunity to bring all of those scholars and specialists together in the same room, Houck said.

Twenty-five experts from across the University and the country discussed the interconnections of artificial intelligence and security as part of the symposium’s five panels, focusing on emerging technology, ethics, international norms, domestic security, and the meaning of humanity.

The panelists and moderators represented a diverse range of disciplines and backgrounds, including aerospace engineering, cognitive science, computer engineering, electrical engineering, law, law enforcement, mathematics, military, peace and conflict studies, philosophy, political science, public policy, and theology, among others.

“This (symposium) has enabled these different conversations between people who don’t always get a chance to interact,” said Prasenjit Mitra, professor of information sciences and technology and associate dean for research in Penn State’s College of Information Sciences and Technology, who attended the event after hearing about it from colleagues.

“When people talk about AI, by and large, it’s about the technological aspects. We need more conversations about social impacts and repercussions,” he added.

The time to have those conversations is now, according to panelist Patrick McDaniel, distinguished professor of computer science and engineering and the William L. Weiss Chair in Information and Communications Technology in the Penn State College of Engineering.

“I think it’s not hyperbole to say that we are on the cusp of one of the great transitions in our existence as a species,” said McDaniel.  “There is going to be an enormous social disruption to this technology, so I think it’s really important for us to understand that … when we deploy these systems, we’re going to have negative consequences to society at large.”

Current and emerging technology

Paul Scharre delivered the keynote address at the CSRE Symposium, speaking about the development and use of autonomous and semi-autonomous weapons around the globe. Credit: Andrew Gabriel / Penn State. Creative Commons

In the world of popular science fiction, stories about artificial intelligence often revolve around robots, initially built to serve mankind, becoming more intelligent, more violent, and ultimately turning against their human creators. The reality — at least up until now — is that humans are building many machines explicitly for the purposes of war, said the panelists.

“We are building ever-more sophisticated autonomous systems that are being born with a gun in their hands,” said Paul Scharre, senior fellow and director of the Technology and National Security Program at the Center for a New American Security and author of "Army of None: Autonomous Weapons and the Future of War."

Scharre, the event’s keynote speaker, focused his discussion on the role of artificial intelligence on the battlefield.

Scharre said that armed military robots have existed for about 20 years, and are being used to some extent in many countries around the globe; for example, at least 20 countries currently have or are developing armed drones. But not all robots are created equal. Scharre pointed to important differences, both in capability and availability, between semi-autonomous weapons and fully autonomous systems.

He explained that with some semi-autonomous weapons, such as the Long Range Anti-Ship Missile (LRASM), humans choose the target, but once the weapon is deployed, it has the ability to re-route, surveil an area, or open fire based on particular criteria. More common are human-supervised weapons, also considered semi-autonomous, which are capable of firing or deploying automatically but still involve active human oversight and the ability to cease fire at any time. These types of weapons, including the U.S. Army’s Patriot Air and Missile Defense System and the Navy’s Aegis Combat System, are useful for situations where the “speed of incoming threats might overwhelm humans’ ability to respond,” Scharre said.

Fully autonomous systems, on the other hand, operate without any human supervision, he added. This type of technology is still in development, and only a couple of examples are currently in use, including the Israeli Harpy drone, which is used to hunt enemy radar systems. The Harpy can autonomously search an area of a couple hundred kilometers for up to 2.5 hours, Scharre said, and identify and attack a target on its own.

Though the changing dynamics of battlefields may feel far away to many of us, the reality is that technological security, driven by autonomous systems, plays a major role in our personal lives. As Internet of Things (IoT) devices, including “smart” thermostats, appliances, and virtual assistants, continue to occupy our homes, there is growing concern over the devices’ lack of security and our subsequent vulnerability to fraud, identity theft, and invasions of privacy. The development of more sophisticated autonomous systems and machine learning algorithms holds the potential to address some of these security shortcomings, said the panelists.

“We have all of these hopelessly insecure devices that are coming out,” Scharre said, “and automation is one way to begin to tackle this problem because it is cheaper to replicate (autonomous technology) than to grow a new computer science major.”

For governments and corporations, however, AI is instrumental in the advancement of cyberwarfare, he continued. Stuxnet, the malware that, between 2009 and 2010, disrupted operations at the Iranian Natanz Nuclear Facility, operated with a high degree of autonomy as it infected Natanz’s computer networks — damaging centrifuge operations while hijacking surveillance videos on-site to make it appear that the facility was operating normally.

'Those rules have not yet been written'

Major Douglas Burig, director of the Pennsylvania State Police Bureau of Criminal Investigation, speaks during the panel on "Autonomous Systems and Domestic Security." He is joined by (left to right) moderator Anne McKenna and panelists David Atkinson, Alan Wagner, and Marc Canellas. Credit: Andrew Gabriel / Penn State. Creative Commons

But while the technology continues to advance, said the panelists, humans have failed to elucidate and codify the moral and ethical questions that arise from the use of autonomous systems.

“This technology is moving forward, and people don’t yet know where it’s headed,” Scharre said. “There are further down the road going to be some tough questions about rules of engagement for these armed systems when they are on their own. Well, those rules have not yet been written.”

In the context of the military, there are two primary views with regard to autonomous weapons and the laws of war, Scharre explained. One view is that we should make it explicit that machines are not moral agents; that the moral and ethical considerations involved in lethal decisions are a distinctly human characteristic, and we cannot offload that burden to a robot. The competing view is that we should focus on the effects of war and avoiding civilian harm, and if autonomous weapons can improve our results, we should be obligated to use them.

“How do we move forward incrementally down the path toward future autonomy,” Scharre said, “finding beneficial ways of using this technology going forward that might make war more precise, more humane, without losing our humanity in the process?”

It is incumbent on humans to find the answer, he said.

For better or worse, war and business are often driving forces behind technological innovation, said the panelists. Those breakthroughs, however, don’t remain solely on the battlefield or in the marketplace. These advancements also make their way into domestic security spheres — particularly in intelligence and law enforcement — and so the moral and ethical questions associated with AI technology are relevant beyond their initial applications.

Some law enforcement agencies, for example, are using facial recognition software, powered by machine learning algorithms, that many of us have become accustomed to on our smartphones and laptops. For police officers, facial recognition technology can greatly speed up the suspect identification process — what might have taken days, weeks, or even longer can now be done in a matter of seconds.

According to panelist Major Douglas Burig, director of the Pennsylvania State Police’s Bureau of Criminal Investigation, facial recognition is just one of many pieces of evidence; although it is incredibly useful, there are limits — legal and practical — to what it can do and how it can be used.

“This is one of the caveats of facial recognition — it is not discriminating enough to be considered identification,” Burig said. “It’s not fingerprints, it’s not DNA. If we were to use that in evidence, if we were to use that as probable cause for a search warrant, it would become ‘fruits of the poisonous tree’ and we would have no foundation, so we don’t do that with this technology.”
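To make concrete why a facial-recognition “hit” is an investigative lead rather than an identification, here is a minimal sketch under assumed conditions: hypothetical names, randomly generated face embeddings, and an arbitrary similarity threshold, none of which come from the symposium or any agency’s actual system. A probe image is compared against a gallery and returns a ranked list of candidates, not a definitive match.

```python
# Minimal illustrative sketch: facial recognition as ranked similarity search
# over face embeddings. All data below is synthetic and hypothetical.
import numpy as np

def rank_candidates(probe, gallery, names, threshold=0.6):
    """Return gallery names ranked by cosine similarity to the probe embedding."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe                  # cosine similarity per gallery face
    order = np.argsort(scores)[::-1]          # strongest resemblance first
    # Anything above the threshold is a candidate to investigate, not an ID.
    return [(names[i], float(scores[i])) for i in order if scores[i] >= threshold]

# Hypothetical gallery of five known faces and a noisy probe image of person "C".
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 128))
names = ["A", "B", "C", "D", "E"]
probe = gallery[2] + rng.normal(scale=0.1, size=128)

print(rank_candidates(probe, gallery, names))   # e.g. [("C", 0.99...)]
```

The point of the sketch is the return type: a similarity score and a ranking, which is why, as Burig notes, such output cannot stand in for fingerprints or DNA.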

Still, autonomous technology has the capability of encroaching on individual rights in new and distinct ways, said the panelists.

“The minute we call (artificial intelligence) magic or deem it a savior and appeal to these higher powers, we are abdicating control and responsibility for our own actions and our own responsibilities as people in a democratic society,” said panelist Marc Canellas, a juris doctor candidate at NYU School of Law with prior experience as a technology staffer in Congress; an aerospace and cognitive engineer studying human decision making and human-machine interaction; and a voting member of the IEEE-USA Artificial Intelligence and Autonomous Systems Policy Committee.

Many of the panelists emphasized that point: some autonomous systems will make our lives better, safer, and more efficient, but we must continue to accept human responsibility for the actions and outcomes of AI-driven technology and take steps to safeguard against privacy and human rights abuses.

The meaning of humanity

Panel 5, "Autonomous Systems and the Meaning of Humanity," included (left to right) moderator Ben Johnson and panelists David Danks, Noreen Herzfeld, Amy Pritchett, and Matthias Scheutz. Credit: Andrew Gabriel / Penn State. Creative Commons

From a business perspective, the push to replace — or supplement — human workers with autonomous systems is often motivated by economic factors: cheaper labor and more efficient production. But the growth of AI in the workforce, as it pushes people out of work or into different types of work, can also have profound effects on the human psyche, said panelist David Danks, the L.L. Thurstone Professor of Philosophy and Psychology and head of the Department of Philosophy at Carnegie Mellon University.

“We’re not just using machines to replace human physical labor; we’ve always done that since we domesticated animals,” said Danks. “Now, we’re replacing human cognitive labor, and I think that that actually threatens something that is at the core of our identity, which is our rationality and reason.”

Our relationship with AI is complicated; even as autonomous systems challenge aspects of our identities, they also reflect human perspectives, said the panelists.

Noreen Herzfeld, professor of theology and computer science at the College of St. Benedict and St. John’s University, said that, from her perspective, “When we think about AI, we think, ‘God made us in God’s image; we are trying to make AI in our own image.’ And what I think we are doing is standing in the middle and projecting in two directions — we’re looking at what we might share with God and what we would like to share with the computer.”

When we share characteristics with robots, it is because we create them that way: We program virtual assistants with human voices and assign gendered roles to robots, said Herzfeld. Sometimes, we create for ourselves the illusion of humanity by inferring from machines emotional states that cannot exist. Our prejudices, too, can be coded into AI, sometimes without realizing it.

“It’s so easy with machine learning to put our biases into the machines, just by the data sets that we use,” added Herzfeld.
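A minimal, synthetic sketch of the dynamic Herzfeld describes, using an entirely hypothetical “approval” task and made-up data rather than anything discussed at the symposium: a model that simply learns from historically skewed records reproduces that skew for otherwise identical people.

```python
# Minimal illustrative sketch: a model trained on historically biased labels
# carries the bias forward. All data is synthetic and hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)          # two groups, 0 and 1
score = rng.uniform(0, 10, size=n)          # identical qualification distribution

# Historical decisions embedded a bias: group 1 needed a higher score.
label = np.where(group == 0, score > 5.0, score > 7.0).astype(int)

# Feature matrix; group is scaled so nearest neighbors share group membership.
X = np.column_stack([score, group * 10.0])

def knn_predict(x, k=15):
    """Plain k-nearest-neighbors vote over the historical (biased) records."""
    dists = np.linalg.norm(X - x, axis=1)
    return int(label[np.argsort(dists)[:k]].sum() > k / 2)

# Identical new applicants, differing only in group membership.
test_scores = np.linspace(0, 10, 200)
for g in (0, 1):
    approvals = [knn_predict(np.array([s, g * 10.0])) for s in test_scores]
    print(f"group {g}: approval rate {np.mean(approvals):.2f}")
# Expected output: roughly 0.50 for group 0 and 0.30 for group 1; the disparity
# comes entirely from the training data, not from the new applicants.
```

Nothing in the model’s code mentions the groups’ relative merit; the unequal treatment is inherited wholesale from the data set it learned from, which is Herzfeld’s point.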

As people increasingly look to autonomous systems as companions, of a sort, our relationship with machines will further affect our interactions with each other. Danks argued that AI has the potential to “augment, impair, or replace human-to-human social relationships," and it is up to us to take ownership over the development and deployment of autonomous systems in order to bring about positive outcomes.

Which brings us back to security, he said.

“We think of international security as being intimately tied with international diplomacy, and engagement, and politics. That reveals the importance of thinking about not just the way that these AI systems can destabilize an individual military … but also needing to think about the impacts on security as it emerges from the deeply personal, human-to-human interactions and relationships that are really at the core of a lot of our lives," said Danks.
