
Policy Discussions on Ethical and Responsible Use of Autonomous Weapons Likely

By Frank Wolfe | October 19, 2020

An engineer with Robotic Research LLC conducts a Pegasus test run during the Army’s Project Convergence exercise last month. Pegasus is a family of robotic air and ground vehicles equipped with AI to avoid obstacles and create full 3D mapping. (U.S. Army)

As the U.S. Army recruits the other military services to participate in next year’s Project Convergence exercise to showcase significantly contracted sensor-to-shooter timelines through artificial intelligence (AI), policy discussions on the future use of lethal autonomous weapons systems (LAWS) are likely.

Drones represent a prominent LAWS use case.

One industry expert said that the U.S. military’s employment of LAWS is inevitable.

“This is going to be a tricky situation over the next couple of years because the weapons systems are going to need to operate on tremendously rapid time scales,” said Patrick Biltgen, director of analytics at Perspecta. “I think the first place where we should experiment with those types of capabilities is in the cyber domain because you’re going to need network defenses that operate faster than human speed.”

“But when we start talking about are we going to have drones that are going to take their own actions without a human in-the-loop, eventually, absolutely, we will have to do that because we’ve built a doctrine based on network-centric warfare,” he said. “And network centric warfare says there has to be a human in-the-loop to make the kinetic decision, but we’ve said that so the adversary knows that so the first thing they’re going to do is cut the network cable and jam every frequency so you can’t talk to those drones, and then I guess they’re just going to go home.”

“So there will need to be experimentation in the ethical and responsible use of how these algorithms operate when they can’t have a human in-the-loop,” Biltgen said. “We will eventually get to that point. We’re just not there in the story yet. And, by the way, many weapons systems do have built-in rule sets. That’s not really AI, but they do have rule sets of this is what I do when I lose lock, or when I don’t have comms, or when I go to safe mode. We just need to think through the implications for those things and decide on the levels of responsible and ethical use of these technologies in the military domain.”
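The pre-programmed fallback behavior Biltgen describes can be pictured as a simple lookup from an off-nominal event to a scripted response. The sketch below is purely illustrative; the events and actions are invented for this article and are not drawn from any real weapon system.

```python
# Illustrative only: the kind of fixed rule set Biltgen describes for
# lost lock, lost comms, or safe mode. Events and actions are hypothetical.
from enum import Enum, auto


class Event(Enum):
    LOST_LOCK = auto()
    LOST_COMMS = auto()
    SAFE_MODE = auto()


# Fixed lookup table: each off-nominal event maps to one predetermined action.
# Nothing is learned or inferred, which is why this "is not really AI" --
# the behavior is fully specified in advance by the designer.
FALLBACK_RULES = {
    Event.LOST_LOCK: "break off engagement and attempt to reacquire",
    Event.LOST_COMMS: "return to loiter point and await link",
    Event.SAFE_MODE: "disarm payload and hold position",
}


def fallback_action(event: Event) -> str:
    """Return the pre-scripted response for an off-nominal event."""
    return FALLBACK_RULES[event]


if __name__ == "__main__":
    print(fallback_action(Event.LOST_COMMS))
```

The point of the example is that such rule sets are deterministic and auditable in a way that learned targeting behavior is not, which is where the policy questions Biltgen raises begin.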

The Army wants to receive, in the coming weeks, a list of the sensors and weapon systems that the Air Force, Navy and Marine Corps plan to include in next year’s Project Convergence, as the service readies to expand the “joint kill web” demonstration to its joint partners.

The Army said that a new, joint sensor-to-shooter network is required for multi-domain operations against peer competitors.

The first Project Convergence at Yuma Proving Ground in Arizona concluded last month, and Army leaders praised the exercise for coalescing future capabilities, AI-enabled systems and a new “computer brain” to prove out the capacity for passing targeting data in a matter of seconds. The Army said that the exercise demonstrated interoperability between Lockheed Martin F-35 fighters, Bell/Boeing V-22 tiltrotors and ground forces, including the ability to exchange targeting information.

The Campaign to Stop Killer Robots, a watchdog group, “aims to preserve meaningful human control over the use of force,” according to one of the group’s founding members, Laura Nolan, a computer programmer who resigned from Google because of the company’s work on Project Maven. The Pentagon launched the project in 2017 to develop an AI tool to process data from full-motion video collected by unmanned aircraft and decrease the workload of intelligence analysts.

Google was the prime contractor for Project Maven but dropped out in 2018 after receiving pushback from employees about the company’s tools being used for an AI drone imaging effort. The California-based big data analytics company Palantir Technologies, co-founded and chaired by billionaire venture capitalist Peter Thiel, has since assumed Google’s role.

“The technologies being tested as part of Project Convergence demonstrate many of our concerns,” according to Nolan. “Can an operator make a sound decision about whether to strike a newly-detected target in under 20 seconds, or are they just hitting an ‘I-believe button’ to rubber stamp the system’s recommendation, delegating the true decision making authority to software? In a socio-technical system that is explicitly optimizing to reduce the time from detecting a potential threat to destroying it, an individual may not be rewarded for exercising vigilance. The idea of making attacks via the very limited user interface of a smartphone is also troubling.”

“Nowhere in the public reporting about Project Convergence is there any discussion about human factors in design of software interfaces, what training users get about how the targeting systems involved work (and on the shortcomings of those systems), or how to ensure operators have sufficient context and time to make decisions,” per Nolan. “That’s consistent with the Defense Innovation Board (DIB) AI Principles, published last year, which also omit any mention of human factors, computer interfaces, or how to deal with the likelihood of automation bias (the tendency for humans to favor suggestions from automated decision-making systems).”

Vern Boyle, vice president of advanced capabilities for the Cyber Intelligence Mission Solutions division of Northrop Grumman Mission Systems, said that the complexity of future warfighting scenarios in which there are multiple threats will require AI-enabled targeting and firing decisions without a human in the loop.

“Shooting down one missile is one thing,” he said. “Trying to shoot down many with many artillery or other kinetic effects that we might have available is something that will continue to get more complex such that you’ll have to hand over the decision making process to the machines, to some extent. I think we’re going to need to have safety measures and controls such that we can disable or enable certain functions, as needed. I think there will be a [human] on-the-loop control concept, but in terms of people being inside those decision loops, that’s not going to be possible.”
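Boyle’s distinction between “in-the-loop” and “on-the-loop” control can be sketched in code: the machine decides at machine speed, while a human supervisor enables or disables whole functions rather than approving each engagement. The sketch below is a hypothetical illustration; the class, function names and the 0.9 confidence threshold are invented for this article, not taken from any fielded system.

```python
# Hypothetical sketch of "on-the-loop" supervision: per-decision approval is
# removed, but a human can switch capabilities on or off as a safety control.
from dataclasses import dataclass, field


@dataclass
class OnTheLoopController:
    # Capabilities the human supervisor has currently authorized.
    enabled_functions: set[str] = field(default_factory=lambda: {"track"})

    def set_function(self, name: str, enabled: bool) -> None:
        """Human supervisor enables or disables a capability."""
        if enabled:
            self.enabled_functions.add(name)
        else:
            self.enabled_functions.discard(name)

    def decide(self, threat_confidence: float) -> str:
        """Machine-speed decision; no per-engagement human approval."""
        if "engage" in self.enabled_functions and threat_confidence > 0.9:
            return "engage"
        return "track only"


controller = OnTheLoopController()
print(controller.decide(0.95))           # "track only" -- engagement disabled
controller.set_function("engage", True)  # supervisor authorizes engagement
print(controller.decide(0.95))           # "engage"
```

The design choice mirrors Boyle’s point: the human acts as a supervisor who can disable or enable functions, rather than a gatekeeper inside every individual decision loop.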
