The intelligence, surveillance and reconnaissance division at the Combined Air Operations Center at Al Udeid Air Base, Qatar, provides a common threat and targeting picture that is key to planning and executing theater-wide aerospace operations to meet the Combined Forces Air Component commander’s objectives. Photo courtesy of the U.S. Air Force
The Defense Department’s artificial intelligence (AI) development program is set to deliver its first round of algorithms in December for deployment on operational warfighting sensors, according to project lead Lt. Gen. Jack Shanahan.
Project Maven, also known as the Algorithmic Warfare Cross-Functional Team, stood up this past April as a Defense unit dedicated to AI innovation, part of the department’s push to broaden its knowledge base with new capabilities and free its analysts from menial tasks. Shanahan, the Office of the Under Secretary of Defense for Intelligence’s director for Defense Intelligence (warfighter support), leads the team’s effort to develop AI capabilities for the MQ-1 Predator and MQ-9 Reaper unmanned aircraft systems and the wide-area motion imagery sensor.
“The idea of human and machine together is far more powerful than either the human or machine by themselves. This is where we need to go in the department,” said Shanahan at NVIDIA’s GPU Technology Conference on Monday.
The group’s main focus remains implementing AI tools for full-motion video exploitation, intended to maximize the information drawn from sensors and to free analysts from spending long hours on tasks below their talent level.
“I have a very specific problem: it is this avalanche of data that we are not capable of fully exploiting,” said Shanahan.
Shanahan cited the DoD’s wide-area motion imagery sensor, used on aircraft to capture eight kilometers of visual data at a time. It currently takes around 20 analysts working 24 hours to examine the data collected by the sensor, and even then only 6 to 12 percent of that information can be exploited.
The AI algorithms to be deployed in December will work to speed up and automate this process, freeing analysts from limited desk work to take on more complex problem sets.
“It is an unacceptable waste of a resource, which is one of the best sensors that we have available to us,” said Shanahan. “What I want is for analysts to do more analytical work. The idea of giving them time back to think and actually put things in context, rather than staring at full-motion video screens.”
Project Maven’s algorithms are meant to automate and augment processes related to computer vision, data tracking and geo-registration. Throughout 2018, the group will run follow-up sprints meant to further reduce the number of platforms required and to begin paring down threshold and objective requirements.
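To make the kind of automation described above concrete, here is a minimal illustrative sketch of a detect-and-flag pass over full-motion video. It is not Project Maven code: the input file name is hypothetical, and OpenCV’s stock HOG person detector stands in for whatever models the program actually fields. The point is only that a machine can pre-screen hours of footage and hand an analyst a short list of timestamped detections.

```python
# Illustrative sketch only: a generic detect-and-flag loop over full-motion
# video, of the kind Project Maven aims to automate. The video path is a
# hypothetical stand-in, and OpenCV's bundled HOG person detector is used
# purely as a placeholder model.
import cv2

VIDEO_PATH = "sample_fmv_clip.mp4"   # hypothetical input clip

# Off-the-shelf pedestrian detector shipped with OpenCV (placeholder model).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(VIDEO_PATH)
frame_index = 0
detections_log = []                  # (frame, x, y, w, h) records for review

while True:
    ok, frame = cap.read()
    if not ok:
        break                        # end of clip
    # Detect candidate objects in the current frame.
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        detections_log.append((frame_index, int(x), int(y), int(w), int(h)))
    frame_index += 1

cap.release()
print(f"Flagged {len(detections_log)} detections across {frame_index} frames")
```

In a fielded system the output would feed tracking and geo-registration stages rather than a simple log, but even this crude filter shows where the “time back to think” Shanahan describes would come from: the analyst reviews a handful of flagged moments instead of every frame.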
“Right now we’re talking about these algorithms being on the processing or exploitation workstations, but the end goal has to be for these algorithms to also be at the tactical edge on platforms and sensors,” said Shanahan. “I would be so bold as to suggest the Department of Defense should never buy another weapon system for the rest of its natural life without artificial intelligence baked into it.”
This article was originally published by Defense Daily, an Avionics sister publication. It has been edited.