The Defense Department released its first official artificial intelligence ethics principles that it says will shape how it develops and implements the technology.
The Defense Department has officially adopted a set of principles to ensure ethical artificial intelligence adoption, but much work is needed on the implementation front, senior DOD tech officials told reporters Feb. 24.
The five principles [see sidebar], which are based on the recommendations of the Defense Innovation Board's 15-month study on the matter, represent a first step: generalized intentions around AI use and adoption, including that AI be responsible, equitable, traceable, reliable, and governable. DOD released the principles during a news briefing Feb. 24.
Those AI ethics guidelines will likely be woven into a little bit of everything, much as cybersecurity is, from data collection to testing, DOD CIO Dana Deasy told reporters.
"We need to be very thoughtful about where that data is coming from, what was the genesis of that data, how was that data previously being used and you can end up in a state of [unintentional] bias and therefore create an algorithmic outcome that is different than what you're actually intending," Deasy said.
The announcement comes a year after DOD released its AI strategy and after years of public protest from tech workers against lethal AI and autonomous weapons systems. Lt. Gen. Jack Shanahan, the head of DOD's Joint Artificial Intelligence Center, has previously said there were "grave misconceptions" about DOD's intentions and technological ability and vowed to bring on an AI ethicist to help shape strategy.
The officials underscored that DOD would not field capabilities that did not meet the principles, but also acknowledged that responsible AI still needs to be defined, and that ongoing discussions and exercises will help shape "who is held responsible" at each stage, from software development to fielding.
More specific guidance is needed. Deasy said the committee will develop further principles on how to bring in data, develop solutions, build and test algorithms, and train operators to watch for unintended effects. Each of the services and combatant commands would be part of this effort.
Those implementation guidelines will come out of the AI Executive Steering Group, which has a subgroup dedicated to implementation, the officials said. (The officials would not name who was leading the implementation plan or who was in the steering group.)
The group will also work on procurement guidance, technological safeguards, organizational controls, risk mitigation strategies, and training measures.
"These are proactive and deliberate actions" that form the foundation for practitioners but are malleable enough to adapt as tech evolves, Shanahan said.
Shanahan said DOD was also looking to include "non-obligatory language in contracts" that would ask companies how they planned to abide by the principles when building algorithms and tools, though that would not amount to enforcement.
"I'm not suggesting enforcement at the beginning of it," he said. "These are early conversations to be had with our industry partners to say now that we've established these principles for AI ethics, could you develop the capabilities that address each of the five at some point along the way through [research, development, testing and evaluation]."