The Pentagon promises to use artificial intelligence for good, not evil – Military Times
The military has its eye on artificial intelligence solutions to everything from data analysis to surveillance, maintenance and medical care, but before the Defense Department moves full steam ahead into an AI future, they’re laying out some ethical principles to live by.
Defense Secretary Mark Esper signed off on five guidelines in a memo released Monday.
“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order,” Esper wrote. “AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior.”
The list is the result of a 15-month study by the Defense Innovation Board, which is made up of academics and executives in tech and business, who presented their proposed principles in a public forum at Georgetown University in October.
According to Esper’s Monday memo, the Pentagon pledges that its AI efforts will be: 1) Responsible, 2) Equitable, 3) Traceable, 4) Reliable and 5) Governable.
In short, any AI technology should be carefully developed and used, have safeguards against bias in data analysis, be auditable so that the sources of mistakes can be found and corrected, have narrowly defined parameters for use, and have backup plans for shutdown in case something goes wrong.
“We owe it to the American people and our men and women in uniform to adopt AI principles that reflect our nation’s values of a free and open society,” Lt. Gen. Jack Shanahan, head of the Joint Artificial Intelligence Center, told reporters Monday. “This runs in stark contrast to Russia and China, whose use of AI tech for military purposes raises serious concern about human rights, ethics and international norms.”
China, for example, has used its facial recognition AI technology to surveil its citizens’ public activity.
“I do not believe … that China or Russia are having any sort of conversation like we’re having today,” Shanahan said, taking questions from the press in a public broadcast.
The White House is scheduled to release its own set of principles this summer, he said.
“Our intentions are clear: We will do what it takes to ensure that the U.S. military lives up to our nation’s ethical values while maintaining and strengthening America’s technological advantage,” Shanahan said.
DoD is at the “ground floor” of its AI journey, according to Shanahan, so there are few concrete examples of AI technology that will be subject to these principles.
The JAIC, his organization, has been developing a tool for UH-60 Black Hawk helicopter maintenance, he said, as well as cyber defense technology and solutions to make military health care more efficient.
One past project, dubbed Maven, made headlines in 2018, when Gizmodo reported that Google had been quietly allowing DoD access to a software program that could use AI to recognize objects in drone footage.
Some Google employees were dismayed to find that their employer was potentially helping the military home in on human targets.
“We would be doing these AI ethics principles regardless of the angst in the tech industry,” Shanahan said. “And sometimes I think the angst is a little hyped, but we do have people who have serious concerns about working with the Department of Defense.”
In hindsight, he said, secretly launching Project Maven would not line up with the new ethics principles.
“If we would have had the AI ethics principles three years ago … and our starting point with one of the big tech companies was that — and we were transparent about what we were trying to do and why we were trying to do it — maybe we would have had a different outcome,” he said.