WASHINGTON, DC – Today, U.S. Senators Rob Portman (R-OH) and Martin Heinrich (D-NM), the co-founders of the Senate Artificial Intelligence Caucus, along with Caucus members Joni Ernst (R-IA) and Brian Schatz (D-HI), announced that the Senate passed their briefing requirement on explainable artificial intelligence (AI), which was included in the FY 2020 National Defense Authorization Act (NDAA) conference report. Explainable AI refers to artificial intelligence systems that allow human users to understand, trust, and characterize the strengths and weaknesses of the system’s decision-making process. Deploying explainable AI will have profound implications for society and the economy, including the use of autonomous weapons and vehicles, as well as efforts to mitigate the bias that may result from automated decision making. The provision would require the Department of Defense to brief Congress on a number of factors related to its use of explainable AI, including the extent to which the Department currently uses and prioritizes explainable AI, the limitations of the technology, and potential roadblocks to its effective deployment. The conference report now heads to the president’s desk for signature.
“Explainable AI has a lot to offer. As we deploy autonomous systems, we also should understand why those systems make the decisions they do. Because it requires our military to think about the uses of explainable AI, the capabilities and limitations of the technology, and the obstacles to deploying explainable systems, the inclusion of this requirement in the NDAA conference report is a step toward that goal,” said Senator Portman.
“Our national security has reached a point where it needs more advanced artificial intelligence systems – but we also need to know that we can trust these technologies and the premise behind any form of autonomous decision making,” said Senator Heinrich. “For this reason, it is so important for the U.S. Department of Defense to better understand, and move forward with, emerging AI with clarity behind its decisions – in other words, explainable AI. The inclusion of this requirement in the NDAA conference report is a step in the right direction to better utilize groundbreaking opportunities in AI.”
“As chairman of the Emerging Threats and Capabilities Subcommittee, and as a former company commander, I strongly believe we must make sure our servicemembers are equipped with the latest, cutting-edge technology. Artificial intelligence is a key part of that effort, but we still have a lot to learn about this evolving technology, including better understanding its capabilities and limitations. This year’s defense bill includes our provision to help address those concerns so our military can take advantage of the newest technologies and maintain the upper hand on the battlefield,” said Senator Joni Ernst, chairman of the Senate Armed Services Subcommittee on Emerging Threats and Capabilities.
“As the military utilizes AI more frequently on the battlefield and in other operations, it’s critical that we know how the technology makes decisions,” said Senator Schatz. “This new law will help us better understand AI’s potential and limits, as well as how to incorporate human judgment into the use of this technology.”