Congress Looks Beyond Voluntary Commitments on Global AI Use
Lawmakers Explore Punishments for Foreign Adversaries in Violation of AI Guardrails
The U.S. federal government will need new sanctions authorities and additional enforcement powers to punish foreign adversaries that disregard emerging international norms for artificial intelligence systems, lawmakers and Department of State officials said Wednesday.
The White House has steadily accrued voluntary commitments from leading AI developers to help ensure the safe and secure development of advanced AI models. Companies including Amazon, Google and Microsoft have agreed to safeguards such as pre-deployment AI security testing, risk management information sharing and public reporting of AI system capabilities and limitations (see: 7 Tech Firms Pledge to White House to Make AI Safe, Secure).
But that won’t be enough to thwart foreign adversaries with malicious intentions and access to emerging technologies, top cyber officials from the State Department warned in testimony before the Senate Foreign Relations Committee.
Nathaniel Fick, ambassador at large for the U.S. Department of State Bureau of Cyberspace and Digital Policy, told lawmakers the agency is "not naive" about whether adversaries will voluntarily comply with standards that advance democratic values and protect human rights.
"We need to confront an uncomfortable reality in the software era, which is that controlling access to these technologies is somewhere between very difficult and impossible," he said.
The State Department is collaborating with international partners on AI policy, according to Matthew Graviss, who was appointed in 2021 as the agency's first chief data and artificial intelligence officer. Those efforts have produced a code of conduct, agreed upon by members of the Group of Seven industrialized democracies, that establishes a global set of standards for advanced AI developers.
In October, President Joe Biden encouraged Congress to pass bipartisan legislation for the use of AI systems after signing a sweeping executive order that aims to set new standards and regulations for the emerging technology (see: Biden Urges Congress to Take Action Following AI Order).
The order directs developers of advanced AI models to share safety results with the federal government, establishes a government AI Safety and Security Board and instructs the National Institute of Standards and Technology to develop rigorous standards for federal agencies deploying AI systems.
Sen. Ben Cardin, D-Md., chair of the Senate Foreign Relations Committee, said he expects legislative action throughout 2024 on AI "to try and get a handle on appropriate guardrails, and to give us the tools so that we can continue to lead in innovation, but also in the responsible use of AI."
Traditional sanctions have often served as effective tools to address economic and diplomatic international challenges, such as preventing adversaries from stealing national secrets or supporting terrorism, Cardin said.
"It may not be as easy to determine with the use of AI tools, but if an adversary is not identifying the source or using it for disinformation to undermine America's national security, we're going to have to have more direction on how we can assist," he said. "Or we may have to try to do that on our own."