
White House Official Warns of AI Risks in 2024 Elections

No 'Magic Solution' to Prevent Malicious Use of AI in Elections, OSTP Chief Says
Arati Prabhakar, director of the White House's Office of Science and Technology Policy, speaking at the World Economic Forum on Tuesday

There's no "magic solution" to fully prevent the potentially harmful impact of artificial intelligence on the 2024 elections in the United States and around the world, a White House official warned at the 2024 World Economic Forum.

See Also: Safeguarding Election Integrity in the Digital Age

Arati Prabhakar, director of the White House's Office of Science and Technology Policy, described generative AI as a "deeply human" technology with significant risks due to its potential to "dramatically accelerate and amplify the erosion of information integrity."

"This issue of the role of AI in our democracies is one where civil society plays a particularly important role," Prabhakar said during a World Economic Forum event hosted by Axios in Davos, Switzerland.

Her comments come as ChatGPT maker OpenAI announced new steps the company is taking to deter the malicious use of its models in the 2024 U.S. election, amid growing concerns about the potential impact of AI-generated election misinformation (see: OpenAI Combats Election Misinformation Amid Growing Concerns).

The company said in a Monday blog post that it would direct users to the Can I Vote website when asked certain questions about voting in the 2024 election while experimenting with new tools to "empower voters to assess an image with trust and confidence in how it was made."

Prabhakar said the departments of Justice and Homeland Security and the U.S. intelligence community have worked in close collaboration with state and local partners in recent months to ensure the integrity of election systems nationwide and added that the White House has taken additional steps to combat the malicious use of AI systems domestically and abroad.

"One of the areas of focus in President Biden's recent executive order was specifically fraud and deception," Prabhakar said of the president's executive order on AI, noting that the order tasks the Department of Commerce with developing guidance for labeling AI-generated content (see: White House Issues Sweeping Executive Order to Secure AI).

"One of the most important things is for citizens to be able to know, when they see information, that it is authentically from their governments," she said. "We'll play that active role."


About the Author

Chris Riotta


Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.



