
US Federal Agencies Urge Firms to Prepare for Deepfakes

The NSA, FBI and CISA Find the AI-Generated Media 'Particularly Concerning'

U.S. federal agencies are advising organizations to hone their real-time verification capabilities and passive detection techniques to alleviate the impact of artificial intelligence-generated deepfakes.


Hackers can use this highly realistic and "particularly concerning" type of synthetic media to damage an organization's brand, impersonate its leaders for financial gain, and gain access to its network through fraudulent communications in order to steal personal, financial and internal security information, the National Security Agency, the FBI and the Cybersecurity and Infrastructure Security Agency said (see: FBI: Deepfake Fraudsters Applying for Remote Employment).

Malicious state-sponsored actors may not be using deepfakes to a significant extent yet, but fake images generated by AI will become a challenge as the technology's easy accessibility means less capable malicious actors can exploit its mounting verisimilitude. A sophisticated fake that previously took a professional weeks to construct with specialized software can now be produced in a fraction of that time by someone with limited or no technical expertise.

An immediate, obvious application for deepfakes is manipulating employees through fake online accounts and fraudulent text and voice messages to get past a company's technical defenses. Criminals have already put large language models such as OpenAI's ChatGPT to work generating convincing phishing emails (see: Yes, Virginia, ChatGPT Can Be Used to Write Phishing Emails).

A combination of those text outputs with computer-generated imagery "is being used to produce even more convincing fakes," the agencies warn.

Another application poised to grow more salient is disinformation, including wartime propaganda. Early in Kyiv's defensive war against Russia, Ukrainian authorities warned about a possible onslaught of Russian deepfake videos. Days later, unknown adversaries posted onto a hacked Ukrainian news site a deepfake video showing Ukrainian President Volodymyr Zelenskyy supposedly capitulating to Russia.


The cost and technical barriers to using deepfakes and generative AI for malicious purposes will continue to plummet, but law enforcement and other defenders' capability to identify and mitigate deepfakes will also improve, the agencies said.

Real-time verification and passive detection capabilities can spot deepfakes and determine the source of the media. Detection typically relies on passive forensic techniques: security personnel develop methods that look for evidence of manipulation significant enough to flag the media for further inspection by analysts.

"This form of detection is a cat and mouse game; as detection methods are developed and made public, there is often a quick response from the generation community to counter them," the agencies said.
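The triage step the agencies describe - escalating media only when a detector finds sufficiently strong evidence of manipulation - can be sketched roughly as follows. This is purely illustrative: `detector_scores` stands in for the output of some hypothetical passive deepfake detector, and the threshold value is an arbitrary assumption, not anything specified by the agencies.

```python
def triage(detector_scores, threshold=0.8):
    """Flag media for analyst review when any per-frame manipulation
    score from a (hypothetical) passive detector exceeds the threshold.

    detector_scores: iterable of floats in [0, 1], one per frame or region.
    Returns a small summary dict an analyst queue could consume.
    """
    peak = max(detector_scores, default=0.0)
    return {"peak_score": peak, "needs_review": peak >= threshold}


# One suspicious frame is enough to escalate the whole clip.
clean = triage([0.05, 0.10, 0.07])
suspect = triage([0.12, 0.91, 0.20])
```

In practice the threshold would be tuned continually, precisely because of the cat-and-mouse dynamic the agencies note: as generation techniques adapt, yesterday's score distributions stop being reliable.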

Authentication methods are active forensic techniques purposely embedded at the time the media is created or edited to make its source transparent. Currently used methods include digital watermarking, embedding active signals during real-time capture to verify liveness, and adding a cryptographic asset hash on the device.
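As a rough illustration of the hash-based approach, a capture device could record a digest of the media bytes at creation time, letting any later recipient detect modification by recomputing it. This is a minimal sketch using SHA-256; real provenance schemes (such as the C2PA standard) go further, cryptographically signing such hashes and binding them to device and edit metadata.

```python
import hashlib


def asset_hash(media_bytes: bytes) -> str:
    """Compute a SHA-256 digest of the raw media at capture time."""
    return hashlib.sha256(media_bytes).hexdigest()


def verify(media_bytes: bytes, recorded_hash: str) -> bool:
    """Recompute the digest and compare it to the one recorded on-device.

    Any change to the media bytes produces a different digest, so a
    mismatch indicates the asset was altered after capture.
    """
    return asset_hash(media_bytes) == recorded_hash


original = b"raw camera frame bytes"      # stand-in for captured media
recorded = asset_hash(original)           # stored alongside the asset
tampered = original + b" extra"           # any post-capture edit
```

Note that a bare hash only proves integrity, not origin; without a signature over the hash, an attacker who swaps the media could simply swap the hash too.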

About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.

