FBI: Hackers Use AI for Sextortion, Explosives, Bad Websites
China Seeks to Level AI Playing Field by Stealing IP From US, FBI Official Says
U.S. federal law enforcement sounded warnings Friday over malicious use of artificial intelligence to obtain explosives, advance sextortion schemes and propagate malware through malicious websites that appear legitimate.
Intelligence officials wage a two-front battle against misuse of AI, grappling with emboldened criminals who use AI for nefarious purposes as well as nation-state actors such as China who seek to level the AI playing field by stealing intellectual property from the United States, a senior FBI official told journalists Friday. The official spoke on background, meaning the official's name can't be published.
"AI has significantly reduced some technical barriers, allowing those with limited experience or technical expertise to write malicious code and conduct low-level cyber activity," the official said.
"Simultaneously, while still imperfect in generating code, AI helps more sophisticated actors in the malware development process create novel attacks, enabling convincing delivery options and effective social engineering."
Fake Websites, Real Malware
The FBI has in recent months observed a proliferation of fraudulent, AI-generated websites infected with malware, featuring engaging content and multimedia that hackers use to trick unsuspecting users. Some of the fake websites have more than 1 million followers and a significant amount of user engagement, according to the FBI official.
The official said the FBI is working with partners to improve its ability to authenticate multimedia content and reliably determine what is synthetically generated. The bureau is also alerting hosting providers to any illegal activity that may be occurring on their platforms.
The democratization of AI allows criminal actors to train and develop their own indigenous AI models at home for little or no cost, without any of the safeguards that larger companies have instituted, the official said. Many open-source AI tools available online can be readily applied to traditional criminal schemes such as defrauding the elderly, demanding ransom or bypassing bank security (see: Supply Chain, Open Source Pose Major Challenge to AI Systems).
Hackers with more technical inclination have modified, developed or otherwise adjusted open-source models to fit whatever their specific criminal needs are, according to the official. Threat actors have also explored and used AI models available on the dark web that provide capabilities that aren't accessible through traditional open-source AI models provided by large legitimate companies, the official said.
"AI has enabled threat actors to produce and disseminate realistic synthetic content for negligible time and monetary costs," the official said.
A Dangerous New Dimension to Sextortion, Explosives
Malicious actors have created sexually themed deepfake images that appear true to victims' likenesses and circulated them on social media forums or pornographic websites. In some cases, the official said, the deepfakes are used to harass and extort victims.
The official said criminals use AI technology to create sexually explicit content to victimize children online and establish a foothold over the victim so that they'll comply with further demands. Whether the sexually explicit content is generated through AI or through contact with the victim, the official said, it's subject to the same type of investigation by the FBI and subsequent criminal punishment (see: US Senate Leader Champions More AI Security, Explainability).
On a similarly grave front, the official said AI can help terrorists simplify the production of dangerous chemical or biological substances and increase those substances' potency. Both cybercriminals and terrorists have turned to open-source AI models in hopes of finding ways to create different types of explosives, according to the official.
Some of these criminals have posted information online about their engagements with the AI models and the success they've had defeating security measures. In a number of cases, the official said, individuals have successfully elicited instructions for creating explosives. The official said the FBI is working with AI firms to prevent this type of content from being released into the public domain.
China's Efforts to Catch Up on AI
China has been particularly brazen in its efforts to steal American AI technology and data in hopes of running better AI programs and enabling foreign influence campaigns, the official said.
Major targets for intellectual property theft are U.S. companies, universities and government research facilities, and the hackers aim to transfer AI algorithms, data expertise and computing infrastructure back to China. Adversaries have gone beyond traditional computer network exploitation to acquire stolen IP and have tapped nontraditional collectors and legal inbound foreign investment.
China has sought to advance its AI programs by acquiring U.S. technology, expertise, underlying training data and model weights using a variety of methods across many sectors of the American economy. China seeks to be a leader in AI and to shape the global standards around how the world uses AI, the official said.
"U.S. talent is one of the most desirable aspects of the AI supply chain that our adversaries need," the official said. "The U.S. sets the gold standard globally for the quality of research and development, and nation-states are actively using diverse means to transfer cutting-edge AI research and development to aid their military and civilian programs."