ChatGPT Showcases Promise of AI in Developing Malware
Check Point Spotted Hacking Forum Posters Probing AI Tool's Malware Capabilities

Cybercriminals have lost little time in converting the artificial intelligence capabilities of ChatGPT to malicious purposes by using it to generate malware scripts.
Security researchers at Check Point found members of the low-level hacking community Breach Forums posting the results of their interactions with the OpenAI-developed tool over the past few weeks. The posts include an AI-assisted Python script that could be used for ransomware-style extortion and a Java snippet for surreptitiously downloading Windows applications.
ChatGPT's coding abilities are "pretty basic," Check Point researchers wrote in a Jan. 6 blog. But "it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad."
One AI-generated script posted on Breach Forums and reviewed and verified by researchers amounts to an info stealer that looks for common file types, copies them to a random folder, compresses them and uploads them to a hardcoded FTP server. The poster wrote that he had worked with ChatGPT to generate the script by specifying what the program should do and what steps should be taken - in effect, writing pseudo-code.
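The collection logic the researchers describe - find common file types, copy them to a folder, compress them, upload them - can be sketched in a few lines of Python. The sketch below is a hypothetical, defanged reconstruction for illustration only: the target extensions are assumed, and the final exfiltration step to a hardcoded FTP server is deliberately left as an inert comment rather than implemented.

```python
import os
import shutil
import tempfile
import zipfile

# Assumed set of "common file types" - the forum post did not list them.
TARGET_EXTENSIONS = {".docx", ".pdf", ".xlsx", ".jpg"}

def collect_and_compress(root_dir: str) -> str:
    """Copy matching files under root_dir into a staging folder and zip them.

    A hypothetical sketch of the stealer's collection phase; the upload
    phase is intentionally omitted.
    """
    staging = tempfile.mkdtemp(prefix="staged_")
    for dirpath, _, filenames in os.walk(root_dir):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in TARGET_EXTENSIONS:
                shutil.copy2(os.path.join(dirpath, name), staging)

    archive = staging + ".zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(staging)):
            zf.write(os.path.join(staging, name), arcname=name)

    # The script Check Point reviewed would now push `archive` to a
    # hardcoded FTP server (e.g. via ftplib); omitted here on purpose.
    return archive
```

The simplicity is the point of Check Point's warning: each step is ordinary standard-library file handling, so a prompt written as step-by-step pseudo-code is enough to get a working result.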
Another Breach Forums member discussed "abusing ChatGPT" to create dark web marketplaces.
ChatGPT is the creation of artificial intelligence research laboratory OpenAI. The deep-learning system that underpins the tool is Generative Pre-trained Transformer 3, or GPT-3. A 2021 paper from OpenAI describing Codex, an artificial intelligence coding tool also based on GPT-3, concluded that the current state of the art for machine-generated code does not substantially increase online threat levels.
GPT-3 is best used for generating code "that can be incorporated as components of more complex systems." Codex struggled to generate SQL and shell injection payloads, OpenAI researchers wrote, but "it had no problem generating code for recursively encrypting files in a directory."
At least one close observer of OpenAI's inner workings agrees that AI poses a mounting threat to digital security. "I agree on being close to dangerously strong AI in the sense of an AI that poses e.g. a huge cybersecurity risk," tweeted OpenAI CEO Sam Altman in early December. Altman also predicted that artificial general intelligence could be a reality within the next decade, "so we have to take the risk of that extremely seriously too."