Euro Security Watch with Mathew J. Schwartz


Killer Use Cases for AI Dominate RSA Conference Discussions

Use Cases: Cybersecurity Offense, Defense and Safeguarding AI Itself, Experts Say
RSA Conference 2023 returned to San Francisco's Moscone Center. (Photo: RSA)

Pre-RSA social media gaming predicted it. Many predicted they would loathe it. And it happened: Discussions at this year's RSA Conference again and again came back to generative artificial intelligence - but with a twist. Even some of the skeptics professed their conversion to the temple of AI, whose overlord, for better or worse, is poised to preside over human activity with indifference to good or evil intent.


Count Israeli cryptographer Adi Shamir - the S in the RSA cryptosystem - as a convert. One year ago, speaking at RSA, he thought AI might have some defensive use cases but didn't see it posing an offensive threat.

"I've completely changed my mind," he said on stage Tuesday, as a returning panelist for the annual Cryptographer's Panel. "I now believe that the ability of ChatGPT to produce perfect English, to interact with people, is going to be misused on a massive scale," not least by social engineers, he said.

Experts at RSA identified three broad use cases for generative AI: use by cybersecurity defenders, including inside security operations centers; use by attackers, including social engineers and malware developers; and directly safeguarding the integrity and reliability of the large language models and generative tools themselves.

There will be a need to "fight AI with AI," Nikesh Arora, chairman and CEO of Palo Alto Networks, told my colleague Michael Novinson during an interview at RSA Conference 2023.

The pace of malicious innovation in this realm is notable. Only weeks ago, a malware developer demonstrated how malicious code could send ChatGPT, via an API, a set of plain-English instructions describing the capabilities it wanted, and receive back a unique block of code to use against a target, Mikko Hypponen, chief research officer at WithSecure, told me during an interview at the conference.

Historically, malware users have "repacked" their code, recompiling it to foil signature-based detection. This ChatGPT-backed approach completely rewrote the malware, making it that much tougher to spot, Hypponen said. While the malware he saw was more of a proof of concept - sent to him by a malware developer proud of the accomplishment - he predicts that before long, malware will arrive carrying built-in generative AI capabilities, making it all the more virulent.
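
To see why a from-scratch rewrite is harder to catch than repacking, consider a minimal sketch, in Python, of the detection problem - the snippets and blocklist below are purely illustrative, not drawn from any real malware or antivirus product. Two functionally identical pieces of code produce entirely different byte-level fingerprints, so a signature built from one never matches the other:

import hashlib

# Two functionally identical snippets; a generative model can emit
# endless such variants, each with a different byte-level fingerprint.
variant_a = b"for i in range(10): print(i)"
variant_b = b"i = 0\nwhile i < 10:\n    print(i)\n    i += 1"

def signature(code: bytes) -> str:
    # The kind of byte-level fingerprint that signature-based scanners match on.
    return hashlib.sha256(code).hexdigest()

known_bad = {signature(variant_a)}  # blocklist built from the first variant
print(signature(variant_b) in known_bad)  # False: the rewrite evades the signature

Repacking also changes the fingerprint, but defenders can often unpack or normalize the result back to known code; a genuine rewrite leaves no stable bytes to recover, which is what makes the approach Hypponen describes harder to counter.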

Hence, tools such as ChatGPT are already being tapped by criminals to do more than simply write better phishing emails.

Where might this all end up? One emerging risk is that generative AI will push beyond social networks and television to create "more of an immersive experience" - one that at its endpoint, or "meta point," will be indistinguishable from reality, futurist Winn Schwartau told me (see: Electronic Pearl Harbor Prophet Issues Metaverse Warning).

Unfortunately, such immersion seems certain to get put to malicious or detrimental use by those peddling extremist politics, cybercriminals seeking an illicit payday, or nation-states practicing information warfare, perhaps at the expense of democracy.

Arresting attempts to use these capabilities against us, at an individual or societal level, will require measuring the "reality distortion" such environments afford, using legislation to define public safety limits - including for AI - and teaching better critical thinking skills, Schwartau said.

Numerous governments are moving quickly to question the risks posed by generative AI and to propose safeguards. Italy has just lifted its ban on ChatGPT after the privacy safeguards it requested were added, and Spain and France are continuing to probe the technology (see: European Scrutiny of ChatGPT Grows as Probes Increase).

The G7 group of the world's developed economies on Sunday called for taking a "risk-based" approach to tackling AI, Reuters reported.

"We plan to convene future G7 discussions on generative AI which could include topics such as governance, how to safeguard intellectual property rights including copyright, promote transparency" as well as "address disinformation," G7 ministers said.

European lawmakers, meanwhile, have rewritten their proposed AI Act in light of rapid advances, including requirements for unprecedented levels of transparency from providers, Reuters reported.

Artificial intelligence has arrived, albeit not in the form previewed by so many movies - as a malicious entity directly bent on our destruction. Generative AI turns out to be a reflection of who and what we say and do, with fresh use cases in store that we collectively haven't even imagined. Certainly it's going to get used not just for good but also for ill.

As Schwartau said, unless we properly prepare, "the merger of the technology and the social structure of what we've got going on today is a recipe for disaster."



About the Author

Mathew J. Schwartz

Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor of DataBreachToday and for European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.



