
RSAC Cryptographers' Panel Tackles AI, Post-Quantum, Privacy

Panelists Discuss Building Safe AI Ecosystems, Post-Quantum Crypto Challenges
The RSA Conference 2024 Cryptographers' Panel, moderated by Whitfield Diffie, seen at far left (Image: Mathew Schwartz/ISMG)

The annual Cryptographers' Panel has been a fixture at RSA Conference since the event's launch in 1991, bringing together leading cybersecurity thinkers to review and debate the big topics of the day.


This year's panel, moderated by public key cryptography pioneer Whitfield Diffie, touched on everything from the safe use of artificial intelligence and the adversarial risk it poses, to securing its use inside the enterprise and addressing emerging privacy concerns. The panelists also reviewed a recent threat to post-quantum cryptography and urged organizations to ensure they have a plan in place to adopt the forthcoming post-quantum standards from the U.S. National Institute of Standards and Technology.

Here are highlights from this year's discussion:

Lattice-Based Crypto Attack? Crisis Averted

Are post-quantum cryptosystems safe to use? That question came on the heels of a recent panic moment for the cryptographic community.

On April 10, Chinese cryptographer Yilei Chen of Tsinghua University and the Shanghai Artificial Intelligence Laboratory published a 60-page paper detailing "a polynomial time quantum algorithm for solving the learning with errors problem." Learning with errors, or LWE, is a mathematical problem in which a secret must be recovered from linear equations that have been deliberately perturbed with small errors; its presumed hardness is what many lattice-based cryptosystems rely on.
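
For readers who want to see the object at the center of the scare, here is a minimal numeric sketch of an LWE instance, written in Python with NumPy and using toy parameters only:

    import numpy as np

    rng = np.random.default_rng(0)
    q, n, m = 97, 4, 8                    # toy modulus and dimensions
    s = rng.integers(0, q, size=n)        # the secret vector
    A = rng.integers(0, q, size=(m, n))   # public random matrix
    e = rng.integers(-2, 3, size=m)       # small errors added on purpose
    b = (A @ s + e) % q                   # public noisy samples

    # Without e, solving A*s = b for s would be simple linear algebra; with the
    # errors, recovering s from (A, b) is the problem Chen's paper claimed - and
    # then retracted - a quantum algorithm for.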

Finding a way to use quantum computers to solve LWE problems could compromise all lattice-based cryptosystems, including some post-quantum cryptography.

Thankfully, after eight days of nonstop peer review, researchers identified a flaw in the algorithm detailed in the paper, said panelist Adi Shamir, co-inventor of the RSA cryptosystem - he's the "S" - and also the Borman Professor of Computer Science at Israel's Weizmann Institute. Chen in an update to the paper confirmed that "the algorithm contains a bug, which I don't know how to fix."

"It was a very close call," Shamir said of the research and described how the community came together to study the findings as "peer review at its very best." "So the situation on the ground, which claimed that it could kill lattice-based cryptography, including fully homomorphic encryption schemes, has been saved at the moment," he said.

"That's why they say that cryptographers seldom sleep at night," said Craig Gentry, CTO of TripleBlind and inventor of the first fully homomorphic encryption system. Such systems facilitate complex analysis of encrypted data without having to decrypt the data. Despite Chen's "serious" research, Gentry said, "a month ago, we didn't have a quantum attack on lattice-based crypto systems, and we don't today, so I think not much has changed."

Migrating to Post-Quantum Crypto

The research is a reminder that organizations need to develop a post-quantum migration plan and stick to it, said panelist Debbie Taylor Moore, vice president and senior partner for cybersecurity at IBM Consulting.

"It's really important that we not panic but that we be happy that folks are challenging these algorithms," she said.

By summer's end, she expects to see NIST release final standards for three post-quantum algorithms: ML-KEM aka CRYSTALS-Kyber for key establishment, and ML-DSA aka CRYSTALS-Dilithium and SLH-DSA aka SPHINCS+ for digital signatures.

When that happens, "there's going to be a tremendous amount of pressure at the C-suite level to understand what the plan is as an organization," she said. Security personnel will need visibility into the crypto systems inside their organization, including in any products they use, so they can prepare and execute a viable migration strategy (see: Preparing for Post-Quantum? Learn What Cryptography You Have).
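
As a rough illustration of what that visibility might look like - not any particular vendor's tooling - a short script can inventory the public-key algorithms used in deployed certificates. This sketch assumes Python's widely used "cryptography" package; the certificate directory and labels are illustrative:

    from pathlib import Path
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa, ec

    def inventory_certs(cert_dir: str) -> dict:
        """Count certificates by public-key algorithm, flagging quantum-vulnerable ones."""
        counts = {}
        for pem in Path(cert_dir).glob("*.pem"):   # illustrative location and format
            cert = x509.load_pem_x509_certificate(pem.read_bytes())
            key = cert.public_key()
            if isinstance(key, rsa.RSAPublicKey):
                label = f"RSA-{key.key_size} (quantum-vulnerable)"
            elif isinstance(key, ec.EllipticCurvePublicKey):
                label = f"EC-{key.curve.name} (quantum-vulnerable)"
            else:
                label = type(key).__name__
            counts[label] = counts.get(label, 0) + 1
        return counts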

"Now, we've got a lot of third-party risk management to consider because you have dependencies from a lot of folks that you don't have control over," she said. Her guiding principles: "Start early" and work as quickly as possible.

The Impact of AI on Security

What is AI today? "Artificial intelligence is a little like the Porsche 911," Diffie said. "They kept that model for longer than any other that's been around, but they kept changing the car. And the model name 'AI' has been around since the mid-50s, and they keep doodling it."

Lately, AI often refers to generative AI. IBM's Moore classified its emergence as a black swan event - something unusual and unexpected that has a massive, sometimes destabilizing impact. In the case of AI, the destabilization was compounded by its arrival hot on the heels of another black swan event: the pandemic.

The rapid introduction of AI has especially caught security teams off guard, driving some to panic, given that it's suddenly their responsibility to secure it, Moore said.

"You have to secure your own organization's AI implementations and make sure that the company sees no harm and also that it's not harming customers," she said. "You have to be concerned with how you're using AI or to optimize operations from a security standpoint and then also beware how AI might be used against their organization."

While this might sound daunting, she said, these remain risk management exercises. And what is the discipline of enterprise cybersecurity if not risk management?

Security: AI for Assistance, Not Front Ends

What are the limits of AI when it comes to security operations? Panelist Tal Rabin, a senior principal scientist in Amazon Web Services' cryptography group and professor of computer science at the University of Pennsylvania, said that while AI can help set up systems, it should provide that assistance in a stand-alone fashion rather than serving as a front end to any given tool. The risk posed by such tools not behaving as expected - for example, by generating private keys that end up misconfigured despite the AI's assurances to the contrary - could have wide-ranging, dangerous repercussions, she said.
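
A minimal sketch of the kind of independent check Rabin's caution implies - verifying generated key material against policy rather than trusting an assistant's assurances - might look like the following. It assumes Python's "cryptography" package, and the policy values are illustrative:

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    MIN_RSA_BITS = 3072          # illustrative policy floor
    ALLOWED_EXPONENTS = {65537}  # reject unusual public exponents

    def check_rsa_private_key(pem_bytes: bytes, password: bytes | None = None) -> None:
        """Raise if an RSA private key fails basic policy checks."""
        key = serialization.load_pem_private_key(pem_bytes, password=password)
        if not isinstance(key, rsa.RSAPrivateKey):
            raise ValueError("expected an RSA private key")
        if key.key_size < MIN_RSA_BITS:
            raise ValueError(f"key too small: {key.key_size} bits")
        exponent = key.private_numbers().public_numbers.e
        if exponent not in ALLOWED_EXPONENTS:
            raise ValueError(f"unexpected public exponent: {exponent}")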

Another outstanding question, especially in Europe, where the General Data Protection Regulation is in effect, is how an AI model trained on private data can be made to unlearn that data, to comply with "right to be forgotten" requests.

"The question is: How does this system unlearn this data? Can it be done at all?" she said. Cue "budding research" to address these sorts of questions. "Maybe you train on small systems, and then if my data is in this little portion, that whole portion is retrained again," she said.

Such concerns may not apply in China or the United States, where organizations have been using any data they can get their hands on to train their models. Thanks to privacy rules such as GDPR, "I'm quite pessimistic about the possibility of legally developing large language models in Europe, unless you break the law," Shamir said.

Building Safe AI Ecosystems

Can AI be built for safety? Gentry called for a "supply chain for AI models and data" that is built to be safe, such as by not revealing the data on which models have been trained. "Of course, there's going to be a 'darknet' of neural networks" of unknown provenance, he said. But the industry has a time-limited opportunity to build AI models that are free of trapdoors and that won't reveal the data they were trained on, he said.

"For the main batch of AI models, we have a chance here as security professionals or whatever you want to call yourself, to actually make this safe," he said.

Enter security researchers such as Shamir, who previewed new and unexpected ways in which existing, "black box" neural networks might get hacked - for example, to reveal the proprietary weights that define a model. He has co-authored newly released research detailing a polynomial time algorithm that can be used "to extract with arbitrarily high precision" the weights of deep neural networks "from their black-box implementation" - in just 30 minutes.
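
The paper's algorithm targets deep ReLU networks and is considerably more involved, but the basic idea of recovering parameters purely from query access can be illustrated on a toy single linear layer. This sketch is not the paper's method, just the flavor of the attack:

    import numpy as np

    def extract_linear_layer(blackbox, in_dim):
        """Recover W and b of y = W @ x + b using only queries to the black box."""
        bias = blackbox(np.zeros(in_dim))                  # query at the origin gives b
        cols = [blackbox(np.eye(in_dim)[i]) - bias         # unit-vector queries give
                for i in range(in_dim)]                    # the columns of W
        return np.stack(cols, axis=1), bias

    # Example: a "proprietary" layer the attacker can only query, never inspect
    W_true = np.array([[2.0, -1.0], [0.5, 3.0]])
    b_true = np.array([1.0, -2.0])
    W_est, b_est = extract_linear_layer(lambda x: W_true @ x + b_true, in_dim=2)
    assert np.allclose(W_est, W_true) and np.allclose(b_est, b_true)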

Given the billions of dollars and GPU cycles being spent to train neural networks, as well as the debate over how LLM owners have trained their models, often using copyrighted material, "this just seems like one of these mini examples of 'they deserve what they get,'" Diffie said.


About the Author

Mathew J. Schwartz

Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor for DataBreachToday and European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.




