
Cloned Voice Tech Is Coming for Bank Accounts

Experts Warn AI Tools Can Now Compromise Voice Password Systems Used by Many Banks
The voice authenticating a bank account could be real - or it could be artificial intelligence. (Image: Shutterstock)

At many financial institutions, your voice is your password. Tiny variations in pitch, tone and timbre make human voices unique - apparently making them an ideal method for authenticating customers phoning for service. Major banks across the globe have embraced voice print recognition.
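
How might such a voice print check work in practice? Banks' systems are proprietary, but a minimal sketch - assuming the open-source librosa audio library, numpy, and hypothetical file names - could compare an enrollment recording to an incoming call using crude pitch and timbre statistics:

# Illustrative sketch only: real voice-biometric systems use proprietary,
# far more robust speaker-verification models. Assumes librosa and numpy
# are installed; the .wav file names are hypothetical.
import numpy as np
import librosa

def voice_print(path, sr=16000):
    """Summarize a recording's pitch and timbre into one feature vector."""
    y, sr = librosa.load(path, sr=sr)
    # MFCC means roughly capture timbre.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    # Fundamental frequency (pitch), estimated with the YIN algorithm.
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)
    return np.concatenate([mfcc.mean(axis=1), [np.nanmean(f0)]])

def same_speaker(enrolled, claimed, threshold=0.95):
    """Cosine similarity against the enrollment sample; the threshold is arbitrary."""
    sim = np.dot(enrolled, claimed) / (np.linalg.norm(enrolled) * np.linalg.norm(claimed))
    return sim >= threshold

enrolled = voice_print("enrollment_call.wav")  # captured when the customer opts in
incoming = voice_print("incoming_call.wav")    # the caller asking for account access
print("caller accepted" if same_speaker(enrolled, incoming) else "caller rejected")

The weakness is built into the design: anything that reproduces those same statistics - including a cloned voice - clears the same bar.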


It's an ideal security measure, as long as computers can't be trained to easily synthesize those pitch, tone and timbre characteristics in real time. They can.

Generative artificial intelligence bellwether OpenAI in late March announced a preview of what it dubbed Voice Engine, technology that, given a 15-second audio sample, can generate natural-sounding speech "that closely resembles the original speaker."

While OpenAI touted the technology for the good it could do - instantaneous language translation, speech therapy, reading assistance - critics' thoughts went immediately to where it could do harm, including breaking that once-ideal authentication method for keeping fraudsters out. It also could supercharge impersonation fraud, fueling "child in trouble" and romance scams as well as disinformation (see: US FTC Proposes Penalties for Deepfake Impersonators).

Voice Engine isn't available to the public yet, but a plethora of products already offer a similar service. One research firm predicts the AI voice cloning market will reach nearly $10 billion by 2030.

In fact, AI voice cloning has been a growing problem for years. In a 2021 incident in the United Arab Emirates, cybercriminals cloned the voice of a company director, leading a bank manager to authorize transfers of $35 million. The technology has only gotten better since then, and criminals continue to use it in similar ways: Not too long ago, a group of fraudsters used deepfake technology to swindle $26 million from a Hong Kong-based multinational company.

Rachel Tobac, ethical hacker and CEO of SocialProof Security, recently demonstrated how anyone can use cheap or even free AI voice cloning services to break into accounts at one of the several banks in the United States and Europe that accept voice ID as a secure log-in method.

As with the rollout of almost any technology, security specialists are scrambling to keep malicious actors at bay.

The first line of defense is to simply train staff to recognize fake audio clips, Kevin Curran, professor of cybersecurity at Ulster University, told Information Security Media Group. But that continues to be a challenge, as no known tools can currently identify generative AI-based attacks. "After all, the modus operandi of these attacks is to appear human-like," said Curran, a senior member of the Institute of Electrical and Electronics Engineers.

Even the winners of a contest launched earlier this year by the U.S. Federal Trade Commission acknowledge the problem is a tricky one.

There is "no single solution to this problem," said David Przygoda, part of a team from OmniSpeech that is developing an algorithm to identify "subtle discrepancies that distinguish authentic voices from their artificial counterparts."

"A human voice can already be cloned in as little as 3 seconds and the technology is getting better on a monthly basis," he warned.

Przygoda said technology can't by itself resolve this nascent problem. "It will take a coordinated effort across technologists, policy makers and law enforcement."

For its part, OpenAI has a different recommendation: Banks should stop using voice-based authentication.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



