
# AI voice clones can now hijack legitimate calls to scam people, IBM finds

IBM researchers have found a relatively easy way to hijack voice calls using generative AI tools, according to a new report.

Why it matters: Many financial institutions and other stewards of people’s most sensitive data lean heavily on phone calls to verify identities.

Using low-cost AI tools, scammers can now easily impersonate someone’s voice and hijack ongoing conversations to steal funds and other information, per the new findings.

What’s happening: IBM’s researchers detailed a new threat they’re calling “audio-jacking,” in which threat actors use a large language model and voice clones to manipulate an ongoing conversation midstream.

To start, a threat actor would need to either install malware on a victim’s phone or compromise a wireless voice-calling service, then route the call audio through their own AI tools.
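The flow the researchers describe is essentially a man-in-the-middle relay: every utterance in the call is transcribed, checked for a trigger phrase (such as “bank account”), and, when the trigger fires, swapped for attacker-supplied speech rendered in a cloned voice. A minimal sketch of that relay loop is below; the speech-to-text, LLM check, and voice-cloning steps are all hypothetical stubs standing in for real AI services, and the names (`llm_rewrite`, `relay_call`, `TRIGGER`) are illustrative, not from the report.

```python
# Hedged sketch of an "audio-jacking" relay loop, per IBM's description.
# Real attacks would feed live call audio through speech-to-text, an LLM,
# and a voice-cloning model; here those steps are stubbed with strings.

TRIGGER = "bank account"
ATTACKER_ACCOUNT = "0000-ATTACKER-0000"  # substitute detail injected by the attacker

def llm_rewrite(utterance: str) -> str:
    """Stub for the LLM step: if the trigger phrase appears, replace the
    victim's utterance with attacker-controlled text (later rendered in a
    cloned voice). Benign speech passes through untouched."""
    if TRIGGER in utterance.lower():
        return f"My {TRIGGER} number is {ATTACKER_ACCOUNT}"
    return utterance

def relay_call(utterances: list[str]) -> list[str]:
    """Man-in-the-middle relay: forward each utterance, tampering only
    when the trigger phrase is detected."""
    return [llm_rewrite(u) for u in utterances]

# Most of the call is relayed verbatim, so neither party notices the tamper.
conversation = [
    "Hi, thanks for calling. How can I help?",
    "Sure, my bank account number is 1234 5678.",
]
print(relay_call(conversation))
```

The key property, which the report emphasizes, is selectivity: because only the sensitive utterance is replaced, the rest of the conversation sounds legitimate to both parties.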
