Astonishingly (…or perhaps not, for anyone who’s answered a phone call recently), “imposter calls” are the leading category of spam call in the US, accounting for 33% of all phone calls according to a recent study by QR Code Generator.
And a scarier possibility may be just around the corner: recent advances in AI have enabled scammers to create highly realistic voice clones from just a few seconds of audio. These clones let scammers deceive potential targets into believing they are speaking over the phone with a relative, close friend, or colleague. To make matters worse, these AI-generated voices aren’t just accurate; they’re apparently emotionally persuasive too. These voice dupes could bring a surprisingly authentic twist to the old “hi mom” text scams.
With the scalability of modern computing and the availability of sophisticated call spoofing techniques, it’s easy to imagine a future where spam calls, already dominated by “imposter” attempts, become the domain of AI-generated voice imposters that are almost impossible to distinguish from the real thing. These voices could even lend support to other types of scams, such as falsifying invoices or supplying false bank account details for payments to suppliers.
Outside of phone calls, the rising prevalence of AI is also making more traditional scams far harder to detect. For example, AI tools allow scammers to craft convincing emails without the tell-tale signs (such as grammatical errors) that betrayed earlier attempts.
Staying informed and cautious is essential to mitigating the risks exacerbated by AI-powered scams. Ironically, we may even need to revisit more traditional methods – such as secret codewords shared offline – to help identify and authenticate the intended human recipients of our communications.
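For the technically curious, the same idea can be expressed in software as a challenge-response check built on a pre-shared secret, so the codeword itself is never spoken aloud where a scammer could record it. The sketch below is a minimal Python illustration of that principle; the secret value and helper names are assumptions for the example, not an existing protocol or product.

```python
import hashlib
import hmac
import secrets

# Hypothetical illustration: the offline codeword, expressed as a
# challenge-response check. Both parties are assumed to have agreed
# on SHARED_SECRET in person, never over the phone or by email.
SHARED_SECRET = b"agreed-in-person-codeword"

def make_challenge() -> bytes:
    """The recipient generates a fresh random challenge for each call."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes) -> bytes:
    """The caller proves knowledge of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    """Constant-time comparison avoids leaking information via timing."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Usage: the recipient issues a challenge, the caller answers, and only
# a caller who actually holds the shared secret passes verification.
challenge = make_challenge()
assert verify(challenge, respond(challenge, SHARED_SECRET), SHARED_SECRET)
assert not verify(challenge, respond(challenge, b"wrong-guess"), SHARED_SECRET)
```

The human version is, of course, far simpler: agree on a codeword face to face, and ask for it whenever an urgent call or message requests money or sensitive details.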