Research underpins need for public awareness campaign to tackle AI-driven fraud

A new study from Abertay University reveals that the most effective way to protect people from AI voice scams is not through traditional warning messages, but by educating them about how advanced and authentic AI voices have become.

Published in the Journal of Cybersecurity and funded by the Scottish Institute for Policing Research (SIPR), the study offers one of the first psychological countermeasures against AI voice scams, providing a proactive approach to fraud prevention.

AI-generated voices have become so convincing they’re being used in major scams - from cloning CEOs to approve multi-million-pound transfers to impersonating family members in fake kidnapping calls.

The findings show that short messages explaining AI’s ability to convincingly replicate regional accents and dialects can significantly reduce listeners’ tendency to assume voices are human. Dr Neil Kirk, from the University’s Department of Sociological and Psychological Sciences, who led the study, says this simple shift in awareness could help prevent fraud as synthetic voices become increasingly realistic.

Dr Kirk said:

Scammers often use emotional hooks - such as urgent calls from “relatives” needing help or fake delivery issues - to pressure victims into quick decisions. When combined with AI-generated voices that sound authentic and even mimic local accents, these tactics become far harder to detect.

Voice fraud is harder to detect than video deepfakes because it relies on just one sensory cue, and this global problem is already hitting the UK hard. A survey from Starling Bank found that 28% of UK adults have been targeted by AI voice cloning scams, yet nearly 46% are unaware these scams even exist, and just a third know the warning signs.

Victims of deepfake scam calls lose an average of £595 per incident, with some cases exceeding £13,000, according to the Annual Fraud Report 2025 by UK Finance.

Dr Kirk said:

AI voice technology is advancing faster than public awareness. If we don’t update people’s expectations now, we risk leaving entire communities vulnerable to scams. Fraudsters are already exploiting these gaps, and the consequences can be devastating. Education is the most powerful tool we have to close that gap, and it is something we can implement quickly and at scale.

The study introduces the concept of MINDSET (Minority, Indigenous, Non-standard, and Dialect-Shaped Expectations of Technology), a belief that voice systems can’t handle local or regional speech. This bias makes speakers of underrepresented dialects particularly vulnerable to scams, as they are more likely to believe an AI voice speaking this way is a real person.

Across two experiments with 300 Scottish participants, researchers found that capability-based messages informing participants that AI can authentically replicate Scottish accents and dialects significantly reduced the bias toward classifying voices as human.

Warnings that simply highlighted the risks of AI voice scams had little effect unless combined with capability information. While the messages didn’t sharpen people’s ability to spot which voices were actually human or AI, they did make listeners more cautious and less likely to assume Scottish-sounding AI voices were real people.

Dr Kirk said:

Our findings suggest clear opportunities for fraud prevention. Banks, telecom providers, and public awareness campaigns could incorporate capability-based messages into security prompts or fraud alerts to help protect consumers. Informing rather than alarming may be the most scalable way to increase vigilance. But this cannot be left to industry alone - governments and policymakers need to work together with businesses to launch coordinated education campaigns that close the awareness gap and keep people safe.

This study builds on research by Dr Kirk published earlier this year, which demonstrated just how convincing AI-generated voices can be when mimicking regional dialects such as Dundonian Scots. That research found that listeners often assumed the AI voices were real, particularly when they spoke in local dialects.

Read the full paper here.
