David Duffy, CEO of Virgin Money, recognised the need to strengthen the bank’s fraud prevention measures while visiting Microsoft’s headquarters near Seattle. During the visit, he gained insight into advances in generative AI technology, particularly the creation of “deepfake” voices and videos.
Acknowledging the potential risks, Duffy emphasised that AI, coupled with quantum computing, could significantly escalate the challenges in combating financial crime.
Banks have long grappled with impersonation fraud. However, the growing ease of generating deepfakes and cloned voices introduces new risks. Scammers can now exploit these technologies to impersonate anyone from a prospective romantic partner to a family member in crisis, broadening the pool of targets and raising the success rate of such schemes.
Sandra Peaston, Research Director at fraud prevention body Cifas, notes that UK banks are already experiencing scams involving deepfakes. These scams often impersonate celebrities, exploiting the abundance of footage available to train deepfake algorithms. Criminals use synthesised video to pass online “know your customer” checks, attempting to open bank accounts or apply for credit cards.
Moreover, deepfake videos are employed as clickbait to lure users to malicious websites, aiming to harvest sensitive information such as card payment details, according to research conducted by Stop Scams UK and consultancy PwC.
As deepfake technology advances, Peaston warns, scammers may require less training material, enabling fraud on an industrial scale.
This raises the risk of victims being duped over the phone by voice-cloning technology, even when the person being impersonated has little media presence. The UK’s vulnerability to fraud is heightened by the widespread use of English, near-instant payments and the extensive adoption of digital banking.
In the first half of 2023, the UK witnessed fraud losses amounting to £580m, as per UK Finance. Notably, £43.5m was lost to scams involving police or bank staff impersonations, while an additional £6.9m was attributed to CEO impersonations.
The use of AI-driven translation tools in deepfakes is expected to enable scammers to replicate voices and accents in various languages, potentially extending their fraudulent activities across borders.
Chris Lewis, Head of Research at anti-fraud data company Synectics, warns that other European countries, which have historically encountered less fraud, might face an abrupt increase in scams.
As fraudsters’ technology advances, so does the sophistication of the tools used to prevent and detect their schemes.
Ajay Bhalla, President of Cyber and Intelligence at Mastercard, has revealed an AI-driven screening tool, offered to nine UK lenders, that is designed to detect fraud before funds leave customers’ accounts.
Lloyds Banking Group, the UK’s largest high street lender, sees AI’s pattern recognition capabilities as complementary to its existing fraud prevention system.
Liz Ziegler, Fraud Prevention Director at Lloyds, explains that the bank utilises behavioural analysis to create a detailed customer profile, freezing payments when unusual activity is detected.
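Lloyds has not disclosed how its system works internally, but the general technique of behavioural profiling can be sketched in a few lines: score each incoming payment against the customer’s transaction history and flag outliers for a freeze. The thresholds, field names and rules below are illustrative assumptions, not a description of any bank’s actual model.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class CustomerProfile:
    """Toy behavioural profile: past payment amounts and known payees."""
    amounts: list = field(default_factory=list)
    payees: set = field(default_factory=set)

    def record(self, amount, payee):
        self.amounts.append(amount)
        self.payees.add(payee)

    def is_unusual(self, amount, payee, z_threshold=3.0):
        """Flag payments to a new payee that sit far outside the
        customer's historical spending pattern (z-score test)."""
        new_payee = payee not in self.payees
        if len(self.amounts) < 10:
            # Too little history for a reliable score; flag only
            # large payments to unknown payees.
            return new_payee and amount > 1000
        mu, sigma = mean(self.amounts), stdev(self.amounts)
        z = (amount - mu) / sigma if sigma else float("inf")
        return new_payee and z > z_threshold

profile = CustomerProfile()
for amt in [20, 35, 12, 50, 8, 22, 41, 15, 30, 27]:
    profile.record(amt, "regular-grocer")

print(profile.is_unusual(4500, "unknown-account"))  # True: freeze and verify
print(profile.is_unusual(25, "regular-grocer"))     # False: normal activity
```

Real systems combine hundreds of such signals (device, location, timing, payee risk) in machine-learned models rather than a single rule, but the freeze-on-anomaly logic is the same.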
Industry experts are also working on watermarking technology that embeds a traceable mark in AI-generated content so it can be identified as synthetic, though the approach remains at an early stage and has potential vulnerabilities.
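Published details of these schemes are scarce, but many proposals resemble spread-spectrum watermarking: bias the generated signal slightly along a key-derived pattern, then test for that bias at detection time. The toy sketch below, with an assumed key, strength and threshold, illustrates the principle on synthetic audio-like samples; it also hints at the fragility experts mention, since re-encoding or added noise can wash the bias out.

```python
import random

def watermark_pattern(key, n):
    """Key-derived pseudorandom ±1 pattern shared by embedder and detector."""
    rng = random.Random(key)
    return [rng.choice((-1, 1)) for _ in range(n)]

def embed(samples, key, strength=0.01):
    """Nudge each sample slightly in a key-determined direction."""
    pattern = watermark_pattern(key, len(samples))
    return [s + strength * p for s, p in zip(samples, pattern)]

def detect(samples, key, threshold=0.005):
    """Correlate the signal with the key's pattern; watermarked
    content shows a bias that unmarked content lacks."""
    pattern = watermark_pattern(key, len(samples))
    corr = sum(s * p for s, p in zip(samples, pattern)) / len(samples)
    return corr > threshold

rng = random.Random(0)
clean = [rng.gauss(0, 0.1) for _ in range(10_000)]
marked = embed(clean, key=42)
print(detect(marked, key=42))  # True: watermark detected
print(detect(clean, key=42))   # False: no watermark
```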
The Chair of the Basel Committee on Banking Supervision, Pablo Hernández de Cos, has called for global coordination to address the challenges posed by rapidly advancing technology, expressing concern about its potential impact on the banking industry.
In the UK, banks face growing incentives to combat evolving types of fraud. New rules from the Payment Systems Regulator, effective from October, will hold financial institutions accountable for compensating victims of authorised push payment (APP) fraud.
Virgin Money, in response, has announced a £130m investment in financial crime prevention to enhance its cyber defence and biometric capabilities.
The rise of deepfake-powered fraud is expected to put pressure on tech companies to compensate victims, as the banking sector is currently the only one that reimburses fraud victims.
Steve Cornwell, Head of Fraud Risk at TSB, urged AI software providers to implement safeguards against criminal use, while the Online Safety Act is anticipated to hold tech companies responsible for removing fraudulent content from their platforms.