Will the real Avatar please stand up?

In this week's news, I have been reading about banks actively cloning their consumer-facing, highly expert and reputable investment analysts into avatars, and using those avatars to deliver financial information to the markets.
Many reports tell us that financial services, as a sector, is behind the curve in AI deployment, and that where AI is deployed it tends to sit in back-office applications used by employees rather than in consumer-facing ones. Against that backdrop, this development is novel and a bit scary, but also exciting.
Is this evidence that financial services firms are now using AI on the outside, realising its potential to increase productivity and deliver better information to consumers at reduced cost?
Every light has shadow
One of the often-noted risks of AI in financial services is that it enables bad actors to become even more dangerous, harnessing its sophisticated powers for their malevolent ends and maximising the impact of their wrongs.
The utilisation of AI in this way could be hugely harmful to firms, to consumers and to the wider markets. A form of financial crime on steroids. It is not a hypothetical risk. We are already seeing it play out.
Deepfake bankers
In terms of bad actors using AI, it turns out that ‘finfluencers’, as a channel for delivering potentially questionable information to the market, are quite passé. Enter the deepfake banker. At the end of last week, it was reported that highly respected investment bankers are being malevolently reincarnated as AI-generated versions of themselves and, in the shape of these avatars, are being used to influence investor behaviour with fake news.
That story came only days after a similar one about a highly respected financial journalist who was cloned into a convincing but fake version of himself. The clone became prolific on social media and evaded many attempts to stop it in its tracks, reinventing itself and popping up from alternative and shady locations all over the web.
Deception
There is nothing new about fraud, a crime as old as humanity, but AI and social media provide it with a novel platform and some new, sophisticated toys.
These scams are designed to encourage consumers to disclose personal data, including financial information, and to participate in fraudulent arrangements. With online scams already defrauding consumers out of billions of pounds annually, the problem has the potential to grow rapidly as scammers harness the powers of AI.
Also in this week's news, one major bank confirmed that cybersecurity is its “single largest cost”, running to hundreds of millions. It is most likely not the only bank in this predicament. The same report notes that banks' defence systems are under constant attack from online criminals.
Financial services as a sector is not alone. Similar levels of cost and system vulnerability are reported in other sectors. Retail has been hit hard recently and high-street giants are still struggling to recover service levels, with many consumers of their retail products affected.
True or false?
With both the fake and the real in circulation, how can consumers be expected to discern between them? How can consumers be expected to have the confidence required in the integrity of the financial markets? And, given the changes at play in the digital world, how can the financial services sector shore up the foundational pillars needed to ground confidence in the markets and resilience in the consumers of financial products and services?
Mitigation
Having identified the risk, the financial services sector as a whole has the task of mitigating it. The risk is a collective one for the entire industry and for the providers of digital services that it relies heavily upon in order to function effectively.
There is risk to genuine financial services firms and to prominent individuals whose hard-won reputations are at stake precisely because they are relied upon for their expertise and trustworthiness. There is risk to consumers, who will find it increasingly difficult to ascertain whether information is genuine before engaging with it. And there is risk to the entire financial services sector, which relies heavily upon trust to function.
Consumers
Consumers are increasingly reliant on digitally available services. They do not have millions of pounds, the latest technology or dedicated defence teams to protect them from harm, and many are not especially digitally or financially literate. This makes consumers increasingly vulnerable as a demographic.
How can a consumer verify whether a video they see on social media is authentic? Is it fair to expect consumers of financial products and services to be able to tell a plausible ‘get rich’ temptation from one that is too good to be true, and to protect themselves from losing their life savings? What can be done to make consumers more alert to financial risks and better able to protect themselves? What can genuine firms do to set themselves apart from the bad actors?
Responsibility
Where does the responsibility for mitigating these risks sit? How much of it sits with regulated entities? How much of it with the consumers of financial products and services, who may need to become more sophisticated in their judgement, research and analytics before engaging? How much of it with the social media platform providers that carry advertisements, both good and bad, for profit?
In the EU there is a movement to make social media platforms more accountable for the wrongs that stem from online advertising, by mandating that they verify advertisements before allowing them to be published on their platforms. This approach could force other players in the fraud chain to take a stronger position in the lines of defence, so that the defences against increasingly sophisticated frauds and scams are not drawn only from the financial services sector.
UK regulatory efforts
Tackling financial crime is one of the FCA's main priorities. Given the potential for significant harm to consumers, the FCA has for many years dedicated vast resources to combating the growth of different forms of complex fraud. It issues thousands of scam warnings; it has engaged with finfluencers, collaborated with other regulators, run consumer-facing campaigns to educate the public, worked with regulated firms to strengthen their anti-fraud systems, used its deterrent and enforcement powers against fraudulent actors, and worked with the social media platforms to prohibit paid-for adverts for UK financial services that are not approved by an FCA-authorised firm. But these efforts still leave room for bad actors, always at least one step ahead of the regulators, to use advances in technology to gain from crime. FCA statistics from the end of 2024 suggested that its engagement with the social media platforms was very effective in tackling malicious online financial services adverts. The latest news stories indicate that the dial may now have turned back in favour of the ever-bolder scammers, and that renewed and updated collaborative regulatory efforts are urgently needed.
Questions for firms
Some questions for firms to consider as the AI boom, both good and bad, continues at pace: How are you tackling financial crime? Do you know whether criminals are using AI to target your business and your customers? Are you investing in the technology needed to address emerging risks? Are you meeting standards of good regulatory practice? Are you collaborating with other firms and industry bodies to keep up to speed with the latest developments and initiatives? Are you informing your customers of emerging risks? If you wanted to deploy an avatar to advertise your financial services and products, how would you do this safely and responsibly, and in line with regulatory expectations? If a malicious avatar popped up and threatened your business, how would you deal with this avatar?
Regulatory focus
Criminals are using technology including AI to target consumers and firms. In recent years they have been able to circumvent banking controls by using sophisticated social engineering techniques to trick victims, making detection much more challenging. Firms must ensure that systems and controls keep up with the increasing sophistication of criminal groups and should use the advances in technologies to help prevent financial crime. Firms must calibrate how they use technology to their individual requirements to be as effective as possible. But that does not mean they should calibrate once and then ‘plug and play’ forever. Firms need to keep fine-tuning their response to combat the changing threat.
https://www.fca.org.uk/publications/corporate-documents/reducing-and-preventing-financial-crime
If you would like to discuss how current or future regulations impact what you do with AI, please contact me, Tom Whittaker, or Martin Cook. You can meet our financial services experts here and our technology experts here.
You can read more thought-leadership like this by subscribing to our monthly financial services regulation update, our AI blog and our AI newsletter.