Imagine receiving a video call from your boss asking you to arrange an urgent payment. The call seems genuine: your boss looks and sounds exactly as they always do. So how would you tell the difference between your boss and an AI-cloned version of them (also known as a ‘deepfake’), and how can you check whether the request is genuine?
Listen to the clip at the end of this article and see if you can guess which is the real audio.
They are actually both cloned: Chris in our tech team uploaded a voice sample, and from this the AI tool replicated his dulcet tones to read a script, creating an instant audiobook. It’s impressive to see how convincing the cloned audio can be, but is there a darker side?
AI cloning involves using AI to develop a digital, interactive version of a real person. As part of this series, we will explore AI cloning from a number of different legal perspectives. This article focuses on the commercial perspective.
What is AI cloning?
AI cloning is a use case of a series of rapidly evolving technologies that create a synthetic digital version of a real person, capable of interacting via voice, photo or video. The outputs are produced from real footage (often only a few seconds of audio, video or both), built upon with deep-learning algorithms to create increasingly convincing virtual replicas.
The foundation materials used to create clones are often widely available (e.g. photos and videos posted online, voicemail greetings, social media content), and just seconds of real material are needed to create a hyper-realistic version that can then speak, move and behave like the real person.
How is it used?
Some organisations are using cloning technology as a productivity tool: if you can send your clone to a Teams meeting, you can effectively be in two places at once. Sometimes information can be disseminated more clearly and effectively by an avatar; some airlines have used avatars to deal with basic queries so that employees can focus on more complex problem solving. Other organisations are developing online learning courses with replicated voices, or using AI cloning to recreate the voices of actors for dubbing and post-production, or of deceased loved ones (as featured in the Storyville documentary ‘Eternal You’).
The darker side of cloning is that it can be used to trick people into thinking they are speaking to one person when they are actually speaking to someone completely different, with the goal, for example, of convincing family members and friends to reveal sensitive information such as passwords, or to send money to a specified account. In research published in September 2024, Starling Bank found that over a quarter of UK adults said they had been targeted by an AI voice-cloning scam at least once in the previous 12 months. There has also been an exponential rise in deepfakes used in pornography, featuring the faces and voices of women who have not consented and who often struggle to identify the perpetrators and have the damaging material removed.
How sophisticated is the technology?
AI cloning technology, particularly in the realm of voice synthesis, has reached remarkable levels of sophistication. Modern AI voice cloning can create synthetic voices that are virtually indistinguishable from human speech, capturing nuances such as intonation, emotion, and even subtle pauses. This is achieved through advanced neural network algorithms and extensive voice data training. As AI voice cloning continues to evolve, it is crucial for risk management and legal frameworks to keep pace, ensuring responsible and ethical use of this powerful technology.
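To illustrate how accessible this capability now is, the sketch below shows few-shot voice cloning with the open-source Coqui TTS library and its publicly documented XTTS v2 model. The file paths and sample text are illustrative assumptions rather than a real workflow; the point is simply that comparable tooling is freely available and needs only seconds of source audio.

```python
# A minimal sketch of few-shot voice cloning using the open-source
# Coqui TTS library (pip install TTS). File paths are illustrative.
from TTS.api import TTS

# Load a pre-trained multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone a voice from a short reference recording (a few seconds of
# clean speech is typically enough) and read out a new script in it.
tts.tts_to_file(
    text="This is a demonstration of synthetic speech cloned from a short sample.",
    speaker_wav="reference_sample.wav",  # short clip of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```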
What are the legal implications?
The legal implications of AI cloning are multi-faceted and evolving. We have summarised some key themes below and subsequent articles in this series will cover additional perspectives from different practice areas:
- Impersonation and cyber attacks
Perhaps most importantly, organisations should be aware of AI clones being used as part of social engineering attempts to gain unauthorised access to data or systems. Cloning technology provides an incredibly powerful tool for attackers, as it allows them to convincingly mimic senior board members or other key stakeholders and issue instructions to staff. This could be as simple as calling an employee pretending to be the CEO and asking them to make a money transfer or hand over a key access code, or it could be part of a more complex, multi-jurisdictional manipulation. Either way, the technology is cheap, readily available, fast, effective and powerful.
Organisations’ information security policies, processes and procedures (including cyber security and physical security) should be mobilised to prevent these attacks as far as possible and, where necessary, to respond appropriately.
This is a relevant consideration for all businesses, but particularly those in the financial sector given the large volumes of financial data stored and used. TechUK reported in October 2024 that “a study by Santander found that 54% of Brits are worried about deepfakes being used to steal money, while 78% believe this technology will become a standard tool for fraudsters.”
- Intellectual property and data protection
Key concerns include copyright infringement. For example, AI-generated voices can replicate the voices of real individuals without their consent, potentially leading to legal disputes over the unauthorised use of the voice and the violation of the rights of the original voice owner.
Organisations should also consider the data protection implications. When training an AI clone (for example, using real human voices and images), if that use constitutes processing of personal data, the organisation must identify a lawful basis in accordance with the UK GDPR and the Data Protection Act 2018, ensure the processing is transparent (usually by providing a privacy notice explaining how the individual’s personal data will be used) and ensure the personal data is processed securely (through policies, procedures and staff training).
- Procurement
If an organisation is procuring a system that provides AI cloning functionality, it should undertake due diligence to ensure that all permissions and consents were obtained, and applicable laws complied with, in respect of the data used to train the model, as well as any input data the organisation provides when using the system.
If such tools will have access to personal data, the organisation should be mindful of its ongoing data protection obligations, as discussed briefly above, and take these into account as part of the procurement process.
What should organisations be doing?
- Take time to understand the technology and have an awareness of how it is being used within the market.
- Think about how AI cloning could be used internally and externally in respect of your organisation.
- Ensure your AI and data protection policies are appropriate, train your staff accordingly, and consider whether there are any practical steps you can take.
- In respect of impersonation and cyber attacks, if you have a cyber insurance policy, speak to your broker to determine what you need to do to ensure the policy remains effective if an AI clone were used in a cyber attack.
If you would like to discuss further, please contact Madelin Sinclair McAusland or Liz Smith or any other member of our Technology team.