Responsible AI

Introduction

Our vision is to leverage artificial intelligence tools (“AI”), data, processes and technology to enhance client service delivery.

Burges Salmon has an established Digital Enablement Programme. At its core, the Programme brings together AI, data, processes and technology, together with internal expertise, to enhance the firm’s client service delivery offering and enable it to continuously improve and adapt for the future.

Responsible use of AI

While AI offers significant advantages, we recognise that it can pose potential risks, including:

Inaccuracies: We are aware that the outputs from AI may occasionally carry biases or inaccuracies. Our policy requires our people to review and validate any retained outputs to ensure their reliability, accuracy and applicability.

Data privacy and compliance with applicable laws: We do not input client information into generative AI tools that are open source or used to train the software developer’s model. Privacy and data security are fundamental to our business as a law firm.

We will only use products which are licensed and have appropriate technical and organisational measures in place to protect any information processed through them. Save for when specifically requested, or where it is necessary for us to disclose your information to a service provider in the course of our instruction (e.g. to an e-disclosure provider), client data will not be accessible to any third party when we use our selected AI tools.

We will discuss with you the use on your matter of any new AI tools outside the Copilot, Harvey and Search & Summarize suite.

We adhere to legal and regulatory requirements, including the UK General Data Protection Regulation (UK GDPR). Any use of AI will align with these standards. For information on how we process personal data, please see our Privacy Policy on our website.

We rely on legitimate interest as our lawful basis where personal data may be processed either directly or incidentally through Copilot or other permitted AI. If you would like to discuss our use of AI, please get in touch with your usual Burges Salmon contact.

Information Security

Burges Salmon is committed to the highest standards of information security, governance, and compliance. The firm holds UKAS-accredited certifications including ISO 27001 (Information Security), ISO 9001 (Quality Management), ISO 22301 (Business Continuity), and Cyber Essentials Plus. Information security is governed by senior management, with dedicated roles such as an Information Security Manager, quarterly Information Security Forums and Executive Management Security reviews.

Comprehensive security policies cover data protection, confidentiality, risk management, and business continuity. All our people receive regular security training, and we have robust procedures to ensure that suppliers assisting in our legal service delivery are assured to the highest levels. Data is stored securely within the UK/EU with industry-standard encryption, and we conduct regular independent audits and testing to ensure client information remains confidential.

Use of AI and third-party technologies is subject to rigorous due diligence and ongoing oversight to ensure the three security tenets of Confidentiality, Integrity and Availability are maintained.

Clients can be assured that our approach is proactive, transparent, and designed to protect client interests at every stage.

Training and educating our people

We ensure that our people are provided with appropriate training, best practice guidance and our policy on the responsible and ethical use of AI, supporting the effective adoption of these new tools.

We commit to periodically reviewing our AI practices to ensure these (and any emerging) risks are monitored.

Responsible AI Board

We have a Responsible AI Board which helps ensure that our adoption and use of AI supports and aligns with our obligations, objectives and values. Our full board includes representatives from our Legal, Risk, Responsible Business, Environmental, Innovation, IT, and Information Security Teams.

The Board’s work is supported by our market-leading AI advisory team.

What AI tools we use

Our systems include long-standing, embedded AI products such as facial recognition, video conferencing, e-disclosure platforms, email filing, document analytics tools to assist with due diligence and e-signing, as well as Microsoft 365 Copilot, a generative AI assistant embedded within our Microsoft 365 applications. In addition, we are exploring and investing in various legal-specific AI tools and agentic AI applications for our suite of available technologies. Where we put these technologies in place, we use solutions that meet our information security, confidentiality, and data protection standards, and we do not permit such tools to use your data to train AI models. We are committed to using AI responsibly and have put in place appropriate policies and guidelines for our people.

Tools, such as those listed below, may be used in the provision of our services to you and in the operation of our firm. If you have any questions or concerns about our use of AI or other technology, please raise them with your Burges Salmon contact. We welcome the opportunity to work with our clients in exploring the capabilities of these tools.

Microsoft 365 Copilot

Copilot and Copilot Studio agents accelerate many everyday tasks across Microsoft 365 applications, including Word, PowerPoint, Excel, Outlook, and Teams. These include (but are not limited to) assisting our people in creating summaries and notes of meetings or documents, creating task lists and actions from emails or documents, and helping them organise information from various sources (e.g. emails, meetings and calendars). Copilot is integrated into our Microsoft applications and is therefore automatically protected by our security infrastructure and subject to our existing compliance and privacy policies and processes.

Features such as two-factor authentication, compliance boundaries, and privacy protections make Copilot a trusted AI solution. Prompts, responses, and data accessed through Microsoft are not used to train foundation large language models (LLMs), including those used by Copilot.

Microsoft does not retain prompts or responses generated by our people when they are using Copilot. Microsoft has no ‘eyes-on’ access to our data. Microsoft’s privacy and security information for Copilot can be viewed here.

Wexler

Wexler is a tool designed for litigation which extracts facts to help lawyers get to the heart of the matter, through natural language questions, data extraction, chronology building and more. Documents are uploaded into Wexler’s platform, and content is analysed against a submitted synopsis of key issues and prompts. Wexler can be used to produce chronologies, dramatis personae, and, for large document sets, can extract information and analyse it using natural language. Output includes references to underlying sources for verification.

Wexler’s privacy policy and security information can be viewed via the links below:

(a)        https://www.wexler.ai/legal/privacy-policy

(b)        https://www.wexler.ai/security

(c)        https://trust.wexler.ai/

Data input into Wexler is not used to train any proprietary AI models without explicit permission. All data is encrypted at rest and in transit and is processed within the EEA by default. Where possible, we have elected UK data storage locations with Wexler.

Practical Law Search & Summarize

Thomson Reuters’ Practical Law Search & Summarize offers a more efficient way of surfacing relevant Practical Law content when carrying out legal research. Queries can be posed using natural language; it then provides a summarised answer based on Practical Law’s guidance and other materials. Links to the materials used to generate the answer are provided, making it easier to validate and review the answer. If the query cannot be answered using Practical Law content, Search & Summarize will confirm this.

Legora

As part of our pilot of legal-specific AI tools, we explored the capabilities of Legora. Legora is a web-based legal generative AI assistant, which can analyse and generate content based on documents and questions input into the platform. Documents are uploaded into Legora’s platform or Word add-in, and content is analysed using prompts generated by the user interfacing with Legora directly (e.g. asking questions about a document, or assessing the document against key criteria in submitted prompts). Whilst Legora has not been chosen as our primary legal-specific LLM, we have retained and continue to use some Legora licences.

Legora’s privacy policy and security information can be viewed via the links below:

https://legora.com/legal

https://security.legora.com/

Information input into Legora, and content generated by Legora, will pass through the generative/base AI models used by Legora from time to time (in accordance with the providers’ terms and conditions). Legora and its sub-processors process personal data outside the UK, in the EEA/EU. Data is stored in the Netherlands and Ireland, and processed in the Netherlands, Ireland, France and Sweden.

Harvey

Following a structured pilot, we have selected Harvey to form part of our technology stack. Harvey is a legal-specific generative AI platform designed to help our people with a wide range of tasks and activities including drafting, document review and analysis, working across large volumes of documents, and data extraction. Documents are either uploaded into Harvey’s platform or used within the MS Word add-in, and content is analysed using prompts generated by a user interfacing with Harvey directly. Harvey has enterprise-grade controls and security measures designed to ensure confidentiality and meet our information security standards. Harvey will complement our foundation of Microsoft 365 Copilot, strengthening the quality, consistency and pace of the work we deliver. All outputs are subject to human review.

Harvey’s privacy policy and security information can be viewed via the links below:

• https://www.harvey.ai/legal

• https://www.harvey.ai/legal/security-addendum

Harvey is headquartered in the USA. Information input into Harvey, and content generated by Harvey, will pass through the generative/base AI models used by Harvey from time to time (in accordance with the providers’ terms and conditions). Harvey and its sub-processors will process any data received through the service in the EU and Switzerland. Limited processing (e.g. development and engineering), together with the processing of authentication and usage data, may take place by Harvey employees based in the USA. In so far as possible, we have elected EU data storage locations with Harvey.

Frequently Asked Questions

We are happy to discuss with clients the specifics of how we will deliver services to them, including whether and how AI can be used. Questions that are often asked include:

Will AI be used in the delivery of my matter?

Yes. How and where will depend on the AI tool, its purpose, and your matter.

Where will my data be stored and processed?

In so far as possible, we store and process data in the UK. We will let you know if specific AI tools process data outside the UK.

Will my data be used to train AI models?

We recognise the need to maintain the confidentiality of both our and our clients’ data. We ensure that, in our adoption of any AI tools, your confidential data is not used to train or improve the underlying models.

Do you use public AI tools?

We do not input client data into public or consumer-grade AI platforms. We will only use products which are licensed and have appropriate technical and organisational measures in place to protect any information processed through them.

Will AI-generated output be checked?

Any output produced by a generative AI tool that will form part of your work product will always be reviewed for accuracy and approved by a member of the legal team.

Who to contact

If you are interested in our approach to AI, please contact [email protected] or one of the members of our Responsible AI Board listed below.

For media enquiries, please contact Dan Baber.

Last updated on 28 January 2026

Key contacts from our Responsible AI Board
