
CMORG delivers a guide to managing Gen-AI risk in financial services

Kerry Berchem

Published earlier this year, CMORG's practical guide for the financial services industry aims to assist firms in managing the risks associated with the deployment of Gen-AI. Although written specifically in relation to Gen-AI, the guide could readily be adapted to manage the risks associated with other emerging technologies.

CMORG

The Cross Market Operational Resilience Group ("CMORG") is a collaboration between the Bank of England and UK Finance that aims to enhance the resilience of the financial sector through collective action to identify systemic risks, develop sector-wide solutions and strategies, and share knowledge. Its AI Taskforce was created last year in response to the rapid adoption of Gen-AI.

AI Baseline Guidance Review 

CMORG's guide (the "Guide") is intended to be practical, flexible and future-proof, allowing firms to adapt it to their own requirements as the risk landscape, and their exposure to it, evolves. The Guide is the product of a review of existing guidance and good practice and is intended to give practical insight and key takeaways in relation to five thematic categories:

  • The prevailing government and regulatory approach ("Regulatory");
  • Risk management principles and frameworks and their role in managing relevant operational, reputational and compliance risks ("Risk");
  • Technical implementation with a focus on data protection, privacy, cyber and model risk ("Technical");
  • Third-party and legal risks ("Third-party"); and
  • Education with a focus on upskilling and embedding a responsible AI culture ("Education").

Key takeaways

The Guide provides a series of key actions for firms working on the implementation of effective risk management for Gen-AI risk. Here is a snapshot of the Guide's essential takeaways:

  • Regulatory:
    • Assess the impact of emerging regulations
    • Map out regulations that will apply and possible frictions between different requirements
  • Risk:
    • Look through different risk lenses, consult and agree on your risk appetite, desired outcomes, expected use and associated regulatory requirements
    • Update your governance and risk frameworks accordingly
    • Consider relevant certification frameworks which are evolving to tackle novel risks (for example, ISO, NIST)
  • Technical:
    • Implement robust controls for data protection and privacy, cyber and information security, and model risks
    • Constantly review and update your controls in response to evolving regulations and technologies
    • Data protection and privacy:
      • Identify and understand your data storage, retention and transfers (including international transfers)
      • Set and enforce use case guidelines
      • Place appropriate controls around personal data
      • Implement assurance mechanisms (for example, audit, performance testing, identification of unintended outcomes)
      • Consider third-party risks (including roles, responsibilities and liabilities)
    • Cyber and information security:
      • Establish good practice and adopt a risk-specific approach to novel threats
      • Place controls around access rights to protect against inadvertent use and inappropriate disclosure
      • Keep a watching brief on emerging threats
    • Model risk:
      • Address specific risks (including hallucinations and inaccuracies, explainability, and bias)
      • Ensure quality data input
      • Implement ongoing testing and monitoring
      • Upskill your humans to ensure appropriate human-AI interaction, ownership and accountability, and mitigate black-box risks
  • Third-party:
    • Integrate Gen-AI risks into third-party risk and control frameworks, taking into account all current and proposed Gen-AI use cases 
    • Obtain appropriate legal expertise in relation to contractual risks, liabilities and intellectual property issues
  • Education:
    • Implement an acceptable use policy
    • Undertake extensive training throughout the firm to ensure effective uptake and adoption
    • Upgrade security education and awareness to counter AI-augmented threats

Regulatory

Around the world, governments and regulators are evaluating the risks and benefits of emerging technologies, seeking to balance the potential for growth against the need to maintain and build trust. Many different approaches are evolving and, despite an in-principle desire to foster co-operation (both between countries and between authorities within countries), the risk of regulatory fragmentation is real. This increases compliance complexity for firms, particularly those active in numerous locations.

Risk

Many of the risks associated with Gen-AI are common to other technologies. There are also new, Gen-AI-specific risks that require fresh consideration and adjustment from firms. Frameworks are starting to emerge that will assist firms in structuring their risk management approaches around the identification, evaluation and management of risks. Over time, consistent adherence to recognised and trusted frameworks has the potential to deliver a comprehensive and consistent industry-wide methodology, which will assist in ensuring the ethical, responsible and ultimately trusted deployment of Gen-AI.

Technical

Guidance and detail are emerging from authorities, industry bodies and technology vendors to assist firms in the design and implementation of controls to address a variety of risks, including those relating to data, privacy, cyber and model risk. It is critical that firms stay tuned to industry-specific good practice guidance and standards and to evolving advice (for example, updates to laws, regulations and consultation plans). Firms must align their risk mitigation with current standards in intrinsically related and sometimes overlapping areas, continually monitor and update, and make sure their humans-in-the-loop are suitably skilled to ensure accountability for safe, ethical and responsible use.

In some areas, firms will need heightened vigilance around the novel threats and risks that come with leveraging Gen-AI. For example, the cyber and information security threats posed by bad actors using Gen-AI make it critical that firms embed security as a central requirement and secure their Gen-AI deployments to protect against data losses and other security threats, aligning themselves with guidance and other resources from trusted industry sources.

Third-party

All firms looking to deploy Gen-AI will need to comprehensively review and upgrade their third-party risk management processes. Firms will need to understand and assess the risks that exist throughout their supply chains, for example where they use third-party-developed Gen-AI, where third parties introduce Gen-AI, and where data is controlled and processed. This may require firms to upskill significantly, including in compliance and legal functions, in order to review relevant contractual clauses (including those dealing with AI, data, privacy, risks and liabilities, and intellectual property rights) with confidence.

Education

Last, but certainly not least, firms must embed a responsible AI culture. To do this well, firms will need a clear position on their approach to AI and must communicate and support that position throughout the organisation (top to bottom and bottom to top), with clear messaging, appropriate guardrails, essential-skills training for all, and enhanced training and upskilling where needed, to ensure understanding, awareness and accountability.

The final takeaway…

The Guide is a useful resource for any firm considering an AI implementation, whether it is at the start of its AI journey or part-way through and pausing to reflect on the steps taken so far. It is packed with practical tips and reams of additional reading resources.

Key messaging is focused on the need for a firm to identify and embed its AI culture (formulating an AI position or philosophy), clarify the cornerstones (accountability, transparency and acceptable use), monitor and measure (constantly check, test and review), stay current and aware of the latest developments, train across the full spectrum of the business (from general awareness to deep skills), and think in an outcomes-focused way. A good culture will be key to ensuring that any firm using AI is ever-ready to do the right thing, however quickly and unexpectedly a situation changes or emerges.

If you would like to discuss how current or future regulations impact what you do with AI, please contact me, Tom Whittaker, or Martin Cook. You can meet our financial services experts here and our technology experts here.

You can read more thought-leadership like this by subscribing to our monthly financial services regulation update here, or by clicking here for our AI blog and here for our AI newsletter.

There are significant opportunities with artificial intelligence, but we must seize them responsibly. This guidance offers a comprehensive understanding of the complex and evolving risks associated with Gen-AI, encouraging firms to adopt a proactive governance approach that ensures the safe, ethical, and responsible adoption of Gen-AI. By aligning its key takeaways with a commitment to fostering a culture of continuous evaluation and collaboration, firms will be better equipped to unlock Gen-AI’s full potential.

https://www.ukfinance.org.uk/news-and-insight/press-release/cmorg-ai-taskforce-publishes-ai-guidance