
AI Governance in the Boardroom: Guidance for Business Leaders

Charlotte Hamilton

Recent advancements in generative AI have sparked widespread discussion of its potential benefits and risks across the economy and wider society. AI is no longer only a technical matter; it has become a strategic, operational, ethical, legal, and political issue.

The Institute of Directors’ (IoD) paper AI Governance in the Boardroom: The essential governance questions for your next board meeting provides a practical, actionable roadmap for directors seeking to harness AI's opportunities whilst managing its risks responsibly. A deeper understanding of AI's operational and societal impacts is an essential boardroom priority.

The March 2025 IoD Policy Voice survey captured the views of nearly 700 directors and business leaders (across a variety of sectors, business sizes and regions) on AI adoption. Nearly two-thirds of directors now personally use AI tools to aid their work, and half report that their organisations use AI across various functions and processes. Yet a quarter remain concerned about the lack of an internal AI policy, strategy or data governance framework in their organisation. So whilst the benefits are recognised, scepticism persists: the survey identifies skills gaps, lack of trust in AI outcomes, and security and ethical risks as the biggest barriers to adoption. As adoption of AI technology increases, so does the need for clear governance frameworks.

The 12 Principles for Responsible AI Governance

The IoD’s paper puts forward updated principles reflecting current regulatory developments in the UK and EU (including ISO/IEC 42001 and 5259), based upon the 12 principles first developed by Pauline Norstrom in 2020. The paper includes a set of questions for each principle, and directors are encouraged to use these to establish an effective AI position for their organisation. The IoD is clear that it is no longer acceptable for AI to be confined to the IT function within the business; it must be on the board’s agenda.

1. Monitor the Evolving Regulatory and Geopolitical Environment

Organisations should consider at board and management level whether they are clear on the regulatory expectations for AI use within their sector and whether they have mapped exposure to both UK and EU regulatory frameworks (and others where relevant).

As of June 2025, the UK Government is taking a decentralised approach to AI regulation, with no central legislation and governance delegated to existing sector-specific regulators. For UK-based organisations operating in the EU, or working with EU-based partners, the EU AI Act represents a fundamental shift in regulatory expectations, having entered into force in August 2024 and introducing a risk-based framework. See our previous posts on the EU AI Act and our flowchart for navigating the EU AI Act for further information.
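Purely by way of illustration, the output of such a mapping exercise can be summarised for the board as a simple register tagging each AI use case against the EU AI Act's risk tiers. The tier labels below track the Act's published framework, but the example use cases and the summary helper are hypothetical assumptions, not content from the IoD paper:

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers (simplified labels; the Act defines the criteria)."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk: conformity obligations apply"
    LIMITED = "limited risk: transparency obligations apply"
    MINIMAL = "minimal risk"

# Hypothetical mapping of internal use cases to tiers, kept for board reporting.
USE_CASE_TIERS = {
    "cv-screening-tool": RiskTier.HIGH,    # employment decisions are listed high-risk
    "customer-chatbot": RiskTier.LIMITED,  # users must be told they are talking to AI
    "spam-filter": RiskTier.MINIMAL,
}

def exposure_summary(tiers: dict[str, RiskTier]) -> dict[RiskTier, int]:
    """Count use cases per tier for a one-line summary in the board pack."""
    summary: dict[RiskTier, int] = {}
    for tier in tiers.values():
        summary[tier] = summary.get(tier, 0) + 1
    return summary

for tier, count in exposure_summary(USE_CASE_TIERS).items():
    print(f"{tier.name}: {count} use case(s)")
```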

2. Continually Audit and Measure AI Use, Principles, Process and Controls

Directors must ensure that all AI systems used across the organisation are identified, audited, and measured on an ongoing basis, recognising that systems evolve and integrate new data over time. It should also be recognised that ‘shadow’ AI use (where employees independently adopt tools like LLMs) is common practice.
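A minimal sketch of what such an inventory might look like in practice follows; the fields, the 90-day audit window and the shadow-use flag are illustrative assumptions rather than anything prescribed by the IoD:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row in an organisation-wide AI inventory (illustrative fields only)."""
    name: str
    owner: str                      # accountable business owner, not just IT
    purpose: str
    data_sources: list[str] = field(default_factory=list)
    last_audited: date | None = None
    shadow_use: bool = False        # adopted by staff outside formal procurement

def audit_overdue(record: AISystemRecord, max_age_days: int = 90) -> bool:
    """Flag systems never audited, or not audited within the review window."""
    if record.last_audited is None:
        return True
    return (date.today() - record.last_audited).days > max_age_days

inventory = [
    AISystemRecord("contract-summariser", "Legal Ops", "summarise incoming contracts",
                   data_sources=["client contracts"], last_audited=date(2025, 1, 15)),
    AISystemRecord("personal ChatGPT use", "unknown", "ad-hoc drafting", shadow_use=True),
]

for record in inventory:
    if record.shadow_use or audit_overdue(record):
        print(f"Needs attention: {record.name}")
```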

3. Undertake Impact and Risk Assessments

Impact assessments are essential for responsible AI governance, helping boards understand technical, human, and cultural implications. Boards should ensure assessments are comprehensive, covering risks to workforce roles, transparency and compliance, as well as the impact on customers, suppliers, investors, regulators, and wider society.
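One common, though by no means mandated, way to make such assessments comparable across systems is a simple likelihood-times-impact score per risk dimension. The dimensions, ratings and escalation threshold below are invented for illustration:

```python
# Hypothetical scoring sketch: each risk dimension is rated 1-5 for likelihood
# and 1-5 for impact; the product gives a comparable score for board reporting.
RISK_DIMENSIONS = {
    # dimension: (likelihood, impact) -- example ratings, not real data
    "workforce displacement": (2, 4),
    "lack of transparency":   (3, 3),
    "regulatory breach":      (2, 5),
    "customer harm":          (1, 5),
}

def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact

for dimension, (likelihood, impact) in sorted(
    RISK_DIMENSIONS.items(), key=lambda kv: -risk_score(*kv[1])
):
    score = risk_score(likelihood, impact)
    flag = "escalate to board" if score >= 10 else "monitor"
    print(f"{dimension}: {score} ({flag})")
```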

4. Establish Board Accountability and Management Responsibilities

AI governance is a board-level responsibility because it involves strategic, ethical, reputational, and legal implications, not just technical considerations. The board should retain veto powers over AI implementation or continued use where significant commercial, regulatory, safety, or reputational risks exist, and ensure that both the board and management have the capabilities and confidence to take accountability for AI oversight.

5. Set High-Level Strategic Goals Aligned with Business Objectives

Every application of AI within an organisation should be guided by a clear, high-level set of goals, shaped by the organisation's wider vision, purpose, values, and stakeholder commitments.

Examples of high-level goals might include: 

  • augmenting human intelligence and creativity;
  • improving the speed, consistency, and quality of decision-making;
  • enhancing accessibility, inclusion, and fairness in products or services;
  • protecting stakeholder wellbeing, ensuring no harm to employees, customers or communities; and
  • supporting climate and sustainability targets, including responsible use of energy and computing resources.

6. Empower a Cross-Functional Independent Review Committee

AI governance must be more than a paper exercise; it should be actively practised. The paper suggests that one way to achieve this may be to have an independent committee to oversee AI, digital and/or data, equipped with the skills, authority and independence to make principled decisions.

7. Validate, Document and Secure Data Sources

Data is the basis of every AI system, and so boards must ensure that its provenance, integrity and relevance are properly governed. For example, before deploying AI, organisations must document data sources, assess data quality, and mitigate bias, including risks from synthetic data.
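A lightweight way to operationalise this, loosely in the spirit of published 'datasheets for datasets' proposals, is to require a short provenance record before any dataset feeds an AI system. The structure below is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class DataProvenanceRecord:
    """Minimum documentation before a dataset may feed an AI system (illustrative)."""
    dataset_name: str
    source: str                 # where the data came from
    licence_cleared: bool       # rights to use the data confirmed
    synthetic: bool             # synthetic data carries its own bias risks
    bias_assessment_done: bool
    quality_notes: str = ""

def cleared_for_use(record: DataProvenanceRecord) -> bool:
    """A dataset is usable only once rights and bias are documented."""
    return record.licence_cleared and record.bias_assessment_done

record = DataProvenanceRecord(
    dataset_name="customer-support-transcripts",
    source="internal CRM export, 2024",
    licence_cleared=True,
    synthetic=False,
    bias_assessment_done=False,
)
print("Cleared:", cleared_for_use(record))  # False until the bias assessment is done
```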

8. Train and Upskill People to Use AI Effectively and Responsibly

The IoD is clear that people need to be empowered to use, question and improve AI systems for them to be effective and aligned with an organisation’s purpose. Training is therefore essential (both as part of onboarding for new employees and on an ongoing basis) and should be audience-specific, accessible and inclusive, as well as regularly refreshed to account for evolving systems, risks, and regulations.

9. Comply with Privacy Requirements

Boards must ensure that AI systems are adopted, developed and deployed in compliance with relevant data protection laws and organisational policies. Privacy-by-design means:

  • embedding privacy controls into the architecture and logic of the system;
  • minimising the use of personally identifiable information (PII) where not strictly necessary (see the sketch after this list);
  • ensuring valid consent, data subject rights, and appropriate safeguards; and
  • building systems that are transparent and explainable, especially where decisions may affect individuals.
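As one narrow illustration of the minimisation point above, a simple redaction pass can strip obvious identifiers before text leaves the organisation for an external model. The two patterns below are deliberately crude assumptions; real PII detection requires far more than a pair of regular expressions:

```python
import re

# Crude illustrative patterns only; production systems need proper PII detection.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
UK_PHONE = re.compile(r"\b0\d{2,4}[ -]?\d{3,4}[ -]?\d{3,4}\b")

def redact_pii(text: str) -> str:
    """Replace obvious identifiers with placeholders before external processing."""
    text = EMAIL.sub("[EMAIL]", text)
    text = UK_PHONE.sub("[PHONE]", text)
    return text

prompt = "Customer jane.doe@example.com called from 020 7946 0958 about her claim."
print(redact_pii(prompt))
# -> "Customer [EMAIL] called from [PHONE] about her claim."
```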

10. Comply with Security-by-Design Requirements

AI systems are only as trustworthy as they are secure. Boards must ensure that AI systems are designed, developed, and deployed with robust security controls in place, which in turn are regularly reviewed and updated to respond to emerging threats.
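At the implementation level, two of the most basic expressions of this principle are keeping credentials out of source code and bounding untrusted input before it reaches a model. The sketch below is an illustrative assumption, not a complete control set:

```python
import os

MAX_PROMPT_CHARS = 4_000  # illustrative input bound

def get_api_key() -> str:
    """Read credentials from the environment; never hard-code them in source."""
    key = os.environ.get("MODEL_API_KEY")
    if not key:
        raise RuntimeError("MODEL_API_KEY not set; refusing to run without credentials")
    return key

def validate_prompt(prompt: str) -> str:
    """Bound and sanity-check untrusted input before it reaches the model."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds configured limit")
    if "\x00" in prompt:
        raise ValueError("prompt contains disallowed control characters")
    return prompt

print(validate_prompt("Summarise the attached board minutes."))
```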

11. Test and Evaluate Systems and Remove from Use if Harms are Discovered

Before deployment, an AI system must be rigorously tested to ensure alignment with the organisation's ethics, safety and/or responsibility frameworks, performance expectations, and legal obligations. There must be regular, ongoing testing of such systems and where a system is found to introduce or reinforce bias, cause harm, or deviate from its original purpose, there must be a clear and tested mechanism to pause, remediate, or retire it.
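A minimal sketch of such a mechanism, assuming the organisation tracks a small set of pass/fail checks per system (the check names and the example system are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class EvaluationResult:
    """Outcome of one pre- or post-deployment check (illustrative)."""
    check: str
    passed: bool

def review_system(name: str, results: list[EvaluationResult], live: bool) -> str:
    """Block deployment, or pause a live system, if any check fails."""
    failed = [r.check for r in results if not r.passed]
    if not failed:
        return f"{name}: all checks passed"
    if live:
        return f"PAUSE {name}: failed {', '.join(failed)}; remediate or retire"
    return f"BLOCK deployment of {name}: failed {', '.join(failed)}"

results = [
    EvaluationResult("bias audit", passed=True),
    EvaluationResult("accuracy vs baseline", passed=False),
    EvaluationResult("legal sign-off", passed=True),
]
print(review_system("credit-scoring-model", results, live=True))
```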

12. Review Systems, Policies and Governance Practices Regularly

Further to principle 11 above, AI systems must also be subject to regular reviews of their functioning more widely: for example, ensuring that they continue to serve their intended purpose, operate safely and fairly, and remain aligned with the organisation's ethical commitments and risk appetite.
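Again purely as a sketch, a review cadence becomes mechanical to enforce once each system records when it was last reviewed; the six-monthly interval below is an assumed policy choice, not an IoD recommendation:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=182)  # assumed six-monthly review policy

last_reviewed = {
    "contract-summariser": date(2025, 1, 15),
    "customer-chatbot": date(2024, 6, 1),
}

today = date(2025, 6, 30)
for system, reviewed in last_reviewed.items():
    due = reviewed + REVIEW_INTERVAL
    status = "REVIEW OVERDUE" if today > due else f"next review by {due}"
    print(f"{system}: {status}")
```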

The Bottom Line

Effective AI governance isn't about saying "no" to innovation. It's about creating the structures that allow businesses to say "yes" responsibly and strategically. The organisations that will thrive are those that:

  • have a real-time inventory of AI systems in use;
  • retain board-level veto over AI deployment;
  • train and upskill people to use AI effectively and responsibly;
  • rigorously test systems before deployment and remove them if harms are discovered; and
  • review systems, policies and governance practices regularly.

AI adoption and governance are highly context-specific; there is no 'one-size-fits-all' approach. Effective governance depends on a range of organisational factors including sector, size and maturity, and must be adaptable to reflect the constantly evolving technology and policy landscape. Boards must commit to agility, a culture of curiosity and innovation, and continuous learning in order to responsibly steer AI's strategic application in alignment with core organisational values and long-term goals.

By rigorously working through the checklists provided in the IoD paper and embedding the 12 principles set out above into an organisation, boards can build trust with stakeholders, ensure regulatory compliance and position their organisations well in an AI-driven future.

For further information and resources, please consult the full IoD Business Paper on AI Governance in the Boardroom.