20 April 2023

There is no denying that artificial intelligence (AI) is playing an increasingly prevalent role in our everyday lives, and individuals, employers, trade unions and indeed governments are growing concerned. A quick look at the news reveals that Italy has banned ChatGPT, that Elon Musk (along with other AI experts) is calling for a pause on the development of AI due to the perceived risks to society, and that this week the UK trade unions are holding a conference to discuss the implications of AI in the workplace.

Despite this, with its time and cost-saving capabilities and its ability to make the impossible possible – see M for Metaverse – the ongoing use and development of AI is likely to continue, and businesses will be looking at how to capitalise on the opportunities it offers.

Regulation of the development and use of AI in the UK has been limited to date, despite calls for change from various groups including the Institute for the Future of Work. The Trades Union Congress has also expressed concerns about the impact that AI may be having on workers’ rights.

By way of response, the UK government has now published its long-awaited White Paper, ‘A pro-innovation approach to AI’. Whilst recognising the gaps in AI regulation, as the title reveals, the government’s aim appears to be to adopt a regulatory (as opposed to a legislative) approach that doesn’t stifle innovation.

So what does the White Paper say, and what will it mean for employers?

The Government’s plans – the White Paper and its principles

There is currently no single regulatory body in the UK dealing with the use of AI. Instead, its use is regulated through existing legal frameworks, for example by way of the Financial Services and Markets Act 2000 and data protection regulation. The government considers that risks arise across, or in the gaps between, these existing regulatory remits.

However, despite calls for a dedicated AI regulator, the government has opted not to go down this route. Nor, it seems, will it be introducing an AI-specific legislative regime. Instead, the White Paper sets out the government’s plan to “leverage and build on existing regulatory regimes, whilst intervening in a proportionate way, to address regulatory uncertainty and gaps posed by the use of AI”.

The government is not proposing, at this stage, to introduce new legislation specifically to deal with the use of AI. Instead, it will rely on existing regulators – such as the Health and Safety Executive, the Equality and Human Rights Commission (EHRC) and the Employment Agency Standards Inspectorate (EASI) – to take forward protections in this area. Regulators will be expected to interpret and apply five new “values-focused cross-sectoral principles” to address any AI risks which fall within their remits, in accordance with existing laws and regulations. So, for example, the Information Commissioner’s Office (ICO), which is responsible for enforcing the UK GDPR, will be expected under the framework outlined in the White Paper to issue guidance on the application of the five AI principles in the context of existing data protection legislation. The government’s view is that such an approach will enable the UK’s framework to adapt as the technology develops.

The five principles are:

1. Safety, security and robustness – to ensure that AI systems are technically secure and function reliably as intended throughout their life cycle.

2. Appropriate transparency and explainability – to ensure that appropriate information about an AI system is communicated to the relevant people and to make sure that relevant parties can access, interpret and understand the decision-making processes of an AI system.

3. Fairness – to ensure that AI systems do not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair outcomes.

4. Accountability and governance – to ensure that clear expectations for compliance and best practice are placed on appropriate organisations in the AI supply chain.

5. Contestability and redress – to ensure that the outcomes of AI use are appropriately contestable.

Over the next 12 months, regulators will need to issue new guidance on these principles, and/or update existing guidance, to set out how organisations should implement the five principles within their respective sectors. Regulators are also encouraged to publish joint guidance on one or more of the principles where AI use crosses multiple regulatory remits. Organisations will be expected to adhere to guidance that applies across all sectors, as well as to guidance issued by relevant sector-specific regulators.

Work is already being done in certain fields. For example, the ICO issued updated guidance on AI and data protection in March 2023.

The government has said that it will work with regulators to identify any barriers to the application of the principles, and will evaluate whether the non-statutory framework is having the desired effect in due course. In the longer term, the government anticipates introducing a statutory duty on regulators requiring them to have due regard to the principles.

What does this mean for employers?

As to what happens next, as noted above, we can expect new guidance to be issued by existing regulators on the use of AI. In addition, we can expect joint guidance to be issued by relevant regulators. In particular, the White Paper envisages that the EHRC and the ICO will be supported and encouraged to work with the EASI and other regulators and organisations in the employment sector to issue joint guidance. Such guidance is likely to focus in particular on the principle of fairness, given the existing risk that some AI programmes may show unfair bias in favour of certain characteristics, and/or may discriminate unfavourably (and potentially unlawfully).

The White Paper does set out a case study designed to show the potential impact such guidance might have on a company that uses AI systems to accelerate its recruitment process. It claims the guidance would make things clearer for businesses by:

  • Clarifying the type of information businesses should provide when implementing AI systems
  • Identifying appropriate supply chain management processes such as due diligence or AI impact assessments
  • Suggesting proportionate measures for bias detection, mitigation and monitoring
  • Providing suggestions for the provision of contestability and redress routes

Specific regulatory guidance on what is expected would be helpful for employers, as they will need to ensure not only that their use of AI doesn’t pose any legal risk (for example, discrimination claims where algorithms are shown to favour particular characteristics to the detriment of those who don’t share them), but also that it adheres to the new regulatory frameworks and principles.

If your sector is not regulated by a specific regulator, you will still need to consider any relevant guidance issued by cross-sector regulators such as the EHRC, HSE and the ICO.

Although the UK may not currently be legislating specifically in relation to AI, employers need to remember that existing laws will already be relevant here. For example, if an employee or prospective employee believes that they have been discriminated against by an employer’s use of AI software, that might give rise to a claim under the Equality Act 2010. Employers also need to be mindful of the implied contractual duty of mutual trust and confidence that exists between employer and employee, which may be called into question where, for example, AI is used intrusively. How well existing laws will adapt to the new issues that the use of AI will inevitably throw up remains to be seen – will an employer be able to establish fairness where it has used AI to justify selection decisions in a redundancy exercise, for example?

Equally, employers need to pay heed to the global picture (in particular in the EU and US), which will affect how AI systems are developed and may apply to organisations that have operations in other jurisdictions or are looking to do business internationally. Whilst the UK may no longer be in the EU, employers should not ignore what is happening there. The European Commission is proposing to introduce an EU AI Act (expected later this year) to address the risks posed by AI, which may have implications for UK employers, particularly those who do business in the EU, as it will apply to the use of any AI systems in the EU, regardless of where the employer is based. The USA, whilst not having an equivalent to the EU AI Act, is also showing signs of activity in this area, with New York City recently introducing legislation that prohibits the use of AI tools to hire candidates or promote employees unless those tools have been independently audited.

Timetable for implementation

The White Paper contains over 30 consultation questions for individuals and organisations and is open for responses until 12 June 2023. The government then plans to issue the cross-sectoral principles to regulators, together with initial guidance for their implementation, through an AI regulation roadmap. It is envisaged in the White Paper that this will be published by the government in the next 6 months, alongside the government response to the consultation.

Actions for employers

Although it may be some time before any regulators issue their AI-specific guidance, there are some preparatory steps that employers may wish to take now to help “future proof” their approach to the use of AI:

1. Liaise with others in your organisation to identify and audit the AI-based technologies used within your business, and consider how their use may measure up against existing employment and data protection legislation as well as against the proposed principles. Whilst there isn’t a formal ‘AI impact assessment’ as yet, the Institute for the Future of Work has published guidance on conducting an ‘algorithmic impact assessment’ that may be useful to consider.

2. Ensure any colleagues currently dealing with or considering the use of AI within the business are familiar with the government’s White Paper proposals and flag the opportunity for consultation.

3. Identify which of the sector-specific regulators may apply to you and look out for updates.

 

If you would like to discuss the implications raised in this article, please contact Jamie Cameron.

 

Source: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#annexc
