
How is AI used in the Health and Safety Executive’s regulated sectors? 2025 AI report published


In May 2025, the Health and Safety Executive (HSE) published a report titled “Understanding how AI is used in HSE regulated Sectors” (here). The report is based on self-reported survey responses about the use of AI across a broad spectrum of industries, including construction, energy, manufacturing, public services and utilities. Its aim is to increase understanding of how AI is being used within industrial settings, informing the HSE's approach to regulating AI in those settings.

From the research, the HSE identified four key areas where AI is being used that might impact health and safety:

  1. Maintenance systems 
  2. Health and safety management
  3. Control of equipment and process plant
  4. Occupational monitoring

Over 250 current and potential uses of AI were identified in the research as potentially impacting health and safety. Examples include drone-based inspections, generative AI risk assessment tools, automated operations and real-time workplace safety monitoring systems.

The report also highlights significant risks. These include over-reliance on AI, deskilling of the workforce, algorithmic bias, and system failures due to poor data or lack of oversight. Respondents expressed concerns about warning fatigue, data privacy, and the opacity of AI decision-making.

However, the report also highlighted key mitigations through assurance techniques and control measures. For example:

  1. Developing and deploying processes, practices and standards in a safe and robust manner (e.g. through reviewing in the procurement process, controlled trials, testing, verification of system predictions, and data protection/security measures); and
  2. Managing and guiding the behaviour and operation of AI systems through technical and procedural mechanisms (including encrypted training, human review for high-risk decisions, fail-safe controls on autonomous vehicles, human-free zones, regular auditing and maintenance).

The respondents also identified a number of challenges faced in implementing AI solutions, covering a range of technical, human and business factors, such as integration with existing systems, accounting for bias within algorithms, awareness around inaccuracy of results, lack of training amongst individuals, distrust within the workforce, data issues and installation costs.

The report follows the HSE's earlier report on how it intends to regulate AI (here), which was itself published in response to the UK government's White Paper on AI regulation.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.

This article was written by David Harrison.
