Legal Risk Management: Lessons from three high-profile failures

Lloyd Nail

Incidents such as the Challenger and Columbia Space Shuttles, the Hatfield train crash, the Piper Alpha oil platform explosion, the Grenfell Tower fire, the Post Office scandal and the Nimrod and Boeing 737 Max 8 air crashes have been studied for their systemic failures and tragic consequences. Beyond their immediate impact, these events offer enduring lessons for legal professionals (and others) engaged in risk management.

In this article, we aim to unpack the salient details of some of these failures to suggest lessons which legal professionals might seek to apply to their own legal risk management practices.

Explore each lesson below

The Nimrod air disaster showed that multiple prior incidents provided missed opportunities to identify key risks, in particular “the elephant in the room, which nobody saw because nobody was looking for it”.

We examine what this tells us about good practice for legal risk identification.


The Boeing 737 Max 8 crashes demonstrated flawed assumptions about pilot response times and a lack of pilot training on a critical system.

We consider what this tells us about the need to consider the wider system and its interfacing elements, including human and technological components, when advising on risk and appropriate mitigations.


Lessons from the first Report of the UK’s Covid-19 Inquiry illustrate pitfalls in the way risk-based advice may be commissioned and delivered to decision-makers.

We explore what this reveals about the importance of two-way dialogue and transparency to ensure advice includes uncertainty and options, enabling informed risk-based decisions.


Recent inquiries have shown the critical importance of not just making recommendations on future risk mitigation, but ensuring their proper implementation, which is liable to fail without clear accountability and sustained oversight.

We consider how failures to act on previous lessons may stem from diffused responsibility and look at how legal and compliance-related changes can be genuinely delivered.


Why we seek to manage risk

The incidents above illustrate how harms resulting from risk (including legal risk) are rarely the result of a single misstep, but emerge from a confluence of factors. In some senses, this should be unsurprising. The outputs of complex organisations (whether positive or disastrous) are the product of culture, strategy, technology, infrastructure, organisational structure, people, processes, procedures and other factors. Most of these layers contain risk mitigations or safeguards, whether by design or otherwise. So for a risk to crystallise into massive harm usually requires failures or ‘holes’ on multiple levels. This is the central thesis of James Reason’s ‘Swiss Cheese Model’ of complex system failure.
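
To make that intuition concrete, here is a minimal, illustrative Python sketch of the idea. The failure probabilities are invented for illustration only and are not drawn from any real incident data.

    import random

    # Minimal, illustrative sketch of the 'Swiss Cheese' idea: harm occurs only
    # when the 'holes' in every layer of defence line up. The probabilities are
    # invented for illustration and not drawn from any real incident data.

    def harm_occurs(layer_failure_probs):
        """Return True if, on this occasion, every defensive layer fails."""
        return all(random.random() < p for p in layer_failure_probs)

    def estimate_harm_rate(layer_failure_probs, trials=100_000):
        """Monte Carlo estimate of how often all layers fail together."""
        failures = sum(harm_occurs(layer_failure_probs) for _ in range(trials))
        return failures / trials

    if __name__ == "__main__":
        random.seed(1)
        well_maintained = [0.05, 0.05, 0.05, 0.05]  # four reasonably effective layers
        degraded = [0.30, 0.40, 0.25, 0.50]         # the same layers, each with more 'holes'
        print(f"Well-maintained defences: ~{estimate_harm_rate(well_maintained):.6f} harm rate")
        print(f"Degraded defences:        ~{estimate_harm_rate(degraded):.6f} harm rate")

The point is not the numbers themselves but the shape of the result: no single weakened layer causes the harm, yet together the weakened layers make it dramatically more likely.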

While the scale and visibility of such disasters may seem far removed from day-to-day legal operations, the underlying dynamics — particularly the failure of multiple safeguards and common human and system features of those failures — hold lessons for us all.

When things go so wrong that they result in harm on the scale illustrated in the above examples, it is usually because risk management has failed on multiple ‘levels’ – and within those levels of failure, there are often repeating flaws. Of course, not all high-profile failures share the same flaws. But the extent of overlap and repetition gives pause for thought. We should not assume that these flaws are absent from our own organisations, or from others with whom we work; to do so risks history repeating itself.

This article is largely focused on flaws in legal risk management in terms of inputs (risk identification), processes (legal and other expert advice) and structures (such as committees). However, whilst it is important to seek to get these right, they are the journey, not the destination. We must not forget why we seek to manage risk. Flawed inputs, processes and structures can deliver flawed outputs: real harm. There is a risk of being lulled into a false sense of security by the appearance of compliance, or even by genuine attempts to comply, particularly in circumstances where harm has not (or not yet) occurred. This is what Lord Haddon-Cave referred to as conflating ‘the appearance of safety’ with ‘the reality of safety’.

There are many offences in UK law, particularly those which generate large corporate fines, where it is a defence to show that the company has implemented ‘reasonable procedures’ – for example, failure to prevent the facilitation of tax evasion, failure to prevent fraud and Bribery Act offences. In such cases, the focus is not on whether risk management procedures have the appearance of being functional but whether the procedures were effective in practice. More pointedly, the defence for core health & safety offences is to show that it was ‘not reasonably practicable to do more than was in fact done’. The availability of the defence depends on ‘the reality of safety’, not ‘the appearance of safety’.

The new ‘Section 196’ approach to attributing criminal liability to corporates for the acts of their senior managers (under the Economic Crime and Corporate Transparency Act 2023), and the implementation of Provision 29 of the UK Corporate Governance Code (the requirement for Board sign-off on control framework effectiveness) from 1 January 2026, provide further reasons to ensure measures are effective both on paper and in reality.

In many cases, the standard by which your exposure will be measured – not only in terms of legal liability but potentially also in the court of public opinion – will not just be by reference to the quality of the inputs, processes or structures but by the outcome.


Lesson one

Identifying Key Risks: An example from the Nimrod Report

Facts:

On 2 September 2006, RAF Nimrod MR2 XV230 suffered a catastrophic mid-air fire, fatal to all on board.

The crew had been alerted to the fire around 90 seconds after completion of an air-to-air refuelling of the tanks.

There was ultimately nothing the crew could have done to avert disaster.

However, a Report later concluded that the disaster could have been averted if the risk posed by the combination of components in this case had been identified earlier, in particular in reflecting on certain precursory incidents, and in the design of the aircraft’s Safety Case.

What were the missed opportunities to identify this key risk?

Failures:

An enormously simplified explanation is that a fire broke out on the aircraft because fuel got into an area that was supposed to stay dry. In this space, there were both pipes carrying fuel and pipes that got extremely hot.

The fuel leaked or was pushed into this area and ended up touching one of these hot pipes, causing the fuel to catch fire. The materials meant to keep things safe had gaps or had worn out, making it easier for the fire to start.

Precursory incidents

In his ‘Nimrod Review’, the Rt Hon Lord Justice Haddon-Cave (as he now is, as a judge of the Court of Appeal) identified that there had been seven previous significant incidents which, in hindsight, were missed opportunities to identify the risks posed by the combination of components that led to the fire in this case.

Of these incidents, the one involving RAF Nimrod XV227 “should have been a ‘wake up call’ but appears not to have been”. The XV227 incident was a ‘near miss’ which involved the same combination of components but a different ‘failure sequence’.

These previous incidents illustrated the risks from split fuel seals, the potential for leaks from fuel couplings to migrate back down the plane, the fire risks from fuel leaking onto a hot duct and the risk that this could result in an ignition – all relevant to the incident.

Lord Justice Haddon-Cave found that of these incidents “Most tended to be treated in isolation as ‘one-off’ incidents with little further thought being given to potential systemic issues, risks or implications once the particular problem on that aircraft was dealt with. Rarely did anyone attempt to grasp the wider implications of a particular incident for the future”.

“It was the elephant in the room, which nobody saw because nobody was looking for it.”
Safety case failure

A safety case is a structured and evidence-backed argument (usually brought together in a single document) demonstrating that an asset (such as an aircraft) or system or process is acceptably safe for its intended application.

Lord Justice Haddon-Cave did not mince his words in reaching the conclusion that the Nimrod Safety Case had not successfully identified key risks: “The Nimrod Safety Case [drawn up between 2001 and 2005] represented the best opportunity to capture the serious design flaws in the Nimrod which had lain dormant for years”. In particular, he pointed to the importance of drawing on appropriate data when seeking to identify, assess and mitigate risks.

He concluded that “The Nimrod Safety Case process was fatally undermined by… a widespread assumption by those involved that the Nimrod was ‘safe anyway’ (because it had successfully flown for 30 years)”. He was concerned that “the task of drawing up the Safety Case became essentially a paperwork and ‘tick-box’ exercise”.

Lord Justice Haddon-Cave found that this was no excuse for failing to revisit the Nimrod Safety Case (NSC) once the facts of the Nimrod XV227 incident had become apparent: “the Nimrod [Team] failed properly to manage the NSC post its production and failed in any meaningful sense to treat it as a “living” document.”

Focus on the cause, not the process

It is not uncommon for legal advisers (internal and/or external) to be engaged in the aftermath of a near miss or accident, both to consider the immediate exposures (legal and otherwise) but also often to consider future exposure and mitigating measures.

When an organisation is subject to a ‘near miss’ or actual incident it is important to consider lessons learned. In particular, this includes the identification of relevant risks which have become apparent from what happened.

It is important not to ‘rush to implement process’: a not uncommon instinct in an organisation seeking to demonstrate to stakeholders that it is ‘doing something’ in response. Priority should be given to this initial groundwork of risk identification.

It is vital to commit sufficient time, resource and energy to this exercise. That may be a difficult task for a variety of reasons. Particular difficulties which we have seen play out detrimentally later include:

  1. where risk identification (or a proper understanding of the impact and/or likelihood of resulting harm) relies on expert knowledge;
  2. where risk identification requires knowledge of the interaction of components (systems, objects or organisational departments) which is not superficially obvious;
  3. where the resulting risk is hard to quantify, for example because the risk is less obviously financial and instead legal or human.

Seeking to ensure the right people are at the discussion table, and the right information is available for consideration, is an important step, although not guaranteed to result in a complete outcome.

Whilst examples like Nimrod are salutary, in some respects they can also mislead. In particular, a key risk can appear far more blatant in hindsight than it did at the time.

Those seeking to identify and mitigate risks do not have the benefit of hindsight; they are required to exercise foresight in circumstances where (not infrequently) a particular risk has never resulted in harm before, potentially leading to false confidence that ‘nothing bad will happen’ or ‘it won’t happen to us’.

These factors (amongst others) help explain why it would be a mistake to assume that misidentification of risk is restricted to ‘bad’ organisations, or that it could not happen in our own.

The initial exercise of risk identification (or, more correctly, the process of identifying the ‘hazard’ with potential to cause harm and then assessing the risk that it will do so) sits at the root of the many processes and actions which follow.

If a risk is not correctly identified, then the subsequent steps of the risk management cycle – identification of mitigations (or risk transfer), implementation of measures, and monitoring – will be correspondingly flawed due to this gap in its foundations.
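
As a rough illustration of that dependency, the hypothetical Python sketch below shows a minimal risk-register entry. The field names and the entry itself are invented for illustration, not taken from any real register; the point is simply that every downstream element (mitigations, ownership, review) hangs off the hazard having been identified in the first place.

    from dataclasses import dataclass, field
    from datetime import date

    # Hypothetical, minimal risk-register entry. Everything downstream in the
    # risk management cycle (mitigations, ownership, monitoring) attaches to a
    # hazard only once that hazard has been identified and entered here.

    @dataclass
    class RiskEntry:
        hazard: str                     # what could cause harm
        risk: str                       # how the hazard could crystallise into harm
        likelihood: int                 # e.g. 1 (rare) to 5 (almost certain)
        impact: int                     # e.g. 1 (minor) to 5 (catastrophic)
        mitigations: list = field(default_factory=list)
        owner: str = "UNASSIGNED"
        next_review: date = None

        @property
        def score(self):
            return self.likelihood * self.impact

    register = [
        RiskEntry(
            hazard="Fuel leak adjacent to a hot duct in a dry bay",  # illustrative only
            risk="Leaked fuel ignites on contact with the hot duct",
            likelihood=2,
            impact=5,
            mitigations=["Inspect seals on a fixed schedule", "Insulate or reroute the duct"],
            owner="Airworthiness team",
            next_review=date(2026, 1, 1),
        ),
    ]

    # A hazard that is never identified never appears in the register at all,
    # so no mitigation, owner or review date ever exists for it.
    for entry in sorted(register, key=lambda e: e.score, reverse=True):
        print(f"[{entry.score:>2}] {entry.hazard} -> owner: {entry.owner}")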


Lesson two

Understanding the risk within systems and getting the bigger picture: An example from the 737 Max 8 air crashes

Facts:

On 29 October 2018, Lion Air Flight 610 crashed into the Java Sea shortly after take-off, killing all 189 passengers and crew. Just over four months later, on 10 March 2019, Ethiopian Airlines Flight 302 crashed six minutes after taking off from Addis Ababa airport, killing all 157 people on board.

These crashes led to a worldwide long-term grounding of the Boeing 737 Max 8 aircraft and multiple investigations (in the US and Indonesia) into how the aircraft was approved for passenger service.
The details of the incidents were that:

  • The proximate cause of the crashes was that the onboard Manoeuvring Characteristics Augmentation System (MCAS) had overridden the pilot controls and put the planes into a nose-dive from which they never recovered.
  • Boeing had designed and installed the MCAS system as an important risk mitigation measure for the aircraft: in comparison to the Boeing 737-800 (from which the Max 8 was an evolution), the engines were larger and further forward on the wings. This changed the ‘manoeuvring characteristics’ of the plane: it had a tendency to ‘pitch up’ as thrust was added (e.g. on take-off). This pitch-up, if not adjusted for, might lead the plane to stall.
  • The MCAS was designed to make that needed adjustment: if the ‘Angle of Attack’ (AoA) Sensor (measuring the angle at which the plane meets the oncoming air) detected that the plane was approaching stall conditions, then the MCAS would automatically kick in, force a ‘nose down’, and avoid the stall.
  • In both crashes, faulty AoA Sensor data led the MCAS to impose a forced nose-down, leading the planes to crash.

Failures:

Investigation reports have concluded that these crashes were the result of corporate, cultural, design, build and regulatory failures.

Simulation versus reality

One of the critical (and fatal) assumptions made by Boeing and the US aviation regulator (the FAA) was that, even if an automatic MCAS activation did occur in such a scenario, the risk of a crash would be averted because pilots could (and would), within a matter of seconds, override MCAS by turning it off, reasserting manual control to pull the plane upwards from its ‘nose down’.

A particular flaw in this risk mitigation assumption was the assumed number of seconds it would take pilots to identify the MCAS activation, turn it off, and reassert manual control to avoid the nose down.

The US House Committee on Transportation and Infrastructure’s Investigation Report found that, prior to the first 737 Max 8s entering service:

“Boeing’s own analysis showed that if pilots took more than 10 seconds to identify and respond to a “stabilizer runaway” condition caused by uncommanded MCAS activation the result could be catastrophic. The Committee has found no evidence that Boeing shared this information with the FAA, customers, or 737 MAX pilots.”

Boeing considered, however, that this risk was mitigated. This was because, in simulator conditions, Boeing’s own flight simulator pilots had been able to identify and respond to this problem in under 4 seconds. The ‘4 seconds’ assumption also appears to have been based on understood industry norms and FAA guidance.

That assumption appears to have been flawed: as demonstrated by the tragedies which followed, in the reality of a cockpit alive with the sound of alarms and flashing displays, the pilots were not able to identify and respond to this problem in under 4 seconds, or in time to avert a plane crash.
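
The sensitivity of a human-dependent mitigation to its response-time assumption can be illustrated with a simple simulation. In the Python sketch below, the 10-second catastrophic threshold and the roughly 4-second simulator figure are taken from the findings quoted above; the response-time distributions themselves are hypothetical assumptions, chosen purely for illustration.

    import random

    # Illustrative sensitivity check on a human-response-time assumption. The
    # 10-second threshold and ~4-second simulator figure come from the findings
    # quoted above; the distributions below are hypothetical.

    CATASTROPHIC_THRESHOLD_S = 10.0

    def miss_rate(mean_s, sd_s, trials=100_000):
        """Fraction of sampled response times exceeding the threshold."""
        misses = 0
        for _ in range(trials):
            response = max(0.0, random.gauss(mean_s, sd_s))
            if response > CATASTROPHIC_THRESHOLD_S:
                misses += 1
        return misses / trials

    random.seed(42)
    # Simulator-like conditions: crews primed for the failure, little variability.
    print(f"Primed crew, low noise (mean 4s):    {miss_rate(4.0, 1.0):.2%} exceed 10s")
    # Assumed real-world conditions: unbriefed crew, competing alarms, so both the
    # mean response time and its variability are taken to be much higher.
    print(f"Unbriefed crew, high noise (mean 9s): {miss_rate(9.0, 4.0):.2%} exceed 10s")

The point of such a sketch is not the specific numbers, but how quickly the margin of safety evaporates once real-world variability is admitted into the assumption.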

Of further, linked importance (and by deliberate design on the part of Boeing, in service of a number of non-safety-related objectives): pilots were not told that MCAS could automatically activate itself to force a nose-down; pilots were not trained on what to do if this happened in a Max 8; and written operating manuals and pilot training materials did not mention the existence of the MCAS system. All of this lengthened pilots’ reaction times.

Faced with these factors, pilots in the real world did not respond in time. In light of this confluence of factors, as the subsequent report understatedly summarised:

“the accident pilot[s’ real world] responses to the unintended MCAS operation were not consistent with the underlying assumptions about pilot recognition and response that Boeing used… for flight control system functional hazard assessments, including for MCAS, as part of the 737 MAX design”

[National Transportation Safety Board, Safety Recommendation Report, ASR-19-01].

Consider the whole system

Amongst the many lessons of Max 8 is one about the value of considering the holistic system – the system components, human components and organisational context – when seeking to identify risks and design mitigations.

For example:

  • To the extent that an organisation is implementing a technical measure to mitigate an identified risk (MCAS in this example; but perhaps a new piece of IT in your world) it is necessary to understand how the organisation and its people will interact with that technology.
  • To the extent that an organisation is implementing risk mitigations based on human intervention (pilot MCAS override in this example, but perhaps some form of manual ‘red flag’ escalation procedure in your organisation) it is necessary to build in sufficient tolerance for how your systems, organisation and its people will behave in the real world.

Where a legal team is asked to advise on the implications of a particular internal failure or problem, be alive to the risk of advising based on partial information and/or a preliminary diagnosis of the reasons for failure.

In particular, for a variety of reasons (including in some cases self-preservation instinct in face of something ‘going wrong’) a legal issue might initially be presented as arising from a limited set of facts and circumstances (a single contractual issue, a flaw in a particular part of a system, the fault of a particular individual, the result of the behaviour of a contractor, a one-off oversight).

Things are rarely this simple: there are often a number of interacting system, human and organisation issues at work.


Lesson three

Making risk-based decisions: The distinction between the role of advisors and decision-makers: A lesson from Covid-19 Preparedness

Facts:

The context of the UK’s response to the Covid-19 pandemic needs little introduction. The Covid Inquiry was established in 2021 and set its Terms of Reference in June 2022.

The first ‘module’ of the Covid Inquiry was “The resilience and preparedness of the United Kingdom”. A key finding of its ‘Module 1 Report’ was that:

“In 2019, it was widely believed… that the UK was not only properly prepared but was one of the best-prepared countries in the world to respond to a pandemic. This Report concludes that, in reality, the UK was ill prepared for dealing with a catastrophic emergency, let alone the coronavirus (Covid-19) pandemic that actually struck.”

Within the eight most significant flaws summarised in the Report, there are two overlapping ones which concern the way in which advisers advise on risk, and the way in which decision-makers then decide what to do about it.

Failures:

Commissioning the ‘right’ advice

Decision-makers are often not sufficiently expert or apprised of relevant detail, meaning that “those making the requests may not know exactly how they should be framed”.

Although not a factor examined by this Inquiry, there might of course be the converse risk that decision-makers know enough that they deliberately frame the ‘ask’ in a particular way (“I want to be able to say that the answer is x”).

In either case, incorrect framing can impact the quality and value of the resulting advice, particularly if (for various reasons) advisers feel the understandable need to ‘stick to the brief’. As the Covid Inquiry found: “The way experts were asked to advise limited their freedom to advise”; “Often their remit was set too narrowly”; “The content of the [advisory] meetings was very much commissioned by [the decision-makers]… there was no expectation or explicit encouragement to consider issues beyond the specific commissions.”

Potential solutions were found to include those commissioning advice needing “to think more carefully about the remits of, and questions put to, the expert”, and a culture in which remits carry an “expectation or explicit encouragement to consider issues beyond the specific commissions”.

Giving the ‘right’ advice

Decision-makers are often constrained by time and the need to process multiple agenda items within that time. This raises the vexed question of the ‘right level of detail’ to provide to the decision-maker.

A key risk here is that the drive for conciseness, together with other factors (for example wanting to be seen to be ‘constructive’ (or at least not ‘obstructive’) or to streamline the decision), leads to the omission of sufficient detail to enable both an appropriately informed decision and, as importantly, the ability to challenge the advice.

At its highest, presenting advice which is a fait accompli removes true decision-making autonomy or reduces it to an underinformed binary outcome (go/no-go).

The Covid Inquiry found that “policy-makers were usually presented with a consensus view… [instead of] a range of options”. It recommended that: “[Decision-makers] should… be aware of the fact that they may be presented with uncertainty, and experts should be prepared to present it. An integral part of any advice is its inherent uncertainty. The advice of experts is no different. If a minister is to challenge effectively, those who provide advice to ministers should ensure that they communicate this uncertainty.”

In the specific context of Covid-19 preparedness it found: “the purpose of preparedness is to allow policy-makers to consider and interrogate policies in advance. The purpose of presenting ministers in advance of a pandemic with a range of options, each with scientific evidence and uncertainty of varying degrees, is to allow them to choose the most appropriate response when the crisis happens. This is necessarily a value judgement – underlining the importance of political leaders making the ultimate decisions”.
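
One practical way of embedding that discipline is to structure advice so that each option carries its evidence and its uncertainty explicitly, leaving the value judgement to the decision-maker. The Python sketch below is a hypothetical illustration of such an options structure; the fields and entries are invented and are not drawn from the Inquiry.

    from dataclasses import dataclass

    # Hypothetical structure for risk-based advice: a range of options, each
    # carrying its evidence base and its uncertainty, rather than a single
    # consensus recommendation. Fields and entries are illustrative only.

    @dataclass
    class AdviceOption:
        option: str
        rationale: str
        evidence_strength: str       # e.g. "strong" / "moderate" / "limited"
        key_uncertainties: list
        residual_risks: list

    options = [
        AdviceOption(
            option="Amend standard contract terms across the portfolio now",
            rationale="Removes the exposure identified in the recent dispute",
            evidence_strength="moderate",
            key_uncertainties=["Counterparties' willingness to renegotiate"],
            residual_risks=["Legacy contracts may never be reopened"],
        ),
        AdviceOption(
            option="Amend terms only as contracts come up for renewal",
            rationale="Lower cost and disruption",
            evidence_strength="limited",
            key_uncertainties=["How long the exposure persists on unamended terms"],
            residual_risks=["Exposure continues until each contract renews"],
        ),
    ]

    # The decision-maker sees the options side by side, with uncertainty stated,
    # and makes the value judgement; the adviser does not pre-empt it.
    for o in options:
        print(f"- {o.option} (evidence: {o.evidence_strength}; "
              f"uncertainties: {', '.join(o.key_uncertainties)})")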

The value of ‘access to the expert’

One solution to the above is to ensure direct dialogue between the adviser(s) and the decision-maker.

In the context of Covid-19 preparedness, the Inquiry stressed the value of “two-way discussion between ministers and experts. There should be a system that invites a back-and-forth between… advisers and decision-makers. This would enhance the quality of both the questions asked and the advice provided”.

It would also increase the prospect of effective interrogation of, and challenge to, the advice provided, which the Inquiry considered highly valuable:

“The quality of the decision-making of ministers will only be as good as the depth and range of advice they receive, as well as their interrogation of that advice.”

The right advice, the right way

Key parts of the job of an in-house adviser are the giving of legal advice to inform strategic and/or operational decision-making, or the commissioning of external expert advice (whether legal or from other professionals) to inform such decision-making.

A particular aspect of this role is the provision of advice to Boards and committees which may be taking important risk-based decisions.

As the failures highlighted above demonstrate, there are particular risks here around the way in which the advice is commissioned (‘the ask’), and the way in which (often with best intentions) that advice is filtered for decision-maker use.

The Covid Inquiry is not the first (and certainly won’t be the last, even in the near future) to recommend a form of ‘two-way dialogue’ between the adviser and the decision-maker as an important mitigation of these risks, and one likely to improve the quality of decision-making.

There are often logistical, cultural and other challenges in trying to get decision-makers and advisers in the same room together. However, the benefits of two-way dialogue are obvious.


Lesson four

Managing Change: Further Lessons from recent Inquiries

Facts:

In 2009, bush fires in the Australian state of Victoria resulted in over 150 deaths and massive property damage. An Inquiry (a Royal Commission) examined the facts and made 67 recommendations.

As is now common practice for Australian Inquiries, an ‘Implementation Monitor’ role was created to track whether these recommendations were being implemented. This proved to be highly valuable.

One of the recommendations was to have ‘Incident Control Centres’ ready and waiting so that fire-fighting efforts could be co-ordinated when a fire broke out.

The entity responsible for actioning this subsequently reported that this had been implemented.

However, when the Implementation Monitor sent his team in to inspect the work it was found that this was not true: the equipment was there but it was not installed and not ready to function.

When the Implementation Monitor reported this finding, it “resulted in a dramatic change in the attitude of the people responsible for implementation, because they realised that someone was actually out there looking at what they were doing and signing off on it.”

(Evidence of Neil Comrie, Australian Implementation Monitor, to the UK House of Lords Statutory Inquiry Committee).

Failures:

When an incident causes harm on any large scale, and in particular if it results in loss of life, the subsequent investigation and its outputs (usually a Report containing recommendations for the future) are in the public domain. What happens next usually is not.

Therefore whilst the lessons that could be learned are publicly available, evidence of whether those lessons are implemented in practice is often far less visible.

Diffused accountability

‘Change failure’ sometimes resurfaces in the public domain when some variation on the original incident recurs, or when a subsequent incident contains elements which would have been addressed had previous ‘lessons learned’ been implemented.

A commonly cited example is that patient deaths investigated by the Mid-Staffordshire Hospitals Inquiry in 2013 might have been less likely if certain recommendations made by the 2001 Inquiry into Children’s Surgery at the Bristol Royal Infirmary had been implemented. “It seems quite extraordinary that the general acceptance of the importance of clinical governance, and in particular clinical audit, which had been recognised nationally from the time of the Bristol Royal Infirmary Public Inquiry report… had failed to permeate sufficiently into Stafford to result in a functioning, effective system by 2009”.

There can be many reasons why organisations fail to learn the lessons of previous incidents (even when those incidents occur within their own organisation).

One is when the responsibility for change is allocated to a particular sub-committee or group. We have observed this as a frequent response to a requirement to implement organisational change and we have also observed the risks of doing so.

The Covid-19 Inquiry Module 1 Report (July 2024) on the UK’s Preparedness for Covid-19 found that:

“The Inquiry has noted… a number of areas where there was a failure to implement or complete recommendations from simulation exercises. Unfortunately, the various boards and groups set up to oversee this work proved to be largely ineffective… [T]hey were focused on creating groups, sub-groups and documents… [H]ad the actions, recommendations and learning from past exercises been properly implemented, the UK would have been far better prepared for the Covid-19 pandemic that ensued.”

Beyond committees and box-ticking

The appropriate mechanism(s) for the implementation and monitoring of mitigations, corrective actions or change measures will differ between organisations and depending on the circumstance (including the objectives pursued and the organisation’s resources).

However, it is a common approach to set up ‘working groups’ or ‘subcommittees’ tasked with ensuring and monitoring the implementation of measures within the organisation. Sometimes this role is assigned to the designated Compliance function within the organisation.

Legal professionals are sometimes involved in such sub-committees, or are asked to advise on setting up such structures to see that changes are implemented. Legal professionals are also sometimes asked to prepare Board (or Committee) reports updating on progress towards implementation and assessed ongoing risk exposures (legal or otherwise).

Setting up a special working group can demonstrate to stakeholders that an organisation has recognised the need for action and intends to follow through.

However, it is important to avoid taking undue comfort from the fact that a process has been put in place and that responsibility has been assigned. There are inherent risks in ‘hiving off’ the crucial task of delivering change and monitoring effective implementation.

There is a risk that top-level management adopt the view that these actions are ‘all in hand’ because they now sit with a specialist committee. There is also a risk that the committee itself will have a diffused sense of responsibility (in comparison to a specific individual who can be held accountable and responsible).

There are a variety of ways this could arise in the context of legal risk. An incoming piece of legislation may require that a particular department (e.g. finance, HR or IT) takes the lead in delivering a particular change to address legal compliance.

A recent dispute or litigation (perhaps involving the organisation itself) might alert the organisation to the need to revisit particular provisions in its standard terms of contract and to implement a process of amendments to its web of existing contracts.

Some important lessons here are:

  • to ensure that the process for delivering and monitoring such legal changes is clear (including clarity on accountability and responsibility);
  • to seek to avoid the situation (as seen in the Australian bushfire example) where those responsible for delivering change are unduly focussed on ‘ticking the box’ against each recommendation rather than properly interrogating whether the required change has actually been delivered (a minimal tracker sketch illustrating this distinction follows this list);
  • to seek to avoid delivery becoming a ‘hived off’ issue which becomes forgotten or dangerously deprioritised.
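
As a rough illustration of the distinction between ‘ticking the box’ and verifying delivery, the hypothetical Python sketch below separates what a responsible owner reports from what has been independently verified, echoing the role the Implementation Monitor played in the Victorian example. The statuses, owners and entries are invented for illustration only.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical recommendation tracker: distinguishes what the responsible
    # owner reports from what has been independently verified, echoing the
    # Implementation Monitor's role. Entries and field names are illustrative.

    @dataclass
    class Recommendation:
        ref: str
        description: str
        owner: str                        # a named individual, not just a committee
        reported_complete: bool
        independently_verified: bool
        evidence: str = None              # e.g. inspection report or test record
        due: date = None

        @property
        def genuinely_delivered(self):
            return self.reported_complete and self.independently_verified

    tracker = [
        Recommendation(
            ref="REC-01",
            description="Incident control centres installed and ready to operate",
            owner="Head of Operations (named individual)",
            reported_complete=True,
            independently_verified=False,  # equipment delivered but not installed
            due=date(2026, 6, 30),
        ),
    ]

    for rec in tracker:
        status = "DELIVERED" if rec.genuinely_delivered else "NOT YET VERIFIED"
        print(f"{rec.ref}: {status} (owner: {rec.owner}, due: {rec.due})")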

We will be launching our legal risk management toolkit at The Lawyer’s Managing Risk & Litigation conference on 23rd September 2025. Visit our legal risk management hub for more.
