24 May 2021

The UK government has recently published its Draft Online Safety Bill (the 'Draft Bill'), which sets out the legislative measures first proposed in its White Paper released in April 2019 and its subsequent response in December 2020. In our last update, we provided a recap of the key provisions and issues contained within the White Paper and the government’s response, and the Draft Bill is largely in keeping with the government’s direction of travel in this area. As well as providing additional detail that clarifies some of the government’s initial proposals, the Draft Bill also contains some new measures that go further than might previously have been expected.

Who should be paying attention?

The Draft Bill applies to 'user-to-user services' and 'search services'. The first category encompasses internet services that allow users to generate, upload and share content with other users. The second captures search engines that do not fall within the first category. The exact territorial and extra-territorial reach of the provisions was previously unclear; however, the Draft Bill clarifies that services provided both within and outside the UK are in scope provided the service has links with the UK, meaning broadly that it has a significant number of UK users or that the UK forms a target market for the service. The Draft Bill nevertheless applies only to the design and operation of the service in the UK or as it affects UK users.

Particularly noteworthy is the addition of 'Category 1' services, a designation reserved for the largest online user-generated content platforms, on which additional measures are imposed as outlined further below.

As expected, a number of services are expressly carved out of the scope of the Draft Bill, including services relating purely to email, SMS and MMS, as well as internal business services.

What do digital service providers need to do to comply?

There are a number of obligations outlined in the Draft Bill that all serve the same goal: establishing a duty of care on online providers to protect users of their services against illegal and harmful content. Illegal content largely encompasses content relating to terrorism and child sexual exploitation and abuse (CSEA). Harmful content is not specifically defined in the Draft Bill but includes content that the service provider should reasonably identify as having a material risk of an adverse physical or psychological impact on a child or adult of 'ordinary sensibilities', taking into account how many users may encounter such content and how rapidly it may be shared (or ‘go viral’) through the service. However, the potential financial impact of content is not a criterion in determining whether it is harmful, meaning that financially fraudulent online content and cyber-scams will not be caught by the ‘harmful content’ provisions in the Draft Bill. Such activity may nevertheless still be caught where it is deemed to constitute illegal content that is specifically 'user-generated' by individuals (see further detail on the Draft Bill’s impact on online fraud below).

Importantly, the obligations applicable to each service provider depend on their status under the proposals. Outlined below are the different categories of service providers and their corresponding obligations:

1. All providers of user-to-user services are required to:

  • conduct an illegal content risk assessment;
  • take proportionate steps to mitigate and manage the risks of harm identified by the illegal content risk assessment;
  • have systems and processes in place that minimise the presence and dissemination of illegal content and the length of time such content is present on the site, and swiftly take down illegal content when alerted to its presence;
  • specify in their Terms of Service how individuals are protected from illegal content and ensure that those Terms of Service are clear, accessible and consistently applied;
  • have regard to protecting users from unwarranted infringements of privacy as well as users’ right to freedom of expression in safety policies and procedures;
  • have appropriate reporting systems and complaints procedures in place, and take appropriate action in response to reports and complaints received through them; and
  • keep necessary written records pertaining to the above.

2. Providers of user-to-user services likely to be accessed by children are required to:

  • comply with all of the obligations outlined in point 1 above;
  • conduct a children’s risk assessment;
  • take proportionate steps to mitigate and manage the risks of harm to children both identified in the children’s risk assessment and present on the service;
  • have systems and processes in place that prevent children from encountering harmful content; and
  • specify in their Terms of Service how children are prevented from encountering harmful content and ensure that those Terms of Service are clear, accessible and consistently applied.

3. Providers of user-to-user services within the scope of Category 1 are required to:

  • comply with all of the obligations outlined in points 1 and 2 above;
  • conduct an adults’ risk assessment;
  • specify in their Terms of Service how content that is harmful to adults is dealt with by the service and ensure that those Terms of Service are clear, accessible and consistently applied;
  • conduct an assessment of the impact that safety policies and procedures would have on the protection of users from unwarranted infringements of privacy as well as users’ right to freedom of expression, conduct an additional assessment following the adoption of such policies and procedures, and specify the positive steps the provider has taken in response to the impact assessment;
  • have systems and processes that take into account the importance of the free expression of content of democratic importance in certain decision-making, including with respect to diversity of political opinion, and specify the relevant policies and processes in their Terms of Service (again ensuring those Terms of Service are clear, accessible and consistently applied); and
  • have systems and processes that take into account the importance of the free expression of journalistic content in certain decision-making, operate a dedicated and expedited complaints procedure in respect of journalistic content, and specify the relevant policies and processes in their Terms of Service (again ensuring those Terms of Service are clear, accessible and consistently applied).

Very similar obligations to those in points 1 and 2 above also apply to relevant providers of online search services. It will also be incumbent upon Ofcom (the proposed online safety regulator under the Draft Bill) to prepare codes of practice in relation to terrorism content and CSEA content, compliance with which will help service providers demonstrate compliance with their duties under the Draft Bill. These codes will build on the voluntary, non-binding interim codes of practice published by the government before the Draft Bill.

What should those in Category 1 in particular be paying attention to?

As well as the additional requirements in respect of the duties to protect adults from harmful content, Category 1 service providers have additional duties to protect content of democratic importance and journalistic content. 

The duty in respect of content of democratic importance applies to both news publisher content and user-generated content that appears to be specifically intended to contribute to democratic political debate in the United Kingdom. The duty in respect of journalistic content likewise applies to both news publisher content and user-generated content, in this case where the content is generated for the purposes of journalism. Importantly, the duty in relation to journalistic content is not limited in its application to professional journalists: the Draft Bill makes clear that creators of such content, including individuals, should benefit from the same dedicated and expedited complaints procedures in relation to decisions to remove or restrict access to their content.

Online platform providers caught within the scope of the Draft Bill will likely be conscious of how this balancing act between removing harmful content and protecting both content of democratic importance and journalistic content will play out in practice. 

In addition to Category 1, Ofcom will be required to establish a register of Category 2A and Category 2B services. As mentioned, Category 1 will comprise the largest user-to-user services, which will be subject to the additional obligations outlined above. Category 2A will include only regulated search services, whilst Category 2B will include other user-to-user services that do not meet the threshold conditions of Category 1. Service providers falling within any of the three categories (1, 2A and 2B) will be required to produce annual transparency reports outlining the steps they are taking to tackle online harms. Even where a service provider does not meet the threshold conditions for any of these categories, the core obligations outlined above will still apply if the provider falls within the scope of the Draft Bill.

Consequences of non-compliance

Ofcom will have the power to issue fines of up to £18 million or 10 per cent of qualifying worldwide revenue (whichever is higher) for non-compliance. Any penalty must, however, be appropriate and proportionate to the online service provider’s failures. Ofcom will also have a range of enforcement powers under the Draft Bill, including business disruption measures and the use of technology warning notices.

In certain circumstances, criminal liability may also fall on the shoulders of senior members of non-compliant service providers where they have failed to take reasonable steps to prevent offences from being committed.

What is the Draft Bill’s relevance to online fraud?

According to the government’s own remarks on the Draft Bill, it is intended to tackle user-generated fraud and to place responsibility for fraudulent user-generated content, such as posts on social media, on online platforms. The government’s comments specifically reference romance scams and fake investment opportunities, but note that fraud carried out via advertising, emails or cloned websites will not be in scope, because the Draft Bill focuses on harm committed specifically through user-generated content. These types of fraud are not specifically referenced in the Draft Bill itself, which will therefore rely on the obligations outlined above in relation to illegal and harmful content to capture such activity.

Next steps

Before the Draft Bill can come into effect, it will first be scrutinised by a joint committee of MPs before a final version is formally introduced to Parliament. Despite gathering momentum and mounting pressure to introduce effective regulation of the internet, it may still be some time before the proposals put forward in the Draft Bill receive Royal Assent.

If you have any questions or would otherwise like to discuss any issue raised in this article, please contact David Varney.
