Online Safety Act Network

User-to-User Illegal Content Duties

The first phase of Ofcom’s consultations on implementing the Online Safety Act focuses on the illegal content duties as they apply to user-to-user services. In this explainer, Prof Lorna Woods and Dr Alexandros Antoniou set out what these duties do and what they require from regulated services in response. A complementary explainer is available on how the illegal content duties apply to search services.

There are three types of duty for user-to-user (U2U) services relating to illegal content:

  1. risk assessment;
  2. safety duty; and
  3. ancillary duties.

Note that the duties in relation to fraud are separate.

Each of these recognises two types of content:

  • priority illegal content; and
  • non-designated illegal content.

The Act contains a definition of “illegal content” (s.59) which is the benchmark or trigger for these particular duties. The definition is linked to what the Act terms a “relevant offence”. There are two aspects that need to be considered. The first concerns the type of offences in issue. The Act defines “priority illegal content” by reference to offences listed in Schedule 5 (terrorism offences), Schedule 6 (child sexual abuse offences) and Schedule 7 (a range of other offences). (This table shows the full list of priority offences along with CPS guidance, where available.)

Any other relevant offences (giving rise to what we call here non-designated illegal content) are those where the victim or intended victim is an individual. The other aspect concerns how content relates to an offence: content amounts to illegal content if its use, possession or publication, or the viewing or accessing of it, constitutes a relevant offence. We will return to the more detailed meaning in another blog.

Risk Assessment

All services should carry out a “suitable and sufficient” risk assessment (s.9), focussing in particular on the risk of users encountering each type of priority illegal content (that is, looking at each type separately) and then non-designated illegal content. In doing this, the service should bear in mind the “risk profile” that applies to the particular service. The risk profile is something that Ofcom will develop following a market-level risk assessment. Ofcom will also produce other documents to help services carry out their obligations:

  • guidance about how to do a risk assessment; and
  • guidance about illegal content judgements.

In assessing risk, the service should bear in mind how “easily, quickly and widely” illegal content may spread. The service also needs to consider the level of risk that the service could be used for the commission or facilitation of a priority offence, and should assess the risk of harm to individuals resulting from this as well as from illegal content itself. The service should identify functionalities that are particularly risky in this regard. So, while the concern is about harm caused by content, the risk assessment needs to look at the role of the service, its design and the way it is operated in contributing to that.
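
The Act does not prescribe any particular way of recording such an assessment. Purely by way of illustration, the sketch below shows one possible structure for a risk register entry, scoring each kind of content against each functionality; the content kinds, functionality names, levels and scoring method are all hypothetical assumptions, not drawn from the Act or from Ofcom’s guidance.

```python
# Purely illustrative sketch of a risk register entry: the Act and Ofcom's
# guidance do not prescribe any particular format or scoring method.
from dataclasses import dataclass

LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class RiskEntry:
    content_kind: str    # e.g. a kind of priority illegal content, or "non-designated"
    functionality: str   # e.g. "recommender feed", "livestreaming", "direct messages"
    likelihood: str      # how likely users are to encounter the content via this functionality
    impact: str          # severity of harm to individuals if they do

    def score(self) -> int:
        # A simple likelihood x impact product; real assessments may also weigh
        # how easily, quickly and widely the content could spread.
        return LEVELS[self.likelihood] * LEVELS[self.impact]

register = [
    RiskEntry("terrorism content", "recommender feed", "medium", "high"),
    RiskEntry("CSEAM", "direct messages", "high", "high"),
    RiskEntry("non-designated illegal content", "public posts", "medium", "medium"),
]

# Review the highest-scoring combinations first.
for entry in sorted(register, key=lambda e: e.score(), reverse=True):
    print(entry.content_kind, "/", entry.functionality, "->", entry.score())
```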

The risk assessment must be kept up to date. Ofcom may take enforcement action against a service provider if it takes the view that the risk assessment is not suitable and sufficient.

Safety Duty

The U2U safety duty (s.10) includes, in relation to all illegal content (priority illegal content and non-designated illegal content):

A duty to effectively mitigate and manage the risk of harm to individuals as identified in the most recent illegal content risk assessment of the service. (s. 10(2)(c))

Note, this says nothing about content. Presumably service providers can choose how to mitigate and manage harm. One example would be changing the incentives around content creation. Another might be the introduction of limitations around account sign-in: there were concerns that “throwaway accounts” were responsible for hate speech directed at footballers, and user identifiability might assist in tracking perpetrators in the context of image-based sexual abuse and thus constitute a disincentive. A third example could be the introduction of more friction in the content creation/upload/dissemination process. Certainly, the Act makes clear that the duties apply “across all areas of a service”, and so mitigations can operate in all those areas.
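
By way of illustration only (nothing in the Act mandates any particular mechanism), a “friction” measure of the kind just described might throttle posting by new, unverified accounts rather than blocking them outright. In the sketch below, the account model, thresholds and function names are hypothetical assumptions.

```python
# Illustrative sketch only: one way a service might add friction for
# "throwaway" accounts by limiting how much a brand-new, unverified
# account can post until it has some history. Thresholds are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MIN_ACCOUNT_AGE = timedelta(days=7)   # hypothetical threshold
NEW_ACCOUNT_DAILY_POST_LIMIT = 5      # hypothetical threshold

@dataclass
class Account:
    created_at: datetime
    is_verified: bool
    posts_today: int

def may_post(account: Account, now: datetime) -> bool:
    """Allow established or verified accounts to post freely; throttle new,
    unverified accounts rather than blocking them outright."""
    if account.is_verified or now - account.created_at >= MIN_ACCOUNT_AGE:
        return True
    return account.posts_today < NEW_ACCOUNT_DAILY_POST_LIMIT

acct = Account(created_at=datetime.now(timezone.utc) - timedelta(days=1),
               is_verified=False, posts_today=5)
print(may_post(acct, datetime.now(timezone.utc)))  # False: daily limit reached
```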

The Act provides a non-exhaustive list of possibilities, and more can be found in the Explanatory Notes that accompanied versions of the Bill (e.g., see para 92 here) and in previous draft codes in this area (see the interim CSEAM code and the interim terrorism code):

  • compliance and risk management systems (effective oversight and review mechanisms, and crisis response mechanisms);
  • design of functionalities and algorithms (e.g., changing the design of algorithms to prevent users from being directed to illegal content; extent to which image scraping tools are prohibited);
  • policies on terms of use (e.g., prohibiting illegal content, specifically highlighting priority illegal content and ensuring policies are informed by up-to-date understandings of the threats from illegal content on the platform);
  • access policies (e.g., preventing repeat offenders from using their services; checks to ensure that content has not been shared non-consensually);
  • content moderation systems (e.g., content removal), including the use of automated tools such as hash, URL and/or keyword lists (especially in relation to CSEAM and terrorism content) or human moderators; systems to prevent the re-upload of taken-down material (absent a successful appeal, with an effective appeals process to allow users to challenge false positives), which take into account different forms of communication (e.g., text, image, video), whether static or livestreamed (see the illustrative sketch after this list);
  • user empowerment tools; and
  • staff policies and practices (e.g., ensuring staff are properly trained and supported).
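
To illustrate the re-upload point, the sketch below checks new uploads against a list of hashes of previously removed material. It is a deliberately simplified assumption: it uses a plain cryptographic hash to stay self-contained, whereas deployed systems typically rely on perceptual hashes (robust to re-encoding or cropping) supplied by specialist bodies, combined with human review and an appeals route.

```python
# Illustrative sketch of a hash-list check to prevent re-upload of material
# that has already been taken down. A plain SHA-256 is used here only to keep
# the example self-contained; it matches exact copies, not edited ones.
import hashlib

class ReuploadBlocker:
    def __init__(self) -> None:
        self._blocked_hashes: set[str] = set()

    def record_takedown(self, content: bytes) -> None:
        """Add removed content to the block list (absent a successful appeal)."""
        self._blocked_hashes.add(hashlib.sha256(content).hexdigest())

    def allow_upload(self, content: bytes) -> bool:
        return hashlib.sha256(content).hexdigest() not in self._blocked_hashes

blocker = ReuploadBlocker()
blocker.record_takedown(b"previously removed image bytes")
print(blocker.allow_upload(b"previously removed image bytes"))  # False
print(blocker.allow_upload(b"unrelated content"))               # True
```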

The Act requires Ofcom to ensure that the systems are effective and proportionate to the kind and size of the service and to the number of UK users, and that there are adequate controls over access to the service.

There is potential overlap between systems and processes decided upon in relation to this duty, and those relating to the priority illegal content duties.

A further duty applies to both priority illegal content and non-designated illegal content:

A duty to operate a service using proportionate systems and processes designed to, where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content. (s 10(3)(b))

The obligation is not a takedown obligation but rather an obligation to have a system in place to respond to notifications about content. Service providers are required to provide systems for reporting illegal content and for complaining about illegal content (see below) – and, depending on content type, services could think about streamlined reporting processes (or streamlined routes for certain trusted flaggers).

The system is to be proportionate – so taking into account what the risk assessment says about the likelihood of content or harm of particular types arising, as well as the size and capacity of the service provider. This, and the fact that the obligation focuses on the process of design (“designed to”), means that the system does not need to achieve 100% success; rather, it must be appropriate to the types and prevalence of risk. It also means that this obligation does not require a decision on individual items of content (though presumably systems set up to comply with the duty would need to make such decisions). It may be that the proportionality requirement allows a service provider to distinguish between different categories of illegal content, treating high-harm (e.g., CSEAM) or high-prevalence content differently from lower-harm or less likely to occur categories.
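
To make the proportionality point concrete, the sketch below shows one hypothetical way a system might order reports for review, treating high-harm categories such as CSEAM ahead of lower-harm or less prevalent categories; the categories and weights are illustrative assumptions, not taken from the Act or from Ofcom’s proposals.

```python
# Illustrative sketch only: a review queue that surfaces reports of high-harm
# priority illegal content ahead of lower-harm categories. Weights are hypothetical.
import heapq
import itertools

CATEGORY_PRIORITY = {  # lower number = reviewed sooner (hypothetical ordering)
    "CSEAM": 0,
    "terrorism": 0,
    "other priority illegal content": 1,
    "non-designated illegal content": 2,
}

class ReportQueue:
    def __init__(self) -> None:
        self._heap: list[tuple[int, int, str]] = []
        self._counter = itertools.count()  # preserves first-in order within a category

    def submit(self, report_id: str, category: str) -> None:
        priority = CATEGORY_PRIORITY.get(category, 2)
        heapq.heappush(self._heap, (priority, next(self._counter), report_id))

    def next_for_review(self) -> str:
        return heapq.heappop(self._heap)[2]

queue = ReportQueue()
queue.submit("r1", "non-designated illegal content")
queue.submit("r2", "CSEAM")
print(queue.next_for_review())  # "r2" is reviewed first
```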

This obligation must also be understood against the general obligations to have regard to users’ freedom of expression and, for the subset of Category 1 services, the duties to protect content of democratic importance, news publisher content and journalistic content.

Ofcom will provide more detail on how to implement this obligation in codes of practice – and those codes will focus on systems, design and functionalities (see Sch 4). In this context, it seems reasonable to assume that as well as reflecting the findings of the risk assessment in terms of the content prioritised, Ofcom will look at the resources allocated (whether they are adequate to the number of UK users) and, in so far as automated processes are used, at the degree of tolerance for false negatives and false positives (consider, for example, the level of certainty at which tools flag matches, which hash sets are being ingested and the overall size of the hash data set), as well as the effectiveness and speediness of any internal review or user appeal mechanisms.
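
The trade-off between false positives and false negatives can be illustrated with a simple two-threshold triage: above a high confidence level an automated tool acts (subject to appeal), in a middle band content goes to human review, and below that nothing happens. The threshold values and routing below are hypothetical assumptions for illustration only.

```python
# Illustrative sketch of the trade-off described above: where an automated
# matching tool reports a confidence score, two thresholds divide outcomes into
# automatic action, human review, or no action. Lowering the thresholds catches
# more true matches (fewer false negatives) at the cost of more false positives
# reaching action or review. Values are hypothetical.
AUTO_ACTION_THRESHOLD = 0.98   # hypothetical
HUMAN_REVIEW_THRESHOLD = 0.80  # hypothetical

def triage(match_confidence: float) -> str:
    if match_confidence >= AUTO_ACTION_THRESHOLD:
        return "act automatically (subject to user appeal)"
    if match_confidence >= HUMAN_REVIEW_THRESHOLD:
        return "send to human moderator"
    return "no action"

for confidence in (0.99, 0.90, 0.50):
    print(confidence, "->", triage(confidence))
```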

The Act requires the service provider to set out how it aims to protect its users from illegal content, specifically identifying its approach to priority illegal content categories. If a service provider is using “proactive technology” (as defined in the Act; s. 231) for these purposes, it must say so in its terms of service. The terms of service must be written in clear and accessible language, and must be applied consistently. Additionally, a Category 1 service provider must summarise in its terms of service the findings of the most recent illegal content risk assessment.

Additional Duties in relation to Priority Illegal Content

Duty to Prevent Users from Encountering

There is a duty to take proportionate measures relating to the design or operation of the service to prevent individuals from encountering priority illegal content by means of the service (s. 10(2)(a)). As before, this is not focussed on individual items of content but on the systems and processes that might be put in place to seek to achieve the objective. The fact that the system is to be proportionate means that perfection is not the threshold for satisfying the duty.

As with the other duties, the measures could bite at different stages of the information flow process. The obligation does not specify upload filters, and it would seem that a range of other options could be deployed (individually or in combination) instead. Other examples include limiting certain terms in discovery tools (internal search/hashtags) on the platform. Signposting to alternative content or advice pages could – depending on the type of priority illegal content in issue – be helpful.
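
As a purely hypothetical sketch of the discovery-tool point, a service might suppress results for certain search terms or hashtags and, for some queries, signpost to advice pages instead. The term lists and URL below are placeholder assumptions.

```python
# Illustrative sketch only: suppressing certain terms in internal search or
# hashtag suggestions and, for some queries, signposting to advice pages.
# The term lists and URL are hypothetical placeholders.
BLOCKED_TERMS = {"exampleblockedterm"}        # hypothetical
SIGNPOSTED_TERMS = {"examplesignpostedterm"}  # hypothetical
ADVICE_PAGE = "https://example.org/get-help"  # hypothetical placeholder

def handle_search(query: str) -> dict:
    normalised = query.strip().lower()
    if normalised in BLOCKED_TERMS:
        return {"results": [], "notice": "No results are shown for this search."}
    if normalised in SIGNPOSTED_TERMS:
        return {"results": [], "notice": f"Support is available at {ADVICE_PAGE}"}
    return {"results": ["...normal search results..."], "notice": None}

print(handle_search("ExampleSignpostedTerm"))
```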

While this may well be caught by measures taken under the general mitigation duty (above), the inclusion of the separate duty (i.e., “prevent individuals from encountering”) means that this will specifically have to be tackled, and that the mitigation measures must have the effect specified in this duty. It is also the case that some of the priority illegal content categories may have characteristics that permit specific interventions: for example, in relation to child sexual abuse and grooming, systems could be designed to prevent unknown adults from approaching children.

Duty in relation to Commission or Facilitation of Priority Offence

A second duty is to take proportionate measures relating to the design or the operation of the service to effectively mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence (s 10(2)(b)), as identified in the most recent illegal content risk assessment of the service. 

The NSPCC raised concerns that the draft Bill (which did not contain this provision) failed to tackle how abusers use platforms to organise in plain sight and post ‘digital breadcrumbs’ that signpost to child abuse images (e.g., ‘tribute sites’ using the name of a known survivor and used by perpetrators to form networks; account/group names that are overtly suggestive of CSEAM or of a sexual interest in children; or the use of QR codes). The duty here requires services to take steps to disrupt this signposting. More generally, it may be that the sharing of techniques or tools would also be caught: examples could include paedophile manuals; the sharing of details of sites carrying ‘leaked’ images/videos or for trading image-based sexual abuse material; and arguably even the sale of extremist merchandise and “survival kits”. It may be that this obligation also tackles some cross-platform risks – for example, groomers targeting children on one site and then encouraging them to move to another (e.g., a livestreaming site).

Duty to Minimise Length of Time Priority Content is Available

Thirdly, there is a duty to operate a service using proportionate systems and processes designed to minimise the length of time for which any priority illegal content is present (s 10(3)(a)). This could be about the speed and effectiveness of review, as well as about limiting how easily the impugned material can be re-posted. As before, some of the techniques discussed in relation to the general mitigation duty may be relevant here.

Ancillary Duties

In addition, service providers must operate systems allowing users and ‘affected persons’ to easily report content that they reasonably consider to be illegal content (s. 20). One question is whether, for priority content such as CSEAM, procedures should be in place to deal with complaints relating to material that might not itself be illegal content but could still be connected to child sexual exploitation and abuse (e.g., self-generated images can indicate that a child is being groomed and coerced into producing images). Another question is whether specific types of content, for example image-based sexual abuse, require special reporting systems. The UN Special Rapporteur on Freedom of Expression recently likened the reporting process to “shouting into a void”, noting that reporting mechanisms are cumbersome, confusing, require reporting on an item-by-item basis – which for some priority offences (e.g., harassment or controlling and coercive behaviour) is not appropriate – and require users to fit their experience into pre-defined categories (“Promotion and Protection of the Right to Freedom of Expression”, para 98). Ofcom’s Guidance on women and girls (when produced) may be relevant here.
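
As a hypothetical sketch of what a less item-by-item approach might look like, a reporting form could accept any number of items plus a free-text description of the pattern of behaviour in a single submission. The field names below are illustrative assumptions only.

```python
# Illustrative sketch only: a report structure that lets a user or affected
# person report a pattern of behaviour (e.g. harassment across many messages)
# in a single submission, rather than item by item. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Report:
    reporter_id: str
    suspected_offence: str                             # e.g. "harassment", "CSEAM", "threats"
    item_ids: list[str] = field(default_factory=list)  # any number of items in one report
    free_text: str = ""                                 # the reporter's own description of the pattern

report = Report(
    reporter_id="user-123",
    suspected_offence="harassment",
    item_ids=["msg-1", "msg-2", "msg-3"],
    free_text="Repeated abusive messages from the same account over two weeks.",
)
print(len(report.item_ids), "items reported in one submission")
```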

There must also be systems allowing users and affected persons to make complaints if they think the service provider is not complying with the illegal content duties, or if a user thinks that action taken against him or her is wrong (including complaints about the use of proactive technology) (s. 21). The complaints system must allow for appropriate action to be taken if the complaint is upheld. The system must be easy to use (including by children, if the service is accessible by children) and must be transparent.

The policies and procedures that govern handling of complaints must be set out in a service provider’s terms of service, and these must be easily accessible, including for children. This is to ensure that users and affected persons can easily find and use the complaints policies and procedures, though it says little about the quality of those processes.

As noted, there are duties in relation to freedom of expression and privacy which act as a counter-weight to the concern about over-moderation. We will return to these issues in a separate blog.