Online Safety Act Network

Search Illegal Content Duties

The first phase of Ofcom’s consultations on implementing the Online Safety Act focuses on the illegal content duties. In this explainer, Prof Lorna Woods and Dr Alexandros Antoniou set out what these duties do in relation to search services and what they require from regulated services in response.  A complementary explainer is available on how the illegal content duties apply to user-to-user services.

There are three types of duty for search services relating to illegal content:

  1. risk assessment;
  2. safety duty; and
  3. ancillary duties.

While the framework is similar to that for user-to-user services, the details of the duties do differ between the two categories of service provider, arguably reflecting the different nature of those services.  The obligations of search services only apply to “search content”, a defined term which excludes advertising content (s.57).

Note that the duties in relation to fraud are separate.

Each of the risk assessment and safety duties recognises two types of content:

  • priority illegal content; and
  • non-designated illegal content.

The Act contains a definition of “illegal content” (s.59) which is the benchmark or trigger for these particular duties. The definition is linked to what the Act terms a “relevant offence”, and there are two aspects to consider. The first is the type of offence in issue: the Act defines “priority illegal content” by reference to the offences listed in Schedules 5 (terrorism offences), 6 (child sexual abuse offences) and 7 (a range of other offences). (This table shows the full list of priority offences along with CPS guidance, where available.) Any other relevant offence (giving rise to what is called here non-designated illegal content) is one where the victim or intended victim is an individual. The second aspect concerns when content amounts to a relevant offence: broadly, where its use, possession or publication, or the viewing or accessing of it, constitutes the offence. We will return to the more detailed meaning in another blog.

Risk Assessment

All services should carry out a “suitable and sufficient” risk assessment (s 26), in particular focussing on the risk of users encountering each of the types of priority illegal content (that is, looking at each type separately) and then non-designated illegal content. Explanatory Notes (to an earlier version of the Bill) state that “service providers will therefore need to assess how likely content is to be illegal, and therefore how likely it is that their search content contains illegal content, on the basis of the best information available to them” (para 156). In doing this, the service should bear in mind the “risk profile” that applies to the particular service. The risk profile is something that Ofcom will develop following a market-level risk assessment. Ofcom will also produce other documents to help services carry out their obligations:

  • guidance about how to do a risk assessment; and
  • guidance about illegal content judgements.

In assessing risk, the service should bear in mind the risks presented by the algorithms used and the way that the service indexes, organises and presents search results, as well as how the design and operation of the service affect those risks. By contrast to user-to-user services, there is no obligation to consider the level of risk that the service could be used for the commission or facilitation of a priority offence. Although the principal concern is about harm caused by content, the risk assessment needs to look at the role of the service, its design and the way it is operated in contributing to that harm.

The risk assessment must be kept up to date. Ofcom may take enforcement action against a service provider if it takes the view that the risk assessment is not suitable and sufficient.

Safety Duty

The search safety duty (s. 27) includes, in relation to all illegal content (priority illegal content and non-designated illegal content):

A duty, in relation to a service, to take or use proportionate measures relating to the design or operation of the service to effectively mitigate and manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service (see section 26(5)(c)).

Note that this says nothing about content. Presumably service providers can choose how to mitigate and manage harm. Certainly, the Act makes clear that the duties apply “across all areas of a service” (s. 27(4)), and so mitigations can be interventions in any of those areas.

The Act provides a non-exhaustive list of possibilities, and further examples can be seen in the interim codes:

  • compliance and risk management systems (effective oversight and review mechanisms, and crisis response mechanisms);
  • design of functionalities and algorithms (e.g., changing the design of algorithms to prevent users from being directed to illegal content, to stop autocomplete functions from suggesting illegal content, or to avoid suggesting further material based on previous searches);
  • developing in-house tools, or using third-party tools, to keep up to date with search terms indicating priority illegal content (e.g., CSEA);
  • directing users to reporting mechanisms for illegal content, warning them about the possible consequences and directing them to alternative sources of support or information;
  • policies on terms of use (e.g., prohibiting illegal content, specifically highlighting priority illegal content and ensuring policies are informed by an up-to-date understanding of the threats from illegal content on the platform);
  • providing functions that allow users to control the content they encounter in search results;
  • content moderation systems, including the use of automated tools such as hash, URL and/or keyword lists (especially in relation to CSEAM and terrorism content), or human moderators and systems to respond to user concerns (a simplified sketch of such filtering follows this list);
  • streamlined reporting systems for trusted flaggers and similar organisations, especially in relation to Schedule 5 and 6 priority illegal content;
  • user support; and
  • staff policies and practices (e.g., ensuring staff are properly trained and supported) and staff support where appropriate.
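
To make the item on automated tools more concrete, the following is a highly simplified sketch (in Python, with entirely hypothetical names and data) of how candidate search results might be screened against a URL blocklist and a keyword list before being returned to a user. It is illustrative only and is not drawn from the Act, the interim codes or any particular provider’s systems; real deployments rely on vetted, regularly updated lists and far more sophisticated matching.

    # Illustrative sketch only (Python 3.9+). All names and data are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class SearchResult:
        url: str
        snippet: str

    # Hypothetical, illustrative lists (not real data).
    BLOCKED_URLS = {"https://example.org/known-bad-page"}
    FLAGGED_KEYWORDS = {"hypothetical-prohibited-term"}

    def filter_results(results: list[SearchResult]) -> list[SearchResult]:
        """Drop results whose URL is on the blocklist or whose snippet
        contains a flagged keyword; everything else is returned unchanged."""
        safe = []
        for result in results:
            if result.url in BLOCKED_URLS:
                continue  # exclude results pointing to listed URLs
            text = result.snippet.lower()
            if any(keyword in text for keyword in FLAGGED_KEYWORDS):
                continue  # exclude results matching flagged terms
            safe.append(result)
        return safe

    if __name__ == "__main__":
        candidates = [
            SearchResult("https://example.com/ordinary-page", "an ordinary snippet"),
            SearchResult("https://example.org/known-bad-page", "anything"),
        ]
        print([r.url for r in filter_results(candidates)])  # only the ordinary page survives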

The Act requires Ofcom to ensure that the systems are effective and proportionate to the kind and size of the service and to the number of UK users, and that there are adequate controls over access to the service.

There is potential overlap between systems and processes decided upon in relation to this duty, and those relating to the priority illegal content duties.

A further duty applies to both priority illegal content and non-designated content:

A duty to operate a service using proportionate systems and processes designed to minimise the risk of individuals encountering search content of the following kinds—

(a) priority illegal content;

(b) other illegal content that the provider knows about (having been alerted to it by another person or become aware of it in any other way). (s 27(3))

This obligation is different from the user-to-user duty. It does not refer to the removal of content but to the risk of users encountering certain types of content. Moreover, the obligation is to have systems and processes in place which are aimed at this objective; it is not an obligation to ensure that users never encounter the content (see also the ancillary duties below). Note the difference in treatment between priority illegal content and non-designated content: the duty to minimise applies to all priority illegal content, whether or not the provider knows about that content, whereas for non-designated content the obligation is conditional on the provider’s awareness of the content.

The systems and processes are to be proportionate – taking into account what the risk assessment says about the likelihood of particular types of content or harm arising, as well as the size and capacity of the service provider. This, and the fact that the obligation focuses on the process of design (“designed to”), means that the system does not need to achieve a 100% success rate; rather, it must be appropriate to the types and prevalence of risk. It may be that the proportionality requirement allows a service provider to distinguish between different categories of illegal content, treating high-harm (e.g., CSEAM) or high-prevalence content differently from categories which are lower harm or less likely to occur.

This obligation must also be understood against the general obligations to have regard to users’ freedom of expression (s 33). 

Ofcom will provide more detail on how to implement this obligation in codes of practice – and those codes will focus on systems, design and functionalities (see Sch 4). In this context, it seems reasonable to assume that, as well as reflecting the findings of the risk assessment in terms of the content prioritised, Ofcom will look at the resources allocated (whether they are adequate to the number of UK users). In so far as automated processes are used, Ofcom may also look at the degree of tolerance for false negatives and false positives (consider, for example, the level of certainty at which tools flag matches, which hash sets are being ingested and the overall size of the hash data set), as well as at the effectiveness and speed of any internal review or user appeal mechanisms (see below).
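
As a purely illustrative aside, the trade-off around the “level of certainty at which tools flag matches” can be made concrete with a toy Python sketch. Here a 64-bit perceptual-style hash is compared against an ingested hash set by Hamming distance, with a tunable threshold: a stricter threshold produces fewer false positives but more false negatives, and vice versa. The values and names are hypothetical, and this is not a description of any actual tool or hash database.

    # Toy example of threshold-based hash matching. All values are hypothetical.

    def hamming_distance(hash_a: int, hash_b: int) -> int:
        """Number of differing bits between two 64-bit hashes."""
        return bin(hash_a ^ hash_b).count("1")

    def flag_match(candidate: int, hash_set: set, threshold: int) -> bool:
        """Flag the candidate if it lies within `threshold` bits of any entry
        in the ingested hash set; threshold=0 means exact matches only."""
        return any(hamming_distance(candidate, known) <= threshold for known in hash_set)

    ingested_hashes = {0xA1B2C3D4E5F60718}   # hypothetical ingested hash set
    near_miss = 0xA1B2C3D4E5F60719           # differs from the known hash by one bit

    print(flag_match(near_miss, ingested_hashes, threshold=0))  # False: strict, risks false negatives
    print(flag_match(near_miss, ingested_hashes, threshold=4))  # True: tolerant, risks false positives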

The Act requires the service provider to set out how it aims to protect its users from illegal content, specifically identifying its approach to priority illegal content categories. If a service provider is using “proactive technology” (as defined in the Act) for these purposes, it must say so in its terms of service. The terms of service must be written in clear and accessible language, and must be applied consistently. Additionally, a Cat 2A service provider must summarise in its terms of service the findings of the most recent illegal content risk assessment (s 27(9)).

By contrast to the user-to-user duties, there are no additional obligations in relation to priority illegal content.

Ancillary Duties

In addition, service providers must operate systems allowing users and “affected persons” to easily report content that they reasonably consider to be illegal content (s 31). One question is whether, for priority content such as CSEAM, procedures should be in place to deal with complaints relating to material that might not itself be illegal content but could still be connected to child sexual exploitation and abuse (e.g., self-generated images can indicate a child is being groomed and coerced into producing images). Another question is whether specific types of content require special reporting systems, for example image-based sexual abuse. The UN Special Rapporteur on Freedom of Expression recently likened the reporting process to “shouting into a void”, noting that reporting mechanisms are cumbersome, confusing, require reporting on an item-by-item basis – which for some priority offences (e.g., harassment or controlling and coercive behaviour) is not appropriate – and require users to fit their experience into pre-defined categories (“Promotion and Protection of the Right to Freedom of Expression”, para 98). Ofcom’s guidance on women and girls (when produced) may be relevant here.

There must also be systems allowing users and affected persons to make complaints, whether because they think the service provider is not complying with the illegal content safety duties or because a user thinks action taken against him or her is wrong (including complaints about the use of proactive technology) (s 32). The complaints system must allow for appropriate action to be taken if a complaint is upheld. The system must be easy to use (including by children if the service is accessible by children) and must be transparent.

The policies and procedures that govern handling of complaints must be set out in a service provider’s terms of service, and these must be easily accessible, including for children. This is to ensure that users and affected persons can easily find and use the complaints policies and procedures, though the Act says little about the quality of those processes.

As noted, there are duties in relation to freedom of expression and privacy which act as a counter-weight to the concern about over-moderation, as does the right to complain about de-indexing or demotion in search results. We return to these issues in a separate blog.