Bringing small high-harm platforms into the online safety regime: how one word changed the game

The online safety regime splits in-scope regulated services into categories: Category 1, Category 2A and Category 2B. Category 1 services, the largest user-to-user services with certain ‘functionality’ (defined in cl. 234), receive the toughest risk mitigation duties: they must provide ‘user empowerment tools’ (cls 15-16) and effective terms and conditions in relation to content that would formerly have been referred to as content ‘harmful to adults’. Category 1 services are also under obligations regarding fraudulent advertising (as are Category 2A search services), and more detailed obligations generally. For an overview of the different requirements on services in each category, see this comparison table.

The threshold criteria for Category 1 services are to be defined in a Statutory Instrument (Sch. 11, para. 1) based on the number of users of a service and the service ‘functionalities’ (i.e., what the service allows users to do) as well as ‘any other characteristics […] or factors […] the Secretary of State considers relevant’. In July 2023, Ofcom published a call for evidence to inform its advice to Government on the thresholds for these categories.

The Government’s addition (at Commons Report stage) of ‘any other characteristics’ to the categorisation criteria partly recognised the gap in the regime. However, as former DCMS Minister Jeremy Wright argued, this change did not go far enough to stop size being the dominant criterion for platforms’ categorisation.

Some small platforms pose a very high risk to users but, because of their size, would not have met the Category 1 threshold under the previous provisions in the Bill. These small high-harm services, which include dedicated hatred and harassment sites, would not have been subject to ‘Triple Shield’ risk mitigation measures appropriate to the level of harm they pose to users. This opened a large gap in a risk-based regime, both in terms of user protection and Ofcom’s ability to intervene. For example, these small services could evolve into high-harm sites organically, or, once the regime is in force, they could become so deliberately, as a means of escaping the regulatory requirements that would otherwise be imposed only on larger sites.

The House of Lords attempted to address this issue at Committee stage, by means of Amendment 192 tabled by Baroness Morgan and Amendment 192A tabled by Lord Griffiths. Despite cross-party support during the Committee debate, the Government stated they were ‘not taken by these amendments’. Its rationale centred on the view that power over ‘public discourse online’, and the ‘highest risk’, are concentrated in the largest platforms. Amendment 192 was therefore withdrawn, and Amendment 192A was not moved.

To address concerns about the unmitigated risks posed by small but high-harm platforms, Baroness Morgan subsequently tabled a simple amendment to Schedule 11 (Amendment 245), supported by Baroness Kidron, Lord Stevenson of Balmacara and Lord Clement-Jones. In relation to determining which services fall within Category 1, the amendment would ‘move from a test of size “and” functionality to a test of size “or” functionality’. Baroness Morgan called a vote; the Government was defeated and the amendment was agreed. By changing the Category 1 threshold in Schedule 11, para. 1(4) from a size ‘and’ functionality test to an ‘or’ test, the amendment allows Ofcom to bring smaller sites into this category and makes the regime more risk-based.
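The effect of the one-word change can be expressed as a boolean condition. The sketch below is purely illustrative and not drawn from the Act or any Statutory Instrument: the threshold value, variable names and function are hypothetical stand-ins for whatever criteria the Secretary of State ultimately sets.

```python
# Illustrative sketch only: the threshold and names here are hypothetical,
# not taken from the Act or any Statutory Instrument.

USER_THRESHOLD = 7_000_000  # hypothetical user-number threshold


def meets_category_1(users: int, has_relevant_functionality: bool) -> bool:
    """Hypothetical Category 1 test after Amendment 245.

    Before the amendment, Schedule 11, para. 1(4) required size AND
    functionality; the amendment changed this to size OR functionality,
    so a small service with high-risk functionality can still qualify.
    """
    meets_size = users >= USER_THRESHOLD
    # Pre-amendment test, for contrast:
    # return meets_size and has_relevant_functionality
    return meets_size or has_relevant_functionality


# A small, high-harm service: it fails the size test but has the
# relevant functionality, so under the 'or' test it falls within Category 1.
assert meets_category_1(users=50_000, has_relevant_functionality=True)
```

Under the old ‘and’ test, the same service would have returned false on the size limb alone, which is precisely the gap the amendment closes.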

Amendment 245 closed the gap and now extends the protection of the Triple Shield to adult users of these services, requiring Ofcom to consider functionality independently of size when determining a service’s categorisation. The overall effect is that the forthcoming OSA creates a risk management regime for all social media, with bigger or riskier companies carrying greater responsibilities.