Regulatory certainty? The Government's response to the SIT Committee report on Social Media, Misinformation and Harmful Algorithms

The Government and Ofcom last month published their responses to the Science, Innovation and Technology Committee’s report on Social Media, Misinformation and Harmful Algorithms. We have published a statement on the matters relating to online advertising, where the Government rejected the Committee’s recommendations. It has taken the same approach to all of the other Committee recommendations too: in this blog we focus on those pertaining to the Online Safety Act and related issues, and on the rationale the Government presents for rejecting them.

The Government’s position

In short, the Government’s position on the Committee’s report can be summed up as: we agree with the analysis but we have the Online Safety Act and therefore we are not minded to do anything more at present; please refer to Ofcom’s response where appropriate. (Ofcom’s response - inevitably - is to defend its approach and not engage with the Committee’s practical suggestions, such as what might constitute a robust crisis response protocol.)

Harm and its mitigation

Recommendations relating to research and data gathering are deferred to Ofcom. A recommendation that, based on such research, the Government should “publish conclusions on the level and nature of harm that these platforms promote through their recommendation systems”, and that platforms should publish the actions they take in response, is met with a response (p5) that refers to the duty platforms have under the OSA to mitigate risks - without acknowledging that, because of the “safe harbour” baked into the Act, companies only have to follow the measures in Ofcom’s narrow codes of practice to be in compliance with their duties. (We discuss these issues further below.)

Mis/disinformation and factchecking

The Government rejects the Committee’s recommendation that social media platforms “embed tools within their systems that identify and algorithmically deprioritise factchecked misleading content, or content that cites unreliable sources, where it has the potential to cause significant harm” (p6), claiming that the OSA “focuses on the worst kinds of mis- and disinformation – that which is illegal or harmful to children”. There is no indication that the Government has assessed what the “worst” kinds of mis- and disinformation actually are or, if it has, whether it is satisfied that the types not covered by the Act - eg election misinformation, climate change disinformation, anti-vaccination (and other public health) misinformation or conspiracy theories - are not among the worst, or even harmful. The Government goes on to note, in its response to this particular recommendation, that “most of the major social media platforms already employ fact-checking mechanisms as part of their content moderation strategies within the UK”. But, as Meta’s decision to end fact-checking across its platforms in the US in January this year demonstrates, there is no guarantee that companies will continue to employ these mechanisms, and there is nothing in the OSA to stop them rolling back on such protections in the UK. (See our comment piece on this from earlier in the year.)

Data and recommender algorithms

A proposal to mandate platforms to give users a “right to reset” their data, displayed prominently on their home page, is met with a convoluted description that overstates user rights under data protection law and comes to the surprising conclusion that “when taken together, the OSA and UK GDPR provide citizens with significant influence over how recommendation algorithms affect them”. There is no suggestion that - at the very least - these rights might be better publicised by the platforms. This part of the response is also wrong in its reference to the OSA’s provisions on Terms of Service and User Empowerment, which do not apply to recommender algorithms.

Access to data for research

Recommendations relating to independent third-party research are answered with reference to the provisions in the Data (Use and Access) Act for social media data access, with no confirmation of when the regulations to bring that regime into force will be forthcoming. This is particularly disappointing given that the calls for this access date back to the passage of the Online Safety Bill, over four years ago.

Small platforms

Calls for a new “small but risky” category of platforms are rejected on the basis that the Government “decided” there was no need to use existing powers to bring such platforms into category one under the existing categorisation provisions; what the response doesn’t mention is that this decision was based on advice from Ofcom (advice we, and other civil society partners, fundamentally disagreed with). Moreover, following the legal challenge from Wikimedia in the summer, that decision has now led to significant knock-on delays to the implementation of phase three of the regime.

AI-generated harms

Recommendations for urgent action to address harms arising from AI-generated content online are also rejected: “The government does not presently agree with the Committee’s recommendation for additional legislation in this regard. Doing so prior to full implementation of the OSA would complicate and undermine this process. The OSA lays the foundation for strong protections against illegal content and harmful material for children online, including content which is AI generated.” What the response doesn’t say is that “full implementation” of the OSA is not likely to be complete until well into 2027. By then, the current Government will most likely have run out of time to develop new legislation, introduce it and see it implemented before the next election.

Risk assessment and mitigation

Our main focus in the rest of this blog is on the recommendation that Ofcom and DSIT should confirm that services are required to act on all risks identified in risk assessments, regardless of whether they are included in Ofcom’s Codes of Practice. This is something we have argued for since the publication of the draft illegal harms codes of practice at the end of 2023. It would address the gap, evidenced in this table [link], between the comprehensive assessment of risks carried out by Ofcom, which relates the incidence of harm to specific features and functionalities, and the limited number of measures recommended by the regulator in its codes to reduce the risk of that harm. Ofcom’s justification for not including such a measure in its codes is that it is constrained by the Act’s requirement that code measures should be “clear” and “detailed”.

Interestingly, the Government takes a different tack in its response to the Committee’s call - we quote it in full here:

The Online Safety Act establishes duties on providers to carry out ‘suitable and sufficient’ risk assessments in primary legislation. The primary legislation establishes a wide range of risks that providers need to assess for, such as risks that users will encounter certain kinds of harmful content via the service. The Government can update these risks, through changes to the underlying kinds of content and offending that providers have duties for. However, providers are not generally obliged to identify risks beyond the matters that primary legislation stipulates. This is an important safeguard to give regulatory certainty to providers and protect against regulatory overreach and arbitrary enforcement. Subsequent to their risk assessments, providers have wide-ranging duties to take proportionate steps to protect their users. These duties are proportionate to the findings of their risk assessments and the size and capacity of the provider among other matters.
Ofcom sets out in codes of practice steps that different kinds of providers with different risk levels can take to fulfil their duties. Again this is an important safeguard for regulatory certainty. It gives providers direction about how far they need to go to mitigate identified risks, where a lack of such direction could affect users’ rights and Ofcom’s ability to enforce the duties effectively. Therefore the Act establishes that Ofcom needs to create robust and comprehensive codes of practice that ensure providers offer greater protection to UK users, while also giving these providers clarity. Ofcom has now published its codes of practice for the illegal content duties and children’s duties under the OSA regime. These recommend that all providers – including small providers with low risk levels - should take steps to deal with illegal content and to protect children on their service. These steps are comprehensive and cross-cutting. For example, they set out that providers should put in systems for moderating content. The expectation will be that relevant providers implement these in such a way that they operate effectively and deliver protections for UK users from relevant risks, including where these manifest through different kinds of features or functionalities.
Ofcom has stated that it intends to develop its codes of practice iteratively. In line with this, on 30 June Ofcom published various proposals for new measures for the OSA codes, for public consultation. These include additional steps that relevant providers should take in relation to livestreaming, recommender systems, use of proactive scanning technology, and crisis response. (p12)

There are a number of fundamental errors in this response. Firstly, it suggests that providers’ risk assessments are based on content stipulated in the Act which needs to be updated by the Government; they are not. Secondly, it suggests that the Committee’s recommendation was asking providers to “identify risks beyond the matters that primary legislation stipulates”; it was not. The gap between the risk assessment and its mitigation exists precisely because providers are required to identify risks relating to all the matters which primary legislation stipulates in order to comply with their risk assessment duties - but they do not have to do anything about all of them. They only need to follow the measures in the codes of practice, which relate to a small subset of risks. A significant volume of identified risks can therefore be left unmitigated as a result of Ofcom’s narrow codes - and providers will still be in compliance with their duties because of the safe harbour written into the Act. We will be publishing more soon on how these various provisions in the Act interrelate, leading to the approach taken by Ofcom that has been widely criticised as “unambitious”.

As the Liberal Democrat Peer, Lord Clement-Jones, said in a recent House of Lords debate on Ofcom’s protection of children codes:

“the criticisms that have been made are completely justified. These codes are too cautious; they fail to incorporate civil society expertise; and they are undermined by the safe harbour provision and an incremental approach that leaves gaps, which leave children vulnerable”.

The Government response - in a sentence that could have been drafted by tech lobbyists - then talks about this approach being necessary “to give regulatory certainty to providers and protect against regulatory overreach and arbitrary enforcement”.

Similarly, this sentence would come as a surprise to all the Parliamentarians, particularly those in the House of Lords, who participated in debates on the Online Safety Bill as it went through Parliament: “Therefore the Act establishes that Ofcom needs to create robust and comprehensive codes of practice that ensure providers offer greater protection to UK users, while also giving these providers clarity.”

In fact, the Act established - as set out in section 1 - a framework which “imposes duties which, in broad terms, require providers of services regulated by this Act to identify, mitigate and manage the risks of harm” and for providers to ensure that their services are “safe by design”. The requirement for companies to take steps to mitigate the risks that they identify in their risk assessments - many of which will be associated with design decisions, systems and processes - is entirely in keeping with the objectives of the Act. For the Government to argue otherwise is ludicrous.

It looks particularly odd given that the Code in relation to the recently enacted Product Regulation and Metrology Act (PRAMA) notes that “The enabling powers set out in the Act are broad by necessity to ensure that the regulatory system can adapt quickly to pressing threats and technological advances.” The general safety duty for products is broad - that products be “safe” - and the obligation to make workplaces “safe” can also be found in the Health and Safety at Work Act, a regime which has operated for a long time without concerns about regulatory overreach. The case law on legal certainty from the European Court of Human Rights - which, given that it relates to possible intrusions on individuals’ fundamental human rights, sets a higher standard than law in general - accepts, in relation to a sub-set of cases on surveillance, that there should be safeguards against abuse of powers. Ofcom is, in any event, constrained by the general framework governing regulators in the UK and specifically by the provisions in the Communications Act that guide and limit how Ofcom should behave, particularly section 3(3).

We urge the Government to reconsider its response to this recommendation from the SIT Committee and confirm that regulated services are expected to act on all risks identified in their risk assessments, amending the OSA if necessary to make this requirement explicit.

Overall, for a Government that is nearly 18 months old - and that, in Opposition, campaigned vigorously to address the limitations of the legislation passed by its predecessor - the position set out throughout this response, defending that legislation while being well aware of those limitations, is one that cannot be maintained.