Ofcom's Additional Safety Measures consultation: cross-cutting issues
Ofcom’s consultation on new safety measures for inclusion in both the illegal harms codes of practice and the protection of children’s codes of practice is welcome. It puts forward a number of proposals to fill urgent gaps in Ofcom’s approach to implementation and to bring consistency between the first iterations of the two sets of codes. In many areas, Ofcom has taken on board stakeholder feedback - particularly from civil society - and reflected on whether their approach could be improved or whether the evidence they have gathered can support a new safety measure. Their engagement with civil society organisations during this consultation process has been open and constructive - including deliberative workshops and other consultation opportunities - allowing for more detailed feedback on the pros and cons of the proposed measures, and for that feedback to be brought into consideration earlier in the process. The longer deadline for responses was helpful for over-stretched organisations over the summer/early autumn period - though it is important to put on record that the supporting material published with this consultation was still very long, and that Ofcom has launched three further Online Safety Act consultations in the meantime, all of them closing shortly after this one.
The measures - which we discuss in more detail in our full consultation response (below) - in many places suggest a more robust and confident approach to dealing with the impact of harm on users. There are some important “tidying up” measures to ensure parity between the children’s and illegal harms codes, underlining the benefit of the “iterative” approach to continuous improvement.
But, as we commented when the consultation was published, this iterative approach is cumbersome and time-consuming. The gap between the original versions taking effect and the revised versions coming into force will be significant - these measures are unlikely to be implemented for at least another 12 months - leaving users unprotected from known risks for an unacceptably long period. In the case of the measures for proactive tech (or automated content moderation, as they were previously called), a consultation was promised at the end of 2024; and, as we set out below, the risks relating to livestreaming, particularly to children, were well evidenced by Ofcom themselves in both their illegal harms and protection of children’s risk registers, published in November 2023 and March 2024 respectively. Meanwhile, new evidence of harm continues to emerge, leaving the regulator in a permanent state of “catch-up” rather than putting the onus for proactive risk mitigation (or, at the very least, reactive harm reduction) onto the platforms.
This commentary blog draws together some of our thematic observations and concerns about Ofcom’s approach and the suite of new measures it proposes. It supplements our detailed measure-by-measure analysis, which will be submitted to Ofcom as part of our consultation response and is available as a PDF at the bottom of this page.
Cross-cutting issues
Technical feasibility
We have already published a detailed paper on Ofcom’s use of a “technically feasible” proviso for some of its measures - applied in a way that does not appear entirely consistent - along with a number of detailed recommendations to reduce the risks of this “get-out clause”, which could leave users exposed to unacceptable risks. As noted in our analysis, one “consequence of including proviso wording in the measure is that this has the effect of allowing the service providers – at least in the first instance – to determine whether they need to comply with the measure or not” and, while in some places Ofcom is clear that it could investigate, there is no detail provided by Ofcom as to how or when they will assess whether this “self-declared technical infeasibility” is justified. None of this is incompatible with what Ofcom has so far done. Another consequence “is that the service provider will satisfy the measure (at least on its own assessment) and therefore benefit from safe harbour” under the codes, without any obligation to expand their protections elsewhere to make up for this.
Among our recommendations arising from the analysis, we call on the Government (again) to remove the “safe harbour” provision in the OSA which allows regulated services to be in compliance with their duties if they have followed the codes of practice, regardless of the risks that they may have identified in their risk assessment; and to introduce a new “no rollback” clause, to prevent service providers taking away technical functionality in order to allow them to claim that introducing a specific safety measure is “technically infeasible”. (See our letter from February 2025 to the former DSIT Minister, Baroness Jones, with these recommendations.)
We also make a number of recommendations for Ofcom: to clarify where the boundary between “technically infeasible” and “too costly to implement” lies for services of different sizes; to set out a process for reviewing advances in technology that may make this proviso redundant; and to put in place additional monitoring and enforcement steps for services which use this proviso to avoid implementing a particular measure.
Human rights framing
This has been a concern of ours for a while. (See, for example, our analysis of Ofcom’s initial illegal harms consultation proposals here and of their draft protection of women and girls guidance here.) In this latest consultation, Ofcom has expanded the range of human rights it recognises, which is welcome. There seems to be some recognition that victims have rights too, and that Ofcom has an obligation to protect these rights (para 1.35) - though this discussion seems to focus on victims as speakers (para 1.34). In the assessment of specific measures, there is, however, recognition of other rights: for example, the discussion of hash matching in relation to CSAE and IIA recognises the impact on victims’ Article 8 rights. While we agree with the conclusion Ofcom reaches in that discussion, a concern remains. Moreover, at no point does Ofcom recognise that different types of speech may have different value: while political speech is highly valued, the European Court of Human Rights has taken a different approach to gratuitous insults, even before we consider hate speech. The general discussion posits that the question Ofcom asks is whether a measure constitutes an unjustified interference with human rights (para 1.36). While this is clearly a relevant and important question to ask, at no point does Ofcom ask whether, in not recommending a measure (for example, because of costs), there will be a failure to protect rights. This framing therefore has significance for assessing whether further action is needed or, conversely, whether burdens on service providers are too great.
Ofcom’s approach to proportionality in this regard has also been a long-term concern, with the main driver being economic: to avoid imposing costs on companies. We noted in our response to the illegal harms consultation that: “While the OSA requires regulated services take a "proportionate" approach to fulfilling their duties, and indeed requires Ofcom to look at resources, Ofcom is also required - among other issues - to look at the severity of harm”; and that “this focus on costs and resources to tech companies is not balanced by a parallel consideration of the cost and resource associated with the prevalence of harms to users (for example, on the criminal justice system or on delivering support services for victims) and the wider impacts on society (particularly, for example, in relation to women and girls and minority groups, or on elections and the democratic process).” We note that in this new consultation Ofcom has shifted its approach to proportionality in places: for example, in para 1.33 (“we therefore start from the position that UK users should be protected from the harms set out in the Act and place weight on all the specific evidence of harm set out in our Registers of Risk”) and in para 1.35. But the regulator’s main focus is still on harms, not rights - and there is no explicit recognition of some of the articles in issue - notably Article 3 (freedom from torture and inhuman or degrading treatment) and Article 4 (freedom from slavery and forced labour) - which are not qualified at all.
Approach to evidence and to weighing harm
This is also an area of long-standing concern (see commentary in our full response to the illegal harms consultation). In this consultation, Ofcom sets out very clearly why we have concerns: “While we may have evidence of risk of harm (or of how it manifests), we do not always have evidence about effective ways to proportionately mitigate this risk (as is needed when making recommendations in the Codes). For example, we do not always have evidence of which measures are effective or what unintended consequences they may have.” (Para 1.59) This approach fails to take account of the impact of the unintended consequences of not acting (especially when there is evidence that this inaction will lead to harm), or to weigh these against the unintended consequences of acting where the evidence for the recommended action may not be complete. It also fails to deliver the underpinning objective of the Act - to put the onus on service providers to take steps to mitigate the risk of harm on their services when they identify it, rather than waiting for an “evidenced” measure to be codified by the regulator.
There is, however, some recognition that acting on the basis of more limited or partial evidence might be appropriate, which we would like to see developed further: “Online service providers within the scope of the Act (and the technologies they use) are evolving rapidly, and new harms may emerge as a result. There is a need for prompt action to protect people online. Therefore, some of our proposals are based on an assessment of more limited or indirect evidence of impact and have a reliance on logic-based rationales. We welcome comments on this approach, as well as additional evidence in relation to any of our proposals in this consultation.” (para 1.66)
The most obvious example of the difficulties with Ofcom’s weighing of evidence is the livestreaming measures. The evidence base for the harm that can be caused by livestreaming was already well established when Ofcom published its illegal harms consultation in November 2023, and was further expanded in its risk register for the children’s codes consultation in March 2024; yet - as we flagged throughout our full response to that latter consultation - there were no recommended measures to mitigate the risks of that particular functionality. While it is welcome that Ofcom is now consulting on such measures, there is still a noticeable reluctance by the regulator to take a “safety-first” approach and recommend that, given the evidence of harm, children should be prevented from livestreaming entirely - or, if not a blanket prohibition, to explore a graduated approach that gives 16-17 year olds access (with safeguards) while preventing younger children from using the functionality at all.
Moreover, these options are not even put forward as ones on which views are sought, even if Ofcom’s preference is for a less prohibitive measure: instead, Ofcom says that it will only be “if we receive compelling evidence during the consultation” that they “are prepared to go further, which could include recommending that children are prevented from livestreaming entirely.” (Para 7.9) This reluctance to propose a measure is surprising given that Ofcom notes that “this is a step that some services have already taken as a matter of their own service design”. Instead of levelling up - using best practice in some services (presumably based on their own risk assessments of the harms to children) to set the bar for others - Ofcom is levelling down: allowing risky services to continue to take no action, putting the onus on consultation respondents to demonstrate why those services should be compelled to act, and providing no incentive for other services to “level up” and introduce similar measures in the meantime.
The timescales involved in iterating the codes are significant: two years since the first codes were consulted upon; a further year from now until the currently proposed measures are likely to be in force; and a further two years from whatever future consultation takes place until any strengthened measures come into force. This builds in an unacceptable delay to the mitigation of risks of well-evidenced harm to children. As we note above, the lack of a “no rollback” provision also means that those services which have already prevented children from livestreaming could decide to reverse this and still be in compliance with the new versions of the codes. (We discuss the livestreaming proposals further in our full response.)
Safety by design
We have written extensively about “safety by design” and the shortcomings of Ofcom’s approach to delivering the Act’s objectives in this area (see our responses to the illegal harms and children’s codes consultations), and our concerns remain here. While some of the proposed measures - including automated content moderation (para 1.51) and livestreaming (p27) - are framed by Ofcom as being “safer by design”, these are primarily ex-post mitigations for harmful content (reporting content, or relying on user action after harm has occurred) or the introduction of a form of safety tech (proactive tech measures), rather than the embedding of safe design at the level of systems and processes. There is still no understanding of what good service redesign should look like to ensure a more holistic orientation towards safety.
The proposed measures
Our detailed response to the proposed measures is attached as a PDF below.