Beyond bans and piecemeal interventions: a safety-by-design approach to online safety

In the second of our series of blogs on the Government consultation on “Growing up in the online world”, we look at the interventions proposed by the Government in light of the legislative developments of the past week. Our first blog, on the framing of the consultation, is available here, and further blogs will follow on compliance and enforcement, parental and other support, and what’s likely to happen next.

Background

As we set out in our first blog, the Government’s consultation was produced at speed earlier this year as a result of pressure from Parliamentarians and parental campaigners. The subsequent decision to introduce amendments to the Children’s Wellbeing and Schools Bill taking a number of “Henry VIII powers” to enact the findings from the consultation, including - potentially - an under-16 ban, was also politically expedient: it promised that something would be done, without setting out what or when, and without affording civil society the opportunity to influence, or Parliament to consider, more extensive standalone online safety legislation. You can read the analysis of the shortcomings of this legislative approach from our expert legal adviser, Prof Lorna Woods, here.

Through successive Parliamentary debates in the Commons and the Lords, the Government was forced to amend its proposals - not least as the House of Lords stood firm behind Lord Nash’s alternative proposals to bring in a social media ban for under-16s immediately. Not all Peers necessarily agreed with that as a final outcome, but they insisted on sending the amendment back to the Commons repeatedly until the Government committed to a more robust alternative. We set out below the detail of what is now enshrined in the Children’s Wellbeing and Schools Act and what that means for the consultation, which - at the time of writing - still has nearly three weeks to run.

The Government’s legislative commitment

The Government’s amendment - the text of which is at 38A here, with the final amendments to that amendment at 38Z18 - 38Z21 - requires that the Secretary of State, following its “Growing Up in an Online World” consultation, must “by regulations make provision requiring providers of specified internet services—

(a) to prevent access by children of or under a specified age to specified internet services which they provide, or to specified features or functionalities of such services;
(b) to restrict access by children of or under a specified age to specified internet services which they provide, or to specified features or functionalities of such services.”

In addition, the Secretary of State must, within three months of the Act passing (so by 29 July 2026), lay a statement in Parliament setting out “what progress has been made towards making the first regulations” and provide a timeline for making the first regulations. That timeline is specified in another provision: “for the first regulations to be laid before Parliament before the end of the period of 12 months beginning with the day on which the statement is laid under subsection”. If that is not done, another statement must be laid explaining why, with an extension of six months afforded for the regulations to be laid.

It is worth noting here that the earliest any of these regulations are likely to be laid (though they may take longer to be in force) is July 2027 (15 months); the latest is January 2028 (21 months). While discussions have seemingly gone forward on the basis that there will be a single use of the powers, this is not made express in the terms of the Henry VIII power, unless the requirement that the action be taken within a certain timeframe is itself understood as a limitation on any future exercise of the powers.

What does this mean for the Government’s way forward?

In any scenario, the Government will have to move fast - both to make the political and policy decisions as to what it is going to put in regulations, in order to make a statement to Parliament in July, and then to draft the regulations to the necessary level of detail to bring those decisions into workable form within the following year. In practice, it is likely that the political decisions on what measures to take forward as a result of the consultation will be taken before that consultation closes. With both of the either/or options - a “ban” or age-gating of specific features and functionalities - still in play, the Government’s decision to yield in this way to political pressure will not mean that parental pressure in support of a ban lets up, either.

Bans

So, what does the consultation tell us that the Government now “must” do? The options are somewhat limited. The first set of options relates to where to set the age of access to social media services: respondents who support a legal requirement for a minimum age of access are asked whether that should be “at least 16” (as a yes or no answer) or “lower than 16”, with options given as 13, 14, 15 or “other”. A further question asks for views on the impacts of setting the minimum age “higher than 13”. Note that these options refer only to social media services, despite the consultation in other sections talking about other services and products, including gaming, and the section on potential scope asking for views on “how online services, including but not necessarily limited to social media, might be restricted to build our children’s wellbeing”. The Online Safety Act uses the phrase “user-to-user services”, which covers social media services but also messaging, gaming and even some chatbot services. The framing of the question as “social media” is therefore narrower than the services regulated by the OSA. In terms of the general desirability and effectiveness of some kind of “ban”, or age-restricted access to entire services, our legal adviser, Prof Lorna Woods, provided oral and written evidence to the Science, Innovation and Technology Committee in their recent standalone inquiry on the issue.

Age-gating features and functionalities

The second set of options, linked to the commitment to regulate via the Children’s Wellbeing and Schools Act, concerns restrictions on access to services based on their features and functionalities. The consultation lists a number of examples: livestreaming, the ability to send and receive images and videos containing nudity, location sharing, stranger-pairing and disappearing messages. The consultation questions ask whether these functionalities (or others, for respondents to specify) should be age-restricted and, if so, what the preferred minimum age should be. A further question asks to what extent respondents agree or disagree with the statement “restricting children’s access to these features/functionalities would provide for a safer online experience for children”, and views are also sought on what the impacts would be if age restrictions on these features were brought in.

It is, however, worth noting here that action on these features and functionalities could already have been taken by Ofcom under the OSA. With the exception of the ability to send and receive images and videos containing nudity, all of these features were flagged by Ofcom as potentially harmful to children, both in their draft register of risks in May 2024 and in the final register of risks published in April 2025. Yet the regulator chose not to introduce any measures relating to these features in either their illegal harms or children’s codes of practice - which came into force last March and last July respectively. The regulator has since consulted on adding livestreaming as an additional safety measure, with the detailed measure on that due to be incorporated into a further iteration of the codes later this year. (Our table setting out the gap, across both sets of codes, between Ofcom’s identification of other risky features and functionalities and the corresponding mitigation measures to address those risks is here.)

While it is welcome that the Government is in effect taking matters into its own hands here, and proposing to bring in regulations that specify in law the need for age-gating of individual features, the missed opportunity by Ofcom to take action over the past two and a half years since the OSA came into force is regrettable. It is also, as we set out below, likely to be compounded by what is now a rushed, piecemeal approach to playing catch-up - picking out a limited number of individual features for post-hoc intervention (e.g. children can’t access them) rather than taking a more systemic approach to ensuring safety for children across the entirety of the service and its design - bearing in mind the limited range of services that could be caught by the proposals. Moreover, age-gating services that include these ‘risky functionalities’ as part of their model will do nothing to stop those design choices being made in the first place - and adult users, older teenagers (16 and up) and under-16s who bypass age verification will continue to experience harm at the hands of online services, whether large or small.

A further set of questions, also in scope of the updated regulations, asks for views on “persuasive design” features associated with “addiction”: infinite scroll, autoplay, affirmation functions, and alerts and push notifications are singled out for consideration, along with personalised algorithms.

Respondents are asked which of those are “particularly ‘persuasive’”, which features should be age-restricted and at what minimum age. Additional questions ask for views on setting daily screen time limits for individual apps or restricting overnight access (so-called “curfews”). In its recent Parliamentary statements, the Government has also made clear that consideration of these latter measures is in addition to the either/or decision on a ban or age-restricted features and functionalities.

Harmful design

This welcome focus on features and functionalities demonstrates an understanding from the Government that platforms are making design choices that are actively harmful to their users in order to generate more engagement and, in turn, more profit. This has been extensively researched and evidenced by civil society and survivors alike, from algorithmically recommended self-harm and suicide content, targeted advertising and filter bubbles, through to sexually explicit search suggestions for children and the psychological harm caused by deceptive design choices. Work has also been undertaken in recent years by regulators on “harmful online choice architecture” and dark patterns (see this CMA paper) and on harmful design (work from the DRCF).

But again, there have been many missed opportunities under the existing OSA framework to address these issues. In the consultation document, the Government observes that “the DSIT Select Committee noted in their recent report, ‘Social media, misinformation and harmful algorithms’, thinking about these questions through the lens of the business model can be helpful”. What the Government doesn’t mention is that last summer it rejected every single one of that Committee’s recommendations to amend the OSA or take action on some of the structural and systemic problems the Committee had identified as causing harm - and on which the Government is now rushing to act. Ofcom has also taken no action via its regulatory powers in relation to the business model, even though the OSA lists it as something companies must take account of when carrying out risk assessments (see section 9 and section 11 for user-to-user services).

Scope

Mindful of the comparisons with the Australian social media ban - which the Government were keen to emphasise when they launched the consultation, but which is only applicable to a very small number of services - respondents are also asked which types of services they think the restrictions should apply to. Do refer to Prof Lorna Woods’ explainer on the design of the Australian ban, its scope and some of the implementation issues; recent research on its effectiveness includes this survey from the Molly Rose Foundation, although results from the eSafety Commissioner’s evaluation will take longer to emerge. A separate section looks at options relating to AI chatbots, on which we will publish commentary shortly.

So, given the imperative - the Government “must” bring in regulations that either ban children from platforms or age-gate particular features - the fact that the consultation’s options in the latter regard are very limited (a handful of features associated either with harm or with “addiction”), and the considerable uncertainty over the scope of the services that might be covered, is there scope for going further to ensure that we really - finally - see a step change in user safety online?

We believe there is.

A coherent, future-proofed, comprehensive way forward

Firstly, the Government must bring forward a targeted Bill in the next session of Parliament to enact all the changes we have called for in our 10 Point Plan for Strengthening the Online Safety Act. Most urgently - and requiring only minimal, targeted amendments to the legislation - three of our proposals are directly relevant to Ofcom’s failure to act earlier on harmful features and functionalities, and are interlinked: introducing a requirement for regulated services to address all the risks identified on their services; removing the “clear and detailed” and “technically feasible” criteria for code measures; and removing the “safe harbour” provision, which limits companies’ compliance with the Act to the measures in the codes. These issues need to be addressed urgently to prevent further gaps emerging in OSA implementation and enforcement in the coming years.

Secondly, the Government must ensure that, in Ofcom’s forthcoming consultation on the phase 3 duties for categorised services under the Online Safety Act, which include enforcement of terms of service, regulated services are required to enforce the minimum age of access that already exists on most services. The Government says in its consultation that there is “No current minimum age for accessing social media set in law”. What it doesn’t mention is that the standard industry minimum age for access on most services - set out in their terms of service and providing the basis for questions on a user’s age when they set up an account - is 13. The general industry failure to enforce that standard minimum age of 13 - on which the European Commission has recently taken action against Meta under the DSA - has been known about for years. If Ofcom fails to include the enforcement of platforms’ existing minimum age as a requirement in the forthcoming duties on categorised services, then there is little hope for serious action from the regulator if the Government decides to introduce measures either to restrict under-16s from accessing platforms entirely or to introduce age-gating for individual features. Furthermore, in light of Meta rolling back their terms of service at the start of last year, civil society organisations have called for minimum standards for category 1 services’ terms of service, including a “no rolling back” requirement such that terms of service and safety measures must be maintained at an equivalent or greater level of protection than that in place at the time of OSA Royal Assent.

Finally, the introduction of a safety by design code - which is also one of our 10 Point Plan proposals - is directly relevant to the approach the Government is trying to take by identifying individual design features for post-hoc interventions. We explore that in more detail below.

A safety by design code: what it is and how it works

Safety by design has emerged as a central principle in digital regulation, reflecting a shift towards tech accountability that requires digital services to assess and mitigate risks to users from the earliest stages of product development and throughout the entire lifecycle of the product or service. The Online Safety Act (OSA) explicitly references the need for user-to-user services to be ‘safe by design’ on the face of the legislation, in section 1(3). Furthermore, the Secretary of State for the Department for Science, Innovation and Technology (DSIT) sets out safety by design as a key priority in their Statement of Strategic Priorities (SSP) for Online Safety, making clear that Ofcom, in having regard to the SSP, should ensure platforms:

Embed safety by design to deliver safe online experiences for all users but especially children, tackle violence against women and girls, and work towards ensuring that there are no safe havens for illegal content and activity, including fraud, child sexual exploitation and abuse, and illegal disinformation

Whilst this approach is a core tenet of the OSA, and of both Parliament’s and the Government’s expectations of it, Ofcom do not define what a safety by design approach must look like in any of their codes of practice. Furthermore, they have not taken a holistic approach to what this means in terms of the design and operation of services, their systems and processes or their business model, even - as we note above - where specific risks relating to features and functionalities have been evidenced in the risk register and corresponding mitigation measures could already have been included in their children’s code of practice. While there is broad consensus that safety by design involves proactively building protections into systems and designing out risks to ensure user safety, there remains less clarity around the specific measures platforms must adopt and how these principles translate into operational practice.

The Online Safety Act Network, in collaboration with the 5Rights Foundation, Molly Rose Foundation, NSPCC, End Violence Against Women Coalition (EVAW), Refuge, FlippGen, Glitch and the Internet Watch Foundation (IWF), has developed a Safety by Design Code of Practice which seeks to address this gap and which provides a practical overview of safety by design, set within the framework of the OSA and Ofcom’s existing codes and guidance.

This Code of Practice, which will be published later this month and will be submitted separately to Government as part of our formal consultation response, provides detailed guidance for all tech companies to help them understand safety by design and how its principles might be applied in the context of digital services (including but not limited to services currently within scope of the OSA). It also serves as a template for adoption by the Government and Ofcom as a means of delivering on the Act’s requirement, set out in section 1, that regulated services are “safe by design”, and of realising, with no further delay, Parliament’s ambitious intent when it passed the Act.

It is within the Government’s gift to expedite this. Firstly, they can mandate that Ofcom produces a Code of Practice, following the Network’s template, as part of their suite of measures to improve children’s safety online and make the UK the safest place to be online for all users. This could be achieved with just a few technical updates to the existing legislative framework. Ofcom is already required to produce codes to help services fulfil their safety duties (section 41). While the Act specifically requires a code dealing with terrorism and one dealing with child sexual abuse material, s 41(3) specifies that Ofcom should prepare one or more codes proposing measures to satisfy the safety duties - and that relates to the illegal content safety duties and the safety duties relating to the protection of children, as well as the duties on categorised services. Ofcom could already rely on this power to act, and a safety by design code could underpin the requirements of the other codes. To ensure that this happens, and to signal their intent, we recommend that the Government should - with a small, targeted amendment - update the OSA to require the production of a safety by design code by Ofcom. Alongside this, the Government should update the OSA to include a definition of safety by design to provide a clear objective for this requirement. We set out here what that definition might look like:

“For the purposes of this Act, a service is safe by design when it is designed and operated according to the following principles:
(a) that protection from harm related to regulated content is taken into account through the entire lifecycle of the service and the functionalities making up the service, including the following stages: design, development, deployment, management, and retirement;
(b) that protection from harm related to regulated content is taken into account across functionalities and features relating to the creation of accounts, the creation of content, the finding and curation of content, user engagement with content from other users, content moderation and appeals systems;
(c) that a service should seek first to reduce the risk of harm before seeking to mitigate and manage it, with remediation being the option of last resort.”

Towards a more ambitious approach to online safety

There is a clear political consensus building around a safety by design approach, demonstrated in both the Commons and the Lords during the recent passage of the Crime and Policing Act and the Children’s Wellbeing and Schools Act, and set against the context of legal action being taken against services in the US for their addictive design. The Government can, and must, use this consultation and the broader momentum behind a more ambitious and considered approach to hold tech platforms, including AI chatbot providers, to account for the harmful design of their products, as is normal practice for other industries. A piecemeal approach will be out of date long before the legislation to enact it has passed. The Government must take this opportunity to act on the source of the problem and to deliver an outcome that is genuinely ambitious: an online ecosystem which is safe from the start for all users.

Our Safety by Design code will be available shortly.