What the King didn't say: the online safety-shaped hole in the Government's legislative agenda

This week, in her explosive resignation letter, the Home Office Minister Jess Phillips revealed what civil society campaigners have known for months: that Keir Starmer’s No10 has been the blocker to any proposals to bring in new legislation on online safety, no matter how urgent or impactful. Referring to the implementation of new technology to stop children taking nude images of themselves, Phillips wrote:

“We could stop this abuse. It has taken me a year to get you to agree to even threaten to legislate in this space. Not legislate, just threaten. This is the definition of incremental change. Nothing bold about it.”

The Guardian subsequently reported on the frustrations of child safety campaigners at these delays. The King's Speech does not contain any of these measures.

“How many children were left without a safety net in the time we dilly dallied and worried about tech bosses?”, Phillips asked.

How many more will be waiting now?

Not legislating, not threatening

This is not the only pressing online safety or AI issue missing from the legislative programme. Indeed, “online safety” is not even mentioned in the 129-page briefing pack that accompanies the text of the Speech; “artificial intelligence” is mentioned once, in the context of the new “Regulating for Growth” Bill, through which the Government will introduce a “suite of measures” to ensure that regulators “must prioritise growth more clearly than they do today” and will “actively support innovation, reinforced by clearer strategic steers from government to individual regulators”.

Good regulation to protect consumers does not have to be a brake on innovation. But “actively supporting innovation” without enhanced consumer safety regulation alongside it is a potential recipe for disaster - and is likely to cost society more in the long term than stronger regulation would cost economic growth. Indeed, DSIT’s updated impact assessment for the OSA estimated that preventing just 1.3% of online harm would offset the costs of regulation to business and government.

Technology, by its nature, evolves fast, and UK citizens will only be protected from emerging risks and unintended consequences if regulation keeps pace. Parliament gets this. Successive Select Committees - including the Science, Innovation and Technology Committee, the Lords Communications and Digital Committee and the Foreign Affairs Committee - along with the Speaker’s Conference, have detailed some of the issues that need intervention in their recent inquiries and have recommended legislative actions. The current Government has rejected all of them, even though, taken together, they would create a significant legislative package.

Beyond Parliament, the first report from the Southport Inquiry recently called for the Government to extend the Online Safety Act (OSA) to allow coroners to access the social media activity of perpetrators, and trailed a number of online safety-related issues, including the use of VPNs, that it would be looking at in its second-stage work. (While the Government in its King's Speech briefing proposes that the new National Security Bill will respond to issues arising from the Southport tragedy, its commitment to “criminalise the creation and sharing of the most harmful violent material to stop the spread of content that glorifies, trivialises, or normalises serious violence” appears to cover existing offences and goes no further in addressing the Inquiry’s findings or the measures recommended by the SIT Committee, amongst others. Moreover, if we have learnt one thing from the last decade of debates and delivery on online safety legislation, it is that the criminal law has limited impact on the proliferation of online harms; creating more new offences without action to address the business model of the platforms, which powers the spread of illegal content, will achieve little.)

The current Secretary of State for Science, Innovation and Technology, Liz Kendall, also seems to understand the need for an iterative approach to regulation. She told the Guardian earlier this year:

“Every year MPs have a finance bill to deal with the budget. In a world where technology is developing so quickly, we’ve got to be prepared to look at this much more, much more quickly. We can’t have a situation where you only legislate once every eight years to deal with some of these issues.”

One can only assume that the conversations she has had about this with Starmer’s Downing Street mirror those that Phillips had. Notably, Kendall's predecessor - Peter Kyle - did not at any point in his tenure suggest the laws he was responsible for were in any way inadequate. Quite the opposite: he claimed that the Big Tech companies should be treated like “nation states” and approached with "humility" and was more than happy to let Phillips face intolerable online abuse from Elon Musk on X without intervening, later arguing that Musk was someone the Government should “respect and engage with”.

Creating the safety net

Across civil society and academia, there is also a very long, well-evidenced list of specific recommendations to strengthen the OSA or fill gaps in its protections, which have been presented to the Government repeatedly by multiple organisations and campaigners. Even a narrowly scoped Bill to bring these forward would be a substantial legislative vehicle; nearly three years on from the Act coming into force, it is entirely justifiable for Parliament to revisit and improve it.

Components of this package might include:

  • Urgent amendments to strengthen the Online Safety Act to make it work better and improve its enforcement, as per our 10-point plan. For example:
    • The insertion of an overarching statutory duty of care so that companies can be made to foresee and mitigate all reasonable risks to all people across their entire web property, not just bits of it: a simple, future-proofed measure.
    • Delivering on the Act’s ambition to make regulated services “safe by design”: a definition needs to be inserted into the Act to make clear what this is and how it works, along with a requirement for a cross-cutting code of practice to sit beneath the existing content-based codes and bring it into operational force. We will be publishing more on this next week.
    • Stronger protections for adults, not just in relation to illegal content and criminal activity. As algorithmic media (AI, social and search) takes over as the dominant source of information, we need to protect the social goods in our information supply chain that a healthy, informed democracy depends on, and bring in protections against design-based harms - such as addictive design and emotional manipulation - for adults too.
    • The removal of the “safe harbour” provision, under which Ofcom’s codes of practice act as a ceiling rather than a floor, along with the insertion of a requirement that regulated services must take steps to address all the risks identified in their risk assessments. The “clear and detailed” and “technically feasible” criteria, which have also limited Ofcom’s selection of measures for its codes, need to be removed.
  • Specific powers to short-circuit lengthy processes when there is an urgent, high-harm risk from individual platforms (such as suicide sites) and to compel immediate takedown of content related to priority offences. The legislation needs to be clear that Ofcom can use its power ‘to direct’ companies rapidly in an emergency - for example, with regard to deprioritisation and account suspension as well as takedown orders - with safeguards such as a company’s right to appeal against those measures or a requirement on Ofcom to apply to the Court (for emergency powers and business disruption). In addition, specific crisis rules and protocols need to be introduced to compel action at times of public emergency. There is no mention of such measures in the proposed scope of the National Security Bill.
  • A reorientation towards victims: many organisations have called for a clear alternative dispute resolution mechanism or route for redress for UK users victimised by harmful content or conduct online. A code of practice to protect candidates and campaigners from online abuse during elections (as recommended by the Speaker’s Conference and developed by the Network and key partners) is urgently needed, but is not currently compatible with the scope of the Representation of the People Bill, despite the fact that the King’s Speech briefing repeats Government claims that the Bill will address “harassment and intimidation” of electoral officials and campaigners. Other campaigners have called for a clear statement in Section 1 of the Act that Ofcom’s primary objective is to reduce exposure to online harm, with a parallel statutory obligation on Ofcom to prioritise safety over industry growth; the regulator should also face a new overarching duty to deliver measurable reductions in young people’s exposure to harm. Outcomes from the Law Commission’s current work on the introduction of consumer class actions may also be relevant here.
  • Action on app stores: this is unfinished business from the OSA; there is a requirement on Ofcom to produce a report, but this is not currently expected until early 2027. Legislation is needed to bring in a new statutory Code of Practice for app stores and device operating systems, without waiting for the publication of the report. This should be seen as a complement to, not a substitute for, effective age assurance at platform level.

Harms arising from unregulated AI also need to be urgently addressed. Even if the Government wants to resist a comprehensive “AI Bill” covering all potential aspects of its deployment and use, its decision to leave ongoing regulation to individual sectoral regulators - much of it on a voluntary basis - will not provide the comprehensive protections that consumers, children and vulnerable adults might expect in other areas of product safety. As noted above, the focus of the “Regulating for Growth” Bill potentially works against this basic set of consumer expectations. Legislative actions needed here include:

  • Comprehensive regulation for AI chatbots, to rectify the fact - as we set out here - that the Government’s recent amendments leave many gaps in protection. Forward-looking guardrails for agentic AI are also urgently needed.
  • Risk assessment and mitigation obligations should be placed on providers of products containing AI systems, and the Product Regulation and Metrology Act should be updated to cover digital as well as physical products.
  • Protection for citizens from the AI-exacerbated spread of misinformation, as per the recommendation from the SIT Committee report last year, with generative AI platforms brought into line with other online services that pose a high risk of producing or spreading illegal or harmful content.

There are also wider, systemic and structural issues which - without action to address them - limit the ability of the UK Government to adequately protect its citizens, its society and democracy, and its information ecosystem. These include:

  • Action to address societal and information ecosystem threats: bringing in foreign ownership rules for social media companies and other parts of the digital information ecosystem and a fit and proper ownership test; reviewing the role of public service broadcasting, and due prominence of quality news; putting the National Security Online Information Team on a statutory footing.
  • Preventing harms arising from the profit-driven business model: this might include a conduct-based set of measures to address the behaviours and incentives that apply to tech managers and shape the commercial and product strategy of regulated firms, as we see in financial services; and the introduction of “Know Your Customer” requirements - again as per financial services - to prevent the use of anonymous accounts or bot networks to cause individual and societal harm, or to carry out fraud.
  • Action on online advertising: successive governments have kicked this into the long grass; the Online Advertising Taskforce has led to no meaningful change and the Online Advertising Programme has never been implemented. This is a significant omission given the role of the advertising-based business model in fuelling online harms.
  • A stronger framework for regulatory cooperation and enforcement across the digital and online sphere that is fit for the challenges ahead.

Yet, amongst the 35 Bills that the Government’s PR claims amount to an “ambitious programme”, there is not one that provides a vehicle to address these issues, nor any mention in the narrative delivered by the King that these issues are a priority. If anything, with its “Regulating for Growth” Bill justified in the briefing pack by the fact that “nearly one third of UK AI start-up leaders are considering relocating overseas due to regulatory complexity and capital constraints”, the prospects for improved user safety (such as measures to prevent harms that some of these start-ups may well be exacerbating in an unregulated environment) may now actually be worse.

Nothing bold about it

The Government is in control of its legislative agenda and the King's Speech sets out its parameters. Parliamentary time is at a premium, and there is no denying that the scale of the problems the country faces outstrips the time available to resolve them in this session. But failing to provide any legislative vehicle to address even a fraction of the issues above is not good government. Trying to block off a route for their resolution will only lead to significant time, resourcing and political challenges further down the line when the Government is forced into action - either by a crisis or by another concerted campaign. This approach does not make good laws.

The chaotic scramble at the end of the last Parliamentary session to shoehorn amendments into unsuitable legislative vehicles underlined that, as did the lengthy ping-pong stages for the Crime and Policing Act and the Children’s Wellbeing and Schools Act, where - in the absence of any Government action - Peers attempted to widen the scope of the Government’s ambitions on child protection and AI chatbots. Even where the Government had accepted the well-evidenced recommendations of a comprehensive, independent review on pornography, which had cross-party support, it forced its author - Baroness Bertin - to lay her own amendments and, in many cases, campaign all over again before they were accepted. The subsequent climbdown on what should have been no-brainer issues, like banning adults from portraying children in porn or prohibiting so-called “incest porn”, only made the Government look all the more weak and foolish, presiding over an incoherent package of piecemeal solutions - such as the proposals on chatbots via the Crime and Policing Act, which only cover illegal content.

This failure to take ownership of the solutions upfront ultimately leaves Government Ministers and officials on the back foot, having to react to pressure from Parliament and campaigners with piecemeal, standalone proposals that - as we have highlighted with AI chatbots - open up even more gaps in protections and leave the legislative landscape messy and confused. This does nobody any favours: campaigners and advocates waste time and resources to deliver the “incremental change” Phillips highlighted; Parliamentarians, already stretched across multiple urgent legislative priorities, scramble to eke out scrappy concessions; the Government burns through civil servants’ time trying to avoid defeat rather than working with advocates to construct solutions; and businesses and regulated services have no idea what the combined regulatory and legislative interventions will add up to, nor how or when they will need to comply.

The Government’s failure to include any online safety or AI-specific legislation in its current programme means that this will happen all over again. As Jess Phillips said in her letter, there is nothing bold about it.