Collective action for a stronger OSA: a Parliamentary rundown

Last Monday (26 January), we launched our 10 point plan for the Government to strengthen the Online Safety Act (OSA) with a roundtable discussion in Parliament. With MPs and Peers from across both Houses in attendance - along with representatives from nearly 40 of our Network partners - we were delighted by the overwhelming support in the room for our proposals, and the shared desire to work swiftly to put them into practice.

We were pleased to be joined by DSIT Minister Kanishka Narayan MP, who stated clearly that nothing was off the table when it comes to online safety. The Minister identified three areas to which the Government would be paying particular attention: enforcement issues, so-called “legal but harmful” content, and systemic gaps in the legislation. On AI chatbots, he was clear that prompt, urgent action was needed to bring all chatbots into the scope of the OSA. Alongside this, he said that the Government is committed to closing any loopholes in the Act that are correlated with risk.

Our panel of speakers included Maeve Walsh and Professor Lorna Woods of the Online Safety Act Network, who reflected on the Online Safety Act two years on from Royal Assent and the current context that has driven the need for more urgent action. Prof Woods pointed to Ofcom’s failure, in its interpretation of the Act, to compel platforms to act on the risks identified in their risk assessments, and explained how the safe harbour provision, combined with Ofcom’s narrow approach to its codes, has further exacerbated this. She also set out very clearly the ongoing need for an overarching duty of care to be placed on platforms.

Colette Collins-Walsh, Head of Policy and Public Affairs at 5Rights Foundation, outlined how the failure to place an underlying safe design obligation on platforms had allowed Grok to generate thousands of non-consensual intimate images of users. She spoke about the need for platforms to be compelled to take a "safety by design" approach, one that is consistently understood via a clear definition in the OSA and supported by a Code of Practice.

Danny Stone, CEO of the Antisemitism Policy Trust, spoke about the scale of antisemitic content on small but high-harm platforms, most recently in the wake of the Manchester Synagogue attack. He highlighted that, despite best efforts in the House of Lords during the passage of the Online Safety Act, these small but risky platforms are not required to meet the strongest duties set out for category 1 services. As such, he urged the Government to help deliver Parliament’s intent that services can be categorised based on size or risk and bring small but risky platforms into category 1.

Lastly, Hanno Fenech from the Center for Countering Digital Hate (CCDH) discussed the critical need to establish minimum standards for platforms’ Terms of Service (ToS), alongside a “no rolling back” clause, in light of platforms such as Meta removing vital policies on hate speech over the last year. Such developments have led to an avalanche of harmful content that particularly impacted women and minoritised communities, and demonstrate a concerning erosion of fact-checking and trust and safety teams on major platforms.

Parliamentary support

The discussion that followed reflected the wealth of expertise and experience in the room, with many Parliamentarians all too familiar with an Online Safety Act that took five years to reach Royal Assent. As such, fierce advocates for online safety in the room were clear on the need for the Government to use its powers to strengthen the OSA as a matter of urgency, seizing on the growing public demand for tech platforms to be held to account. Attendees were also quick to highlight the important role Ofcom has to play in properly enforcing the existing legislation, and to ask whether it could be compelled to take a more ambitious approach.

In light of the recent Grok crisis, which saw thousands of women’s and girls’ images turned into sexualised deepfake content without their consent, there was an urgent desire to tackle the abuse of women and girls online more effectively. Suggestions included the introduction of a third-party advocacy body or complaints mechanism to provide individual user redress in cases of intimate image abuse, as well as support for the stay-down provision set out in our 10 point plan. Alongside this, there were reiterated calls for Ofcom’s VAWG Guidance to become a Code of Practice; this would work in conjunction with the Plan’s recommendations to remove the “clear and detailed” constraints on code measures and the Act’s safe harbour provision, so that any change in statutory status would preserve the ambition of the guidance.

There were thoughtful contributions on the role of international collaboration in dealing with harmful AI technologies, and on the need to strengthen societal harm duties in the OSA. On regulatory co-operation, there was agreement at the roundtable that the ICO’s role in relation to many of the harms linked to misuse of personal data on social media is crucial and has, to date, been under-powered; stronger co-operation between Ofcom and the ICO, in particular, is urgently needed.

The mood of the room was clear: the time for discussion was over. As Lord Clement-Jones put it, the question is “not what the merit of the 10 point plan is but how we put it into practice”. With numerous pieces of legislation currently going through the Commons and the House of Lords, the Online Safety Act Network stands ready with our amendment sheet to work with Parliamentarians to progress this plan, and ensure the Online Safety Act is strengthened to fulfil its original ambition: to make the UK the safest place to be online.