The Online Safety Bill completed its Parliamentary passage in September 2023 and was granted Royal Assent on 26 October 2023. The Online Safety Act 2023 sets out a broad, if complex, framework for regulation. Many of the provisions came into force as soon as the Act was passed (see our Commencement explainer here); it will then be for Ofcom – as the designated regulator – to fill in the detail of the implementation regime through a swathe of consultations, including on risk assessments, codes of practice and guidance. Ofcom has set out how it intends to do this, with a three-phase consultation process starting in November 2023.
This blog post sets out the main provisions of the Act and what happens next, along with a reminder of its journey to this point.
What the OSA Will Do - 12 Point Guide
- Ends the era of self-regulation – companies can no longer choose what to do and mark their own homework.
- Puts a powerful, independent regulator (Ofcom) in charge of overseeing a risk management regime for all social media, with bigger or riskier companies having greater responsibilities.
- Requires companies to understand the risks presented by the design and functionality of their service – both in how it spreads harmful content and in the design itself – and mitigate the most serious: they can no longer live in denial or pretend everything is fine.
- Hands strong powers to Ofcom to get to the truth of what is happening on services and improves transparency for users – no more dissembling and bullshit.
- Introduces a powerful punishment and sanctions regime that bites unambiguously on companies whose social media is used in the UK no matter where they are in the world.
- Holds companies to account if they do not prevent the most harmful content from reaching people, including:
- Child abuse and terrorism content
- Fraudulent or harmful adverts;
- Illegal content, including animal cruelty; and
- Facilitating suicide and self-harm; eating disorder behaviours; and, for children, pornography.
- Introduces new criminal offences, including to prevent the encouragement or facilitation of self-harm, intimate image abuse, cyber-flashing and epilepsy trolling.
- Creates strong protections for children from predatory adults, harmful content and badly designed services that set out to manipulate or exploit them; and prevents children from accessing services that are not designed for them.
- Gives bereaved parents the right to access their child’s data should a tragedy occur.
- Delivers greater protections for women and girls, who are disproportionately affected by harms online, through new criminal offences and Ofcom guidance to services.
- Empowers users to opt out of some of the most harmful material on the largest or riskiest platforms, including material from anonymous accounts.
- Protects freedom of speech and privacy.
What Happens Next?
Many of the provisions of the Act (as listed in section 240) came into force on the day it received Royal Assent. Ofcom's first batch of consultations – including on the illegal content and CSEA codes of practice and its consultation on risk assessments – will be published on 9 November. The Act sets out (at section 194) an 18-month timescale for the publication of many of the other codes and guidance that will inform the set-up of the regime. For those codes to come into force, the Government has to bring into force the relevant provisions that introduce the duties to which they apply.
Secondary legislation – such as that to set the thresholds for categorisation of services – will flow through from summer 2024 onwards, once the consultations and subsequent advice to Ministers have concluded. Following a concession in the House of Lords, the Government has committed to ensuring that the relevant Select Committees have a full opportunity to scrutinise and respond to the development of these elements of the regime. It is, however, unlikely that the regulatory regime will be implemented in full much before the end of 2025.