Online Safety Act Network

Dedicated discussion forums and the Online Safety Act – the case of suicide


Media coverage over the last year has detailed the presence of a number of forums accessible from the UK and the tragic fact that a number of individuals, many of them children, have taken their own lives based on information or encouragement from these sites, or by using substances sourced online. Many, if not all, of these sites are small and based overseas.

In this explainer, we consider whether the Online Safety Act improves the ability of Ofcom to take action against these sites. We consider:

  • The geographical reach of Ofcom’s powers and the jurisdiction of the OSA

  • The illegal content duties and the duties relating to harms to children, including Ofcom’s consultation materials on its proposed approach to the criminal offences relating to suicide and self-harm as well as content relating to both issues

  • How the categorisation of services under the Act will affect this issue

  • Ofcom’s enforcement powers and their usability in this context

  • Whether there are other mechanisms which could be deployed in relation to these sorts of sites.


Note: this post does not consider the role of search engines, which Ofcom has found can act as “one-click gateways” to self-harm and suicide content.

A Brief Overview of the Online Safety Act’s Structure

The Act has three main elements:

  • introduction of a regulatory regime for search and social media enforced by Ofcom (Part 3)
  • age verification obligations for pornography providers also enforced by Ofcom (Part 5)
  • introduction of criminal offences (Part 10), including an offence of encouraging or assisting serious self-harm (s 184).

It is the Part 3 regime that is relevant here.

The regulatory regime treats search and social media slightly differently, though both types of service are under duties with regard to illegal content and, where the service is likely to be accessed by children, children’s safety duties in relation to content harmful to children. Both types of service are under obligations to carry out risk assessments as well as undertake mitigation duties. Specific duties apply in relation to types of content identified by the Act as priority. There are further ancillary and cross-cutting duties, all of which are set out in this table. Ofcom has enforcement powers in relation to these duties.


Jurisdiction

The first question is whether overseas forums (which otherwise fall within the definition of a user-to-user service in the Act – s 3(1)) fall within the regulatory regime’s scope (this is a different issue from the territorial extent of the Act – see Explanatory Notes, para 40). The answer is, “probably yes”.

The Act provides that a service will be a “regulated service” if it “has links with the UK” (s 4(2) OSA). Two further sub-sections deal with the question of whether a service has such links. The first group of circumstances where such links exist are based on whether there are a significant number of UK users, or where the UK is a target market for the services (s 4(5) OSA). For small, dedicated topic sites, neither of these might be relevant. Section 4(6), however, might apply.

Section 4(6) provides that when a service is accessible from the UK and “there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the United Kingdom” presented by user-generated content on the service, then the service has links with the UK for the purposes of the Act. Note that there is no quantitative threshold for the number of users affected; rather, the focus is on the severity of the harm to users. The provision seems designed to capture high-risk services which might not otherwise be caught by the Act. Nonetheless, it is likely that the risk of severe harm should be more than hypothetical. Ofcom has provided a checker tool for services to understand whether or not they are caught, and it seems that providers should be ready to explain their decisions on whether they fall within the regime. Sites on which users are encouraged to take their own lives, or on which poisons are sold, would seem capable of satisfying this harm threshold, bringing such sites, in principle, within Ofcom’s remit.

Illegal Content Duties

The first question is whether there is relevant content that might come up in a risk assessment and trigger the illegal content duties. Two sets of content immediately seem relevant:

  • content linked to section 2 Suicide Act 1961 (assisting suicide); and
  • content linked to s 184 Online Safety Act (encouraging or assisting serious self-harm).

In its draft illegal content judgements guidance on the priority offences, Ofcom notes that there may be political discussion around assisted suicide, which is not unlawful, and that some portrayals of or reporting on suicide are also lawful, as are jokes in poor taste (Annex 10, A12.12-13). In its consultation (para 26.275), it suggests, however,

“that where specific, practical or instructive information on how to end one’s life is posted to a forum or within a chat in which suicidal ideation is discussed, it may be reasonable to infer that intent to assist (attempted) suicide exists by virtue of information having been posted. Where an encouragement to end one’s life is posted in response to what appears to be a credible threat by another user that they are about to take their own life, it may also be reasonable to infer intent”.

It does not seem that Ofcom requires this information to be addressed to a specific person. This seems broadly in line with the CPS guidance, which states that “[i]n the context of websites which promote suicide, the suspect may commit the offence of encouraging or assisting suicide if he or she intends that one or more of his or her readers will commit or attempt to commit suicide” (para 20). As a result, these sorts of content – even posted without a specific addressee – would likely fall within the definition of priority illegal content for the purposes of the Act.

The new s 184 OSA offence (outlined here) is not a priority offence, but it is still a relevant offence for the purposes of triggering some of the illegal content duties (see s 59 OSA). The offence was introduced as part of the revision of the regime. The Secretary of State noted when the bill was reintroduced in November 2022 that:

“I am aware of particular concerns around content online which encourages vulnerable people to self-harm. While the child safety duties in the Bill will protect children, vulnerable adults may remain at risk of exposure to this abhorrent content. I am therefore committing to making the encouragement of self-harm illegal.”

Note that the Criminal Justice Bill, had it been enacted, would have repealed this offence and replaced it with a broader one (Explanatory Notes to the Criminal Justice Bill, para 47); the Criminal Justice Bill fell as a consequence of the General Election being called. A similar distinction between legal and illegal content is flagged by Ofcom in its draft guidance here. The consultation notes that self-harm is not illegal, nor is talking about it (Annex 10, A12.16). Ofcom suggests that, while blackmail or expressly egging someone on to carry out an act of self-harm would be caught, glorification (whatever that may mean) of self-harm would not be (Annex 10, A12.23) – glorification may instead be picked up as content harmful to children (see below). Note also that the offence requires “really serious injury”, although this need not reach the threshold of a threat to life and can include psychiatric as well as bodily harm (Annex 10, A12.18).

These are reasonably high thresholds – but this is the consequence of using the criminal law as a benchmark for regulatory action. There is also good reason not to overstretch the categories and inadvertently catch recovery or support material – something which many of the suicide prevention charities have emphasised throughout the development of the Act.

In terms of assessing whether content is illegal, no account need be taken of “whether or not anything done in relation to the content takes place in any part of the United Kingdom” (s 59(11)). The Explanatory Notes explain the significance of this (EN, para 336):

“content does not need to be generated, uploaded or accessed (or have anything else done in relation to it) in any part of the United Kingdom to amount to an offence under this provision. This is the case regardless of whether the criminal law would require any relevant action to take place in the United Kingdom (or a particular part of it).”

So the fact that relevant content may be created, uploaded or shared abroad does not take it outside the scope of the regime.

The two offences are treated differently by the Act. Under section 10, which contains the user-to-user illegal content safety duties, a regulated service is required to take proactive action to minimise users’ exposure to priority illegal content, including through systems to try to prevent users encountering it (s 10(2)(a) OSA) and through limiting the time for which it is available (s 10(3)(a) OSA). Section 10(2)(b) requires systems to effectively manage and mitigate the risk of the service being used for the commission or facilitation of a priority offence. It has been noted that paedophile manuals could in themselves fall within this provision; it is not clear whether suicide guides would (the commission of suicide is not an offence). Ofcom, however, takes the view that “Content which provides specific, practical information on suicide methods is the content most likely to reach the threshold for illegality under this offence” (Annex 10, para A12.8), which suggests the question may not need to be answered: such guides could be caught as priority illegal content in any event.

Additionally, service providers are obliged to mitigate and manage the risks of harm to individuals from illegal content (whether priority or not) identified in the most recent illegal content risk assessment (s 10(2)(c) OSA). Terms of service relating to these measures must be enforced consistently (s 10(6) OSA). Providers must also operate a system allowing for the prompt takedown of relevant content once on notice – whether through their own investigations or when notified of the content by someone (s 10(3)(b) OSA). Ofcom’s proposals for what this would mean in practice are here.

It is hard, however, to envisage that a pro-suicide site would want to comply with these obligations as it seems likely that a significant proportion of the hosted content would be caught (the situation may be somewhat different for pro-choice sites which host discussions about suicide/self-harm or provide support to those suffering, or general sites which host suicide discussions among many other topics).

Harm to Children

For sites accessible by children, there are further rules found in section 12: the children’s safety duties. Section 61 identifies two types of priority content in relation to these duties: “primary priority” and “priority”. Under section 12(3), service providers are under an obligation to have a system to prevent anyone under 18 from “encountering by means of the service, primary priority content…”. For this purpose, section 12(4) requires the service provider to use age verification or age estimation that is, according to s 12(6), “highly effective”. Under s 61 OSA, content which encourages, promotes or provides instructions for suicide, for an act of deliberate self-injury, or for an eating disorder constitutes primary priority content. The thresholds for relevant content here are lower than for the illegal content offences, offering a wider scope of protection for children. There are no existing legal definitions for these categories of content and it is therefore the responsibility of Ofcom to provide guidance on their meaning (s 53 OSA).

While Ofcom recognises the importance of recovery content, the consultation also notes that some of it could nonetheless be harmful, especially to children who are in crisis (Vol 3, para 8.3.7). When assessing content, the fact that content might also be described as artistic (music, poetry or artwork) does not mean that it is not harmful. Moreover, there is no need to show intent to promote, encourage etc suicide (Vol 3, para 8.3.12). The consultation gives examples both of content that would constitute harmful content and of content that would not satisfy the definition in the OSA. Examples of content that would not be caught include discussions of healthy coping mechanisms, the NHS website, academic articles on suicide prevention methods or suicide rates, and discussions about belief in or hope for an afterlife.

Ofcom notes in its consultation that there is potential overlap between non-criminal suicide content and other forms of priority content identified by the Act – for example dangerous stunts or harmful substances content (Children’s Consultation Vol 3, para 8.3.5). Where content falls within both the primary priority and priority categories, presumably the more protective duties apply. The categories of content harmful to children and illegal content are also not mutually exclusive, so overlap between the two basic types of content is possible; in such cases, presumably both sets of duties apply. Identifying something as potentially harmful to children would not reduce the obligations on a service under s 10 if that content were also illegal.

In contrast to the position for illegal content, platforms are not required to have systems to remove content harmful to children; the obligation is to have mechanisms in place to prevent children seeing it. Adults are not necessarily protected by this, though some mechanisms to protect children – eg banning certain types of content on a service – may have an impact on the content seen by adults. For platforms on which a majority of content is primary priority content (and that could potentially also include illegal content), Ofcom is proposing that the entire site should be age-gated. Suicide forums could be caught by this proposal. This approach would clearly protect children, and would mean that there is a single clear-cut obligation for Ofcom to enforce against. It also means, however, that there is no regulatory pressure towards general measures (such as banning content – or sub-types of content) which could also benefit other vulnerable users. This may be of less significance in this context, where the motivation of the service is at odds with the idea of regulating this sort of content.

Over-18s

“Category 1” services are subject to obligations to provide user empowerment tools (s 15 OSA); this is the “third shield” to protect adults (see HCWS397). (See our blog post on the categorisation duties for more detail.) These tools should give users greater control over the content they engage with (EN, para 128). The tools are, however, only required in relation to a specified list of content types – but this list includes content of the same types that constitute primary priority content harmful to children. While the Government’s view was that this would give vulnerable users “greater choice over how they interact with” the types of content listed in the Act (s 16 OSA), this approach has been described as unworkable and insufficient (see here, for example, on incels and suicide).

During the passage of the Bill, both in the Commons (discussion on amendment 43 here, here, here and here, for example) and the Lords (see, for example, here and here), there was much debate about whether these tools should be engaged by default or not. The solution chosen was to make users themselves choose the default. The concern, of course, is that those experiencing suicidal ideation may well choose not to use these tools. Category 1 service providers are also required to implement their terms of service, so, as the Government’s Explainer notes, “[i]f a service says they prohibit certain kinds of suicide or self-harm content the Act requires them to enforce these terms consistently and transparently.” Services can, however, only act in accordance with those terms of service (ss 71 and 72(3) OSA). There is no set minimum for terms of service (outside the obligations relating to the illegal content and content harmful to children duties), so there is no requirement that a service’s terms of service deal with (non-criminal) suicide and self-harm material. While these provisions may give extra protection, it is far from certain that suicide forums would fall within Category 1, given the emphasis placed on size for the purposes of that categorisation. Ofcom has suggested that only a very small number of services will be Category 1.


Enforcement

Enforcement does depend on Ofcom being aware of the problem in the first place, and there is a question as to whether it is realistic to expect Ofcom to be aware of all small overseas sites that fall within the OSA regime. In Ofcom’s Illegal Content consultation, the enforcement strategy did not go into particular detail on this; the Children’s Consultation merely cross-referred back to the Illegal Content documentation. It seems likely that much enforcement will be reactive, once there have been complaints about a particular service – though this may come late in the day. Moreover, the standard approach to enforcement envisages a service that is in principle willing to comply, and so provides for a gradual increase in the severity of enforcement mechanisms. Attempting to block services, the most extreme remedy, following this model would mean enforcement taking place over many months. Ofcom does have what might be termed “emergency” powers, but their effectiveness depends in part on Ofcom’s willingness to activate them.


Ofcom has enforcement powers in relation to all the duties, and these powers do not distinguish between the duties. There are information gathering powers to support it in this. It also has powers of entry and inspection, including without a warrant in certain circumstances (albeit with seven days’ notice) – though this power would likely be of limited utility in relation to overseas providers. Failure to comply with Ofcom in its information gathering can lead to criminal sanctions.

Notices of Contravention and Fines

The Act envisages a gradual enforcement process, starting with notification of a problem and giving the service provider a chance to deal with the issue, or to make representations as to whether there is an issue, before Ofcom confirms that a breach has occurred or imposes sanctions. Initially, Ofcom will issue a provisional notice of contravention (s 130 OSA) to seek the provider’s views; it may then issue a notice of contravention – a confirmation decision (s 132 OSA et seq). Notices must set out which enforceable requirements (as set out in the legislation) need to be complied with. Ofcom may issue fines against providers of up to £18 million or, if higher, 10% of global annual turnover for breaches of the OSA (although fines must be appropriate and proportionate to the provider’s non-compliance – see Sch 13).
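The interaction between the fixed and turnover-based caps can be sketched as follows. This is a minimal illustration of the “£18 million or, if higher, 10% of turnover” ceiling only; the function name and the turnover figures are hypothetical, and any actual penalty must still be appropriate and proportionate under Sch 13.

```python
def max_penalty_gbp(global_annual_turnover_gbp: int) -> int:
    # Statutory ceiling on an OSA fine: £18m or 10% of global annual
    # turnover, whichever is higher (illustrative sketch only).
    return max(18_000_000, global_annual_turnover_gbp // 10)

# A small provider with £50m turnover: 10% is £5m, so the £18m figure applies.
# A large provider with £1bn turnover: the ceiling rises to £100m.
```

In other words, the £18 million figure operates as a floor on the ceiling: for any provider with global annual turnover below £180 million, the fixed sum is the operative maximum.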

Business Disruption Measures

In respect of ongoing or serious cases, Ofcom may apply to a court for “business disruption measures” (ss 144-148), following the usual rules of civil procedure. This means that Ofcom can apply to the courts for an order requiring a third party who provides an ancillary service (i.e. a service that facilitates the provision of a service regulated by the OSA) to withdraw or restrict that ancillary service where a provider has failed to comply with an Ofcom order or to pay a penalty under the OSA.

This set of powers could apply to failure to comply with the illegal content duties, the children’s safety duties, or the supporting and cross-cutting duties. The business disruption orders fall into two categories: service restriction orders and access restriction orders – and for both there can be interim orders. An ancillary service is a service that facilitates the provision of the regulated service – examples include payment processing services or an ad server (a non-exhaustive list can be found at s 144(12)). Access restriction orders could be directed, for example, at an internet access provider or an app store. The purpose of these measures, according to the Government, is to prevent a service “from generating money or being accessed from the UK”.

These measures are intended to be a last resort (EN, para 659) and Ofcom stated (Annex 11, A9.15) that it is:

“unlikely to find it appropriate to apply to the Courts for business disruption measures as a matter of routine where we have identified failures, or likely failures, to comply with enforceable requirements.”

This point is recognised in the drafting of the Act; one of the possible routes to applying for business disruption measures is that the provider has failed to comply with a confirmation decision (s 144(3)(c)(i)) or failed to pay a penalty (s 144(3)(c)(ii)).

While the assumption seems to be that these orders will be used after a confirmation notice has been issued and not complied with, the OSA does foresee the possibility that disruption orders can be used where “the provider would be likely to fail to comply with the requirements imposed by a confirmation decision if given” (s 144(3) – the grounds in s 144(3) applying to access restriction orders by virtue of s 146(1)(a)). A court order might thus provide a shortcut in the process where there is evidence that the service provider would not or could not comply. Even absent this sort of concern, Ofcom may act without having given any of the provisional or confirmation notices where those other measures would be unlikely to be sufficient to prevent significant harm arising to individuals in the UK (ss 144(3)(c)(iv) and 146(1)(b)(ii) in respect of a service restriction order and an access restriction order respectively).

Ofcom noted (Annex 11, A9.16) that when considering whether to apply for such measures, it would:

“consider the level and degree of any risk of harm to individuals in the UK from the failure we have identified, including whether there is actual or potential harm to children …”

Ofcom specifically identifies:

“where the level of risk of serious harm to UK users as a result of the failure is such that it would not be appropriate to wait to establish the failure before applying to the Court.”

This broadly paraphrases the terms of the OSA and does not really answer the question of what serious harm is here, or how many users would need to be so harmed. Given the extremity of the threat posed, sites where suicide is encouraged, or where the means to carry out such acts are sold, could be seen as posing a real risk of serious harm to a number of UK users; reliance on these powers might well be appropriate.

The business disruption measures are similar to those proposed under ss 21 and 23 Digital Economy Act 2017 (DEA17), although, as the BBFC noted, there was no requirement in the DEA17 for court orders, and the inclusion of such a requirement “could limit speed of enforcement”. Ofcom has suggested that it:

“would engage with the third parties who may be the subject of a business disruption measure before making an application to Court for such an order”

though it did preface this with “where possible” (see also draft Enforcement Guidelines, Annex 11, para A9.19). Presumably the seriousness of the harm would be a factor in whether Ofcom would take the time to engage with those third parties. The Public Accounts Committee Report on Preparedness for Online Safety Regulation noted that:

“Ofcom lacks clarity about how it will identify and respond to non-compliance and when to use its enforcement powers. Ofcom estimates that there could be 100,000 or more service providers subject to regulation, with most of these being small businesses and / or based overseas. Ofcom will rely on automated processes to identify and collect monitoring data on the compliance of the vast majority of service providers but does not have these processes in place yet.”

Given the ideological motivation of at least some of these niche sites, it does not seem likely that they would welcome regulation, especially in relation to content that is not contrary to the criminal law. On this basis, it seems likely that Ofcom would need to rely on access orders. Access orders, however, have the potential for significant impact on freedom of expression, especially if used against large platforms hosting a range of content. One of the notorious suicide sites was for a while found on a subreddit; blocking all of Reddit for a single, very harmful subreddit might well be seen as disproportionate, resulting in significant “collateral censorship” (Yildirim v Turkey, App No. 3111/10). Service restriction orders, however, would not have this side effect and might be more appropriate for general sites. A specialised site – where much of the content raises similar issues – might not give rise to the collateral censorship problem at all.

Some practical issues might also arise in relying on access orders, specifically around domain hopping – when sites move to a different URL – or changes of corporate name. Ofcom may need to consider the lessons learned about blocking multiple sites and dynamic blocking in intellectual property enforcement, and the extent to which they can be transposed to this context.

Other Mechanisms

Ofcom seems to have powers that allow it to tackle the issue of small harmful sites, even those based overseas. The issue is the speed at which Ofcom will act. In the context of pro-suicide forums, it would seem that the level of harm that could eventuate is such that the threshold for use of the “emergency powers” would be satisfied. This – in the end – is a decision for Ofcom.

In addition to this, there has been industry and third-party activity – for example, developing non-platform user empowerment tools (see e.g. here) – but also activity at different levels of the internet distribution chain. Each device connected to the internet has a unique numerical (or alphanumeric) address which other machines use to find it. These addresses are converted to the domain names we see. Domain names are organised into different levels, with the root domain at the top. Below this are top level domains (TLDs), which are either country code (ccTLD) (eg dotUK) or generic (gTLD) (eg dotCOM). These TLDs have subdomains (second level domains), which may in turn have their own subdomains. Second level domains are available to purchase from the registrar for the TLD.
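The hierarchy described above can be illustrated with a short sketch. This is a simplified illustration only: the hostname and the small ccTLD/gTLD sets below are examples drawn from this post, not the full IANA registries, and the function names are hypothetical.

```python
# Illustrative only: split a hostname into its DNS hierarchy, from the
# TLD downwards, and classify the TLD. The sets below are a tiny subset
# of the real IANA registries, taken from the examples in this post
# (.UK is a ccTLD; .SCOT, .WALES, .CYMRU and .LONDON are gTLDs).
CC_TLDS = {"uk"}
GENERIC_TLDS = {"com", "org", "scot", "wales", "cymru", "london"}

def domain_levels(hostname: str) -> list[str]:
    """Return the labels of a hostname from the TLD downwards,
    e.g. 'forum.example.org' -> ['org', 'example', 'forum']."""
    return list(reversed(hostname.rstrip(".").lower().split(".")))

def tld_type(hostname: str) -> str:
    # The last label is the TLD; classify it against the example sets.
    tld = domain_levels(hostname)[0]
    if tld in CC_TLDS:
        return "ccTLD"
    if tld in GENERIC_TLDS:
        return "gTLD"
    return "unknown"

# In 'forum.example.org': 'example' is the second level domain bought
# under the .org registry; 'forum' is a subdomain the registrant controls.
```

The registry that operates a TLD sits one level above every second level domain registered under it, which is why registry-level action (discussed below for Nominet and PIR) affects the whole domain and everything beneath it.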

As regards the UK position, sections 19-21 Digital Economy Act 2010 (DEA2010) (only recently brought into force) amend the Communications Act 2003 to introduce powers in relation to internet domain name registries where there has been serious misuse or unfair use of a domain name (section 124O Communications Act 2003). The regime deals with UK-related domain names and includes, as well as the .UK ccTLD, some gTLDs – for example .SCOT, .WALES, .CYMRU and .LONDON.

Under these provisions, regulations would set out the policies that the registries in scope should adhere to, as well as clarifying the list of relevant misuses. The government consulted on this in 2023 and the government response to the submissions was published in February 2024. The government stated:

“our assessment is that misuses of domain names are types of harmful activity which are generally considered to be illegal. Therefore, registries should have in place adequate policies and procedures to mitigate against domain names being registered and deal with instances when they have been notified that domain names are being used, with the purpose of carrying out such misuses. Unfair uses are not necessarily illegal but can still result in harm to end users and therefore the registries are only expected to have in place an adequate dispute resolution procedure to deal with such unfair uses (p. 5).”

Nonetheless, the consultation response then limits the misuses to fairly technical offences, following a framework established by ICANN (eg spam). The consultation also suggests including CSAM, while noting that there are other laws to deal with content issues and that registrars have policies to deal with criminal activity.

On this point, Nominet (which runs the .UK infrastructure) will, on receipt of official notification from specified UK law enforcement agencies, place a domain into “special status”, which could include suspending the domain name or redirecting it to an information or help page. Nominet reports on this annually, the most recent report relating to the year ending 31 October 2023. Other registries may take different approaches. Notably, PIR, which controls the .ORG domain, has developed an Anti-Abuse Policy. This is also mainly aimed at technical abuses (eg phishing, malware and spam) but does cover a limited number of content-based abuses, including “credible threats to human health or safety”, which could cover some of the problematic suicide content. In such a case, PIR will seek to resolve the matter through engagement but could ultimately place the domain name on registry lock or hold. This seems broader than the approach of Nominet or under the DEA2010. Because this would affect all content on the domain, it is a step with far-reaching ramifications and would be difficult to justify for a site with mixed content.

The DEA2010 provisions are limited to UK-connected domains, leaving a gap as regards overseas domains. To address this, the Criminal Justice Bill sought to provide for the possibility of suspending IP addresses and internet domain names in cases of serious crime (see cl 26 and Sch 3), provided a cumulative set of conditions was satisfied. These conditions would have limited the circumstances in which the power could be used, linking the activities very closely to UK actors and UK-based victims. This is understandable given that there are limits to the legitimacy of extraterritorial jurisdiction. The powers were also limited in that they could only be used in relation to serious crime, again reflecting the seriousness of the intrusion such a measure would constitute. Serious crime is defined as crime for which someone with no previous convictions could reasonably be expected to be sentenced to imprisonment for three years or more. Given the limitation in particular to UK perpetrators, this power would have been unlikely to be useful in the context of suicide forums. As noted above, however, the Criminal Justice Bill did not progress far enough before the dissolution of Parliament, so a future government would need to return to this issue if it thought the problem was not being dealt with adequately through the systems established by the OSA.