The Charlie Kirk Killing: Viral Violent Content and the Scope of the Online Safety Act
Introduction
On 10th September 2025, Charlie Kirk, an American political commentator and campaigner, was shot dead. As well as being reported by the traditional media, the event was recorded by many bystanders, and videos were uploaded to social media platforms. Traditional news outlets would find it hard to justify publishing the moment of death even in news reporting; online, many of the videos included footage of Kirk's death, and very few carried content warnings. While some of the videos were simple recordings of the event, others had background music, narration and digital effects added, and misinformation was rife. The videos spread rapidly, and there was media criticism of the platforms' failure to take effective action. Some of the videos appeared organically in feeds or upon first opening the apps; for many users, autoplay meant that they saw the footage whether or not they wanted to. Other footage was easily found by searching for keywords. Many users in the UK, including children, saw the content.
The questions addressed in this blog are to what extent the Online Safety Act (OSA) applies to this content and whether regulated platforms failed in their duties under the Act.
Jurisdiction
Although the event took place in the US, the content was available on services of a type regulated by the OSA that have "links with the United Kingdom", and so falls within the regime's ambit. Under s 4 OSA, a service has such "links" if, among other things, it has a significant number of UK users, or if it is capable of being used in the UK and "there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the United Kingdom" presented by content on the service. Given the position of the traditional media on violence and the moment of death, as well as the provisions in the OSA itself on violent content, it seems that the "harmful to users" test could be satisfied even for those services which do not have significant numbers of UK users. In any event, platforms such as X, TikTok, YouTube and Instagram would all have significant numbers of UK users. The OSA is thus brought into play.
Obligations of User-to-User Services
Regulated services are responsible for having systems in place to deal with "illegal content" and "content harmful to children". Is either of these obligations triggered?
Illegal Content
It is unclear whether the content would cross the criminal threshold. Much may depend on whether the content was contextualised and whether any editorial additions served to explain and inform or were less helpful (eg hate speech and abuse). The criminal offence with what might be described as the lowest threshold is s 127(1) Communications Act 2003, which covers content "that is grossly offensive or of an indecent, obscene or menacing character". Ofcom's Illegal Content Judgments Guidance (ICJG) notes the difficult boundary in respect of this offence.
There are, however, some criminal cases which may be relevant to understanding the boundary in this context. Following the Grenfell Tower tragedy, a man in the vicinity opened a body bag to take photographs of the body and then uploaded those photographs to social media. He was charged under s 127(1) Communications Act 2003, pleaded guilty and was sentenced to three months' imprisonment. This perhaps suggests that disrespecting the dead, and by extension the moment of someone's death, could trigger the criminal law – though the context of this case includes the fact that the photographs were taken before the fire was under control, when the identities of all those who had died were not yet known, and the fact that the defendant left the photographs up despite responses on his social media account telling him they were inappropriate and should be taken down. Another case following on from the Grenfell Tower disaster concerned a man who burnt a model of Grenfell Tower at a party, filmed it and then shared the video. He too ultimately pleaded guilty, and again disrespect was a key element; a victim statement said "the overall reaction of the Grenfell community was one of shock, horror and outrage".
It is to be noted that, while journalists report on death, disaster and war and are not criminalised for it (and see para 16.66 ICJG), they rarely (if at all) include close-ups of a body, and there would usually be context and/or warnings rather than just raw footage. Editorial guidance for broadcasters emphasises the exceptionality of showing death (eg BBC Editorial Guidelines, para 5.3.1). IPSO has ruled on a website's inclusion of CCTV footage showing the moments leading up to someone's death. It found no violation of its editorial code but emphasised that the CCTV footage was indistinct and the victim's face was not clear; the footage also did not show the moment of death itself.
The point is not to suggest that users circulating these videos should be arrested, but that there is arguably a case that the duties for "non-designated" criminal offences could be triggered under the OSA – and this example illustrates the difficulties in identifying the boundary at the lower end of the criminal scale.
Content Harmful to Children
While the illegal content duties may be relevant, it is more likely that the children's safety duties apply. There is an overlap between some violent content and illegal content, but even content which is not sufficiently serious to constitute content linked to a criminal offence may still be harmful to children. Ofcom notes this in its Guidance on Content Harmful to Children. It also notes that there may be some violence in journalism, but "where violence is graphic and depicts serious injuries without suitable context, it may be harmful to children, even if the content is journalistic or of democratic importance" (para 8.9). Again, the key word is "context"; what journalism does is give context and explain situations – merely replaying those situations may not add anything to understanding. So a journalistic piece which includes depiction of some serious injuries would not necessarily be deemed harmful (Table 8.6) – though this would seem to exclude posts where the injury is the entire piece. By contrast, content depicting serious injury of a person in graphic detail, often including blood and gore, would be harmful to children (see Table 8.3). This matches the approach taken in the various pieces of editorial guidance noted above – this sort of content would be hard to justify generally. Presumably, however, the threshold for content harmful to children should be lower than that for illegal content understood by reference to s 127(1) Communications Act 2003. On Ofcom's Guidance, this category would seem to cover the videos of the shooting of Charlie Kirk, and therefore, for services accessible by children, the children's safety duties would apply.
What Steps Would Services Be Required to Take?
While the content in issue (however classified) should be covered by a risk assessment, the mitigating steps a service would be required to take depend in the first instance on whether the content is deemed to be illegal content and/or content harmful to children. For illegal content on a user-to-user service, given that this is not a priority offence, the relevant duties are the obligation to have a take-down system and the duty to mitigate (s 10 OSA). Looking at Ofcom's codes, in practice this means that governance mechanisms must be in place, together with some base-level obligations regarding content moderation and take-down (since this is expressly required by the Act), rather than any requirements to disrupt the upload and re-sharing of the content (some measures seeking to disrupt virality are currently under consultation, but it is far from certain that they would apply here even if in force). There are no take-down obligations in the Act in relation to search (search engines do not host content); instead, search services should mitigate harm. In the Illegal Content Codes for search, Ofcom has included a measure requiring links to illegal content to be removed from search results or given a lower ranking (Measure ICS C1.4). This might have some impact on the virality of content.
As regards content harmful to children, this is not primary priority content, so, according to the Act, the obligation to prevent children from encountering it does not apply. The Act instead imposes a general duty to mitigate in relation to priority content. Ofcom's guidance, however, suggests that measures can be targeted to children, in particular so that this content is not pushed to them. So, for example, Measure PCU E2 specifies that services likely to be accessed by children that are at medium or high risk of one or more specific kinds of priority content (which includes violent content) and have a recommender tool must exclude, or give a low degree of prominence to, content that is potentially priority content (as well as certain types of non-designated content). They should also enable children to give negative feedback (Measure PCU E3). In general, services should provide age-appropriate support materials (Measure PCU F1). There are also obligations regarding user controls, allowing children to block or mute other users' accounts (see Measure PCU J1), though it is uncertain how much help these controls would be in the situation under discussion. Search services' obligations are slightly different. Measure PCS C1.2 specifies that a search engine should have in place systems and processes designed to review and take "appropriate moderation action" in relation to content harmful to children, and Measure PCS C1.6 identifies what such moderation action should achieve: either the blurring of images or lower priority in search results. Defaults around auto-play and interstitials/content warnings are not required.
Category 1 services are subject to additional measures, one of which is to enforce their respective terms of service. Insofar as this content (if it is not illegal content) falls into categories prohibited under a service's community standards, that service is obliged to take the action specified in its terms of service. There is a question as to whether the platforms failed to enforce their own terms of service, though it has been argued that this sort of content falls outside "glorification of violence", which is the type of content that many services prohibit (eg Discord, Bluesky, Reddit), at least as understood by those platforms. Some services – for example TikTok – prohibit gory or gruesome content, a prohibition which would seem to cover this content. At the moment, however, these obligations are not yet in force; Ofcom has not yet designated which services will fall within Category 1. Moreover, while there is an obligation to enforce terms of service, there are no minimum standards for those terms (beyond the harms already in scope of the Act), so there is no requirement that this sort of content be covered at all. As the above examples illustrate, there will be variation between services (though variation is not necessarily a bad thing if terms of service are appropriate for a service's own users).
Conclusion
The wide circulation of the footage of the Charlie Kirk shooting has caused some shock and unease, particularly – but not exclusively – as regards its availability to children. There is a case to suggest that services should have been dealing with the footage of the shooting as illegal content, though this is far from certain. The episode also illustrates the difficulty of the Online Safety Act's hard edge: relying on the criminal law alone as the benchmark for action in relation to the protection of adults (including the subjects of content as well as the viewers). Of course, in this context we might not necessarily want to focus on take-down, but rather on other mechanisms to reduce virality or to warn users – mechanisms which are not currently envisaged in the Illegal Content Codes. Ironically, removing protections for adults – which the previous Government did in order to get the OSA through Parliament – may lead to more pressure on the criminal law at its outer edges.
Beyond this, it is clear that those services that are accessible by children should have been taking measures, as set down in Ofcom's codes on Protecting Children. Of course, the nature of the Online Safety Act is to require systems to be in place, so, for example, failure to remove an item of content is not in itself a failure under the Act. When, however, there is an issue of virality of content – even if it relates to a single event – it is less clear that this argument holds. Certainly one might have expected more in relation to services not pushing this content to children (especially with auto-play videos). Ofcom is only now consulting on a requirement to have a crisis response protocol, and has suggested that such a protocol would be relevant to certain types of content, one of which is violent content on services accessible by children. The crisis protocol obligation is not yet in force, but even if it were, it would impose no specific requirements as to operational standards or the control of content; rather, as currently envisaged, it is more about ensuring that internal processes are in place. Even so, it might have prompted a faster or more effective response to the circulation of the content in children's feeds. This measure is, however, unlikely to be in force for another 12 months, so the options for Ofcom in responding to a similar violent and/or viral event are, until then, limited.