Chatbots and the Online Safety Act
What is a chatbot?
A chatbot is a software tool that simulates human conversation. The term covers a wide range of tools, deployed for many different purposes. Chatbots can be categorised into those which are rules-based (built on a decision tree) and those which are AI-based, using a range of AI techniques (eg Large Language Models (LLMs); Natural Language Processing (NLP); Natural Language Understanding (NLU)). Some chatbots use a combination of rules-based and AI approaches. Increasingly, there is discussion of AI agents (these are not new, but have tended to be deployed in more closed systems).
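To make the rules-based/AI distinction concrete, the following is a minimal, purely illustrative Python sketch (not drawn from any product mentioned in this piece) contrasting a keyword/decision-tree responder with a hybrid that falls back to a generative model; the call_llm function is a hypothetical stand-in for a call to an external AI service.

```python
# Illustrative sketch only: contrasts a rules-based chatbot with a hybrid
# (rules plus AI fallback). Not based on any particular vendor's product.
from typing import Optional

RULES = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 14 days of receipt.",
}

def rules_based_reply(message: str) -> Optional[str]:
    """Decision-tree style: match the message against fixed rules."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return None  # no rule matched

def call_llm(message: str) -> str:
    """Hypothetical stand-in for a generative model; real deployments
    would call an external LLM API here."""
    return f"(generated response to: {message!r})"

def hybrid_reply(message: str) -> str:
    """Combination approach: use the rules first, fall back to the model."""
    reply = rules_based_reply(message)
    return reply if reply is not None else call_llm(message)

if __name__ == "__main__":
    print(hybrid_reply("What are your opening hours?"))  # rule fires
    print(hybrid_reply("Tell me a story"))               # falls through to the model
```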
There are three regulatory contexts to consider, which are analysed in more detail below:
- a user uses a chatbot to generate content (eg a business integrating a customer service chatbot into its social media presence)
- social media or search services integrate chatbots into their service to respond to or interact with users of the service – eg Meta’s AI-generated Instagram accounts (since shut down) and Google’s AI Mode.
- the chatbot as a freestanding service (whether accessed via social media or not), such as ChatGPT. There is a further distinction between platforms which allow users to create or personalise their own chatbots – eg Character.AI; Replika Pro – and those which do not.
Note that chatbots provided as services (so potentially relevant to the first scenario, but particularly to the third) vary in the degree of customisation that they permit. There are also likely to be boundary issues – eg ChatGPT now has a search function: does this make it a search engine? And where would any boundary between search and chatbot lie? (See other examples: eg Andi Search; Bagoodex.)
User-controlled Chatbots
This section assumes that the services through which the content is encountered are regulated services under the OSA – specifically user-to-user (U2U) services. The main question here is whether the output of user-controlled chatbots constitutes regulated content which regulated services must tackle to fulfil their duties of care under the Act. As Ofcom noted, “any AI-generated text, audio, images or videos that are shared by users on a user-to-user service is user-generated content and would be regulated in exactly the same way as human-generated content.”
There are two aspects:
- is this user-generated content; and
- are there any issues with the thresholds for harmfulness/illegality in relation to chatbot-generated content?
The OSA seems to have dealt with the first question. In the definition of “user-generated content” (an essential element of the sort of content that triggers service providers’ duties), s 55(4)(a) specifies:
the reference to content generated, uploaded or shared by a user includes content generated, uploaded or shared by means of software or an automated tool applied by the user.
This would cover chatbots – the test is the application of the tool, not control of the tool. So users “hiring” chatbots from third-party providers would seem to be covered too.
As regards content harmful to children, it would seem that the output from chatbots would be treated as if generated by a human. The question would be whether the outputs fell within the categories of primary priority content or priority content, or – failing either of those – satisfied the test in s 60(2)(c) for non-designated content harmful to children. This question focusses on the impact of the content on children rather than on who (or what) created the content.
There is, however, a question about illegal content, notably the impact of the requirement to assess the mental element of offences (see s 192(6)). Can chatbots have the requisite mental element, or can we ascribe their actions to the user applying them? The OSA does not expressly deal with the question, though it does note at s 59(12) that
references in subsection (3) to conduct of particular kinds are not to be taken to prevent content generated by a bot or other automated tool from being capable of amounting to an offence.
This provision does not go as far as to say that the mental element can be satisfied by a bot – it focusses on the conduct element of the offence, not the mental element. In terms of whether it could be reasonable to infer that the requirements of an offence have been satisfied (s 192), it might be more convincing to ascribe intent where the chatbot is rules-based than where it is AI-based and potentially less predictable in its responses. The significance of this under the Act is that if the definition of illegal content cannot be satisfied, there is no content in relation to which the regulated services are required to act. So it all comes down to the extent to which it is reasonable to infer that all the elements of an offence have been satisfied. While Ofcom has dealt with the question of whether AI-generated content can be caught by the OSA in the context of its discussion of CSAM in its illegal content judgments guidance, it has not dealt with the issue of chatbots and mens rea as a general question.
In terms of freedom of expression, presumably it is legitimate to say the chatbot itself has no rights, and therefore the concern underpinning the narrow approach to inference in this context drops away. One argument would be that it is the user’s freedom of expression that is infringed – but if users can associate themselves with the chatbot’s content to claim rights, it seems unbalanced if they could then disassociate themselves from it for the purposes of assessing whether the speech is regulated speech. (Note that we are not talking here about criminalising the person applying the chatbot, but only about the trigger for the OSA obligations.)
Integration into Regulated Services (eg Social Media and Search)
At first blush, it seems easy to suggest that these tools are part of the functionalities and features of the service that should be assessed for risk. Ofcom suggested that “[w]here a site or app includes a Generative AI chatbot that enables users to share text, images or videos generated by the chatbot with other users, it will be a user-to-user service.”
There is a question, however, whether the output of provider-controlled chatbots constitutes regulated content that triggers the safety duties – whether we are talking about Meta’s Instagram accounts or the summary that a virtual assistant produces when asked a question. Significantly, in the latter case the software has shifted from providing snippets of other people’s content to providing its own content.
As regards U2U services, the service provider only has to take action in relation to user-generated content; the output of provider-controlled accounts – whether driven by chatbots or people – is not user-generated content. So we then have to ask: while the existence of the tool may be within the scope of a risk assessment, does it lead to content about which a service provider should take action? Risks are those arising from illegal content (s 59(14)) and content harmful to children (s 60(6)), both of which are limited to regulated user-generated content. Of course, we could suggest that any user response to problematic content from a provider-controlled chatbot would itself be user-generated content, triggering the duties.
The scope of obligations for search services is defined differently. Search results “means content presented to a user of the service by operation of the search engine in response to a search request made by the user”. This is not limited to the replication of third-party content. The limitation in s 59(14) in relation to user-to-user services does not apply to search – though the question of the extent to which chatbot content can be criminal content remains open. Section 60(6) is not expressly disapplied from search and continues to limit the definition of content harmful to children – though it is unclear how this fits with the definition of search content.
Free-standing Chatbots
These come in a range of forms, so the analysis may be category-specific. Certainly it would seem that something like SearchGPT could be a search engine for the purposes of the OSA. The definition of a search engine in the OSA (s 229) is somewhat circular (a search engine allows you to search) and seems more focussed on distinguishing between search functions within a website and general search services. So, as Ofcom suggested, search for the purposes of the OSA “includes tools that modify, augment or facilitate the delivery of search results on an existing search engine, or which provide ‘live’ internet results to users on a standalone platform.”
As regards other sorts of chatbots, the question would seem to be: can other users encounter the content you have generated? Ofcom noted “[w]here a site or app allows users to upload or create their own Generative AI chatbots – ‘user chatbots’ – which are also made available to other users, it is also a user-to-user service”. If we are looking at something like Replika (which allows a user to engage with an AI girlfriend – and to buy gifts or earn rewards by staying on the site and engaging with it), the answer would be “no”. So this could be seen as analogous to games where users do not encounter other users. Girlfriend GPT seems to allow users to share characters (in return for rewards); this would seem to make the underlying platform a user-to-user service. Insofar as chatbot content is made public – as Meta made the conversations of users of Meta AI public – the answer would also seem to be “yes”, though there may be questions as to what is regulated content.
(Note that the definition of a user-to-user service requires only the possibility for other users to encounter that content, whether or not the uploading/sharing user intended that to happen and whether or not other users do actually encounter the content.)
So it would seem this could be a regulated service – though the split between user-generated content and provider content comes into play again here.
Mitigations
It seems that in principle at least some chatbots and their output could be caught by the Online Safety Act and that mitigations will need to be applied as for other sorts of risks and content. In this context, the “small print” attached to AI outputs (eg warnings about accuracy) will be insufficient – the entire range of relevant obligations will apply (in particular the obligation relating to age verification in relation to primary priority content). Ofcom additionally noted the need for an effective take-down system and for an appropriate complaints mechanism. These, however, are not particularly tailored to the chatbot context and are almost certainly insufficient. Some consideration could be given to whether mechanisms deployed in the livestream context could be appropriate here; see the measures proposed on this functionality in Ofcom’s latest consultation.
Conclusion
Some chatbots and their outputs will be caught within the OSA regime, but – despite the Government’s encouragement of Ofcom to be proactive in the face of new technologies, particularly AI – the coverage appears incomplete and some technical questions remain unanswered, which may affect the completeness of protection. This uncertainty is undesirable; there may be space here for chatbot-specific obligations to clarify the position in each regulatory context. Moreover, there seems to have been little thought in the AI/chatbot context as to what appropriate and effective mitigations look like, and more work is required here.