The UK must hold AI chatbot companies to account for the harm they cause
Harms caused by AI chatbots are severe and rapidly increasing. We are calling for urgent action from the Government to address these human rights violations.
The start of the year brought reports of a tsunami of deepfake abuse on X’s image-generating chatbot, Grok, including millions of sexualised images of women and children. Many of these deepfakes reinforced racist tropes against Black and marginalised women, including requests to lighten skin. There have already been numerous tragic deaths connected to AI chatbot use, including those of Adam Raine and Sewell Setzer III. Reports over the weekend revealed that Grok has also been used to create explicit and derogatory posts about the Hillsborough and Heysel disasters, as well as the death of footballer Diogo Jota. These reports were by no means the first of their kind; they represent a systemic failure both to prevent harm occurring in the first place and to act swiftly once it was identified.
We have the opportunity to be truly world-leading in our approach. The Government’s action to bring AI chatbots into the scope of the Online Safety Act is an important step in the right direction, but its amendment falls far short of what is needed to fulfil that ambition and make a difference to the lives of all UK users. The focus on ‘illegal content’ fails to capture the concerns of civil society, which has identified the anthropomorphic features and functionalities of a chatbot as a unique driver of harm: one that creates emotional dependency which can lead to isolation, depression, psychosis and, in extreme cases, suicide.
Furthermore, harms to our information ecosystem, often called “hallucinations”, which result from misinformation and bias in the training data, are not currently covered by the Government’s approach, which fails to address the threat that election-related disinformation poses to democracy. These threats are not abstract: they are already impacting the democratic process.
AI chatbots, like all other online spaces and tools, should not be brought to market until they are proven safe by design for all users. Chatbot providers must be held to account for the harm associated with their products, particularly where they have failed to carry out a risk assessment or, having identified harm through that assessment, have failed to take mitigating action.
The 36 organisations and individuals below - whose interests span CSAM, child online safety, VAWG, suicide and self-harm, mental health, extremism, online hate and abuse, democratic participation and AI regulation - are calling on Peers to support Baroness Kidron‘s amendment to the Crime and Policing Bill (due to be debated on March 18th), which would make it a criminal offence to create an AI chatbot that produces specified content, to fail to risk assess a product before deployment, or to fail to take steps to mitigate the risks to UK users from content- and design-based harms. More detail is provided in the annex below and in our research brief.
AI Youth
CEASE - Centre to End All Sexual Exploitation
Ripple Suicide Prevention Charity
Clean Up The Internet
Institute for Strategic Dialogue (ISD)
Centre for Protecting Women Online
Fair Vote UK
Internet Watch Foundation
5Rights Foundation
Antisemitism Policy Trust
Molly Rose Foundation
Center for Countering Digital Hate (CCDH)
Internet Matters
End Violence Against Women Coalition (EVAW)
The Jo Cox Foundation
Parent Zone
Everyone’s Invited
NSPCC
SWGfL
Suzy Lamplugh Trust
Samaritans
FlippGen
The Safe AI for Children Alliance
Refuge
Mental Health Foundation
Kick It Out
Demos
Marie Collins Foundation
Coalition to End Gambling Ads
Civic Digits
Global Action Plan
Professor Gina Neff, Minderoo Centre for Technology and Democracy
Professor Julia Hornle, Queen Mary University of London
Professor Clare McGlynn
Adele Zeynep Walton
Andy Briercliffe