AI chatbots: the case for action

AI chatbots have fast become an area of serious concern at both an individual and a societal level, uniting a broad range of organisations in calls for urgent Government action. Over the last few months, there has been an increase in reports and research on the harm caused by AI chatbots: the impact on individuals' mental health and wellbeing, the creation of child sexual abuse material, the spread of harmful misinformation, the pollution of the information environment, election bias and the impact on our physical environment.

Yet despite the clear risks associated with this technology, and significant public support for regulation, AI chatbots are not properly covered by the Online Safety Act, and broader moves towards AI regulation have seemingly stalled.

The Online Safety Act Network has been coordinating a working group to map the harm being caused by AI chatbots, gathering experts across civil society to provide insights and evidence from their fields on chatbots' impact, particularly on the most vulnerable users.

Today, we are launching a research brief that is the culmination of these conversations and our desk research, tracking the latest research and demonstrating the clear danger that inaction is causing.

We are seeing some signs of movement. Giving evidence to the Science, Innovation and Technology (SIT) Committee recently, the Secretary of State, Liz Kendall, gave strong signals that the Government will take action on AI chatbots that do not fall within the scope of the OSA, and will introduce new legislation if necessary. This was further emphasised by DSIT Minister Kanishka Narayan in response to a Westminster Hall debate on AI safety last week. Numerous MPs raised concerns about harms associated with chatbots in Monday's (15th) debate on the Online Safety Act, and many called for clarity from the Government on how it will take action.

However, we are yet to hear concrete commitments on how the Government plans to address this gap, and there are hints that previous commitments to introduce an AI Bill may be abandoned.

On behalf of the working group, we are writing to Minister Narayan this week to invite him to work collaboratively with us to develop solutions to the harms we’ve identified, as well as future harms that will arise if these products continue to develop without the safety of users as a central principle in their design.

We hope that DSIT will consider the evidence laid out in the paper we’ve published today and use it to inform the next steps required to secure proper risk-based regulation, product safety and accountability.