AI chatbots: a missed opportunity

Yesterday, the Secretary of State for Science, Innovation and Technology, Liz Kendall, told Laura Kuenssberg that AI chatbots will be “brought under the Online Safety Act in terms of what’s illegal and we’ll also bring them in for what’s harmful for children and young people”. The interview generated headlines proclaiming that “under-16s could be banned from using AI chatbots.”

This is not enough: the harms arising from AI chatbots are not just about content, nor only about children’s access. They stem from unsafe, untested products - designed to be addictive and manipulative - being rolled out without regulatory oversight. Last week, the UK Government was presented with an opportunity to introduce much more comprehensive measures to prevent this. It failed to take it.

This is despite evidence of harm related to AI chatbots continuing to dominate headlines, from teenagers’ overreliance on chatbots leading to sleep loss, academic failure and emotional withdrawal, to reports that new forms of chatbot-related psychosis are on the rise. Reports over the weekend that Mythos, Anthropic’s new AI tool, could be the most powerful yet brought with them concerns - from the finance sector to the general public - that advancements in AI technology may once again come at the expense of human safety. Yet the Government continues to forge ahead with a regulatory approach focused narrowly on illegal content, failing to ensure that these products are rigorously tested against both evidenced and emergent harms.

Last week, the Government refused to accept Baroness Kidron’s amendment to the Crime and Policing Bill, which would have made providers of AI chatbots criminally liable for failing to carry out a risk assessment or to mitigate identified risks; it was narrowly voted down in the Lords by 121 votes to 115. This close defeat signals the end of the road for this particular piece of legislation as a vehicle for a more ambitious approach to regulation focused on safety by design and risk mitigation - an approach favoured by 45 organisations.

The defeat last Thursday coincided with Liz Kendall’s launch of Sovereign AI, a new fund to bring AI innovation to the UK. Her speech referenced the long, and at times arduous, passage of the OSA, and the need to remove barriers to action. She said:

“It can take a year to pass a law. It took about 8 for the Online Safety Act.

Not for a lack of good people, with good intentions.

But because there are too many barriers in the way.

Too many decisions being shunted between too many departments, by people who just may not have the right experience.

Not so with Sovereign AI.”

The Secretary of State is right that the legislative process is often too slow to deal with the immediate harm posed by the rapid pace of technological development. It is perhaps to her credit that she has at least tried to use the legislative vehicles available to her in this session - such as the Children’s Wellbeing and Schools Bill and the Crime and Policing Bill - to bring in some further safeguards and amend the OSA. But this is a piecemeal approach, forced upon her by campaigners: from those calling for greater action against violent and extreme pornography to concerned parents seeking stronger protections for children and young people.

The Government’s narrow amendment relating to AI chatbots and illegal content is a start, but by failing to engage with the more expansive approach advocated by Kidron and the cross-party Peers who supported her, the Government now risks leaving vast numbers of UK citizens without proper protections. This wouldn’t happen in any other sector. Tellingly, her Sovereign AI speech clearly demonstrates the Government’s priorities in relation to AI: innovation over individual safety, with the commitment to unlock “the full power of government” a privilege reserved for the tech industry alone.

Whilst civil society groups and campaigners call for greater safety mechanisms, the Government appears hamstrung by the scope of the legislative vehicles available to it, with the Crime and Policing Bill ensuring that the proposed action falls narrowly within the confines of the criminal law. Such an approach is out of step with how harm is experienced through these emerging technologies: civil society and academics alike have connected that harm to the anthropomorphic features and functionalities of AI chatbots, which foster emotional dependency and disconnection from physical reality. China has recently introduced regulation targeting precisely these features, which “applies to products or services that utilize AI technology to provide the public within the territory of the People's Republic of China with simulated human personality traits, thinking patterns, and communication styles, and engage in emotional interaction with humans through text, images, audio, video, etc”.

Whilst the Government’s consultation on young people and social media includes AI chatbots, there is a discrepancy between the suggested measures for social media companies, which include restrictions on addictive design and harmful algorithms, and the questions on AI chatbots, which are strictly limited to whether chatbots should be age gated and which functionalities should be age restricted. Age restrictions alone will not protect users from the anthropomorphic features and functionalities that make AI chatbots acutely harmful to all users, and the suggested measures in the consultation do not have the teeth needed to hold tech companies to account.

With glaring gaps still in need of attention, the Government must commit to a regulatory approach that centres the principles of proper risk assessment, product testing and safety by design, or risk even more preventable harm related to chatbots. It will need to bring forward new legislation to do so - but at present there is no sign that the King’s Speech will signal new legislation to tackle the growing threat of AI or to strengthen the OSA.

There is still time to rectify this clear omission and to ensure that future technologies are not rolled out onto the market without the rigorous product testing we demand of other industries. AI developed in the UK must strive to be the safest in the world - the same commitment made to the public for the online world during the passage of the OSA.