Response to the SIT Committee report on Social Media, Misinformation and Harmful Algorithms
The Science, Innovation and Technology Committee has today published the report from its inquiry into Social Media, Misinformation and Harmful Algorithms. The inquiry was launched in response to the role that social media played in fuelling the riots across the UK following the murders of three girls almost a year ago - on 29 July - in Southport. The public inquiry into the events leading up to the murders started earlier this week.
We welcome the Committee’s report and recommendations and we encourage the Government to respond positively, and swiftly, to them. The Committee’s assessment that “social media business models incentivise the spread of content that is damaging and dangerous” in a way that endangered public safety in the aftermath of the murders is correct. They are also correct that the Online Safety Act “fails to keep UK citizens safe from a core and pervasive harm” and that the “few measures in the Act that address misinformation fall short”. Despite this, the only meaningful step taken since the riots last year to address this legislative gap has been Ofcom’s proposal to include a measure on a “crisis response protocol” in its codes, which the Committee acknowledges and which we recommended in our evidence; the consultation on this, along with other measures, was launched recently. Meanwhile, as the Committee notes, the decisions by two of the major platforms, Meta and X, to water down protections for users and to replace fact-checkers with crowd-sourced community notes are concerning. (See our blog post on Meta’s announcement earlier in the year.)
We therefore welcome the Committee adopting our recommendation that the Government must “compel platforms to put in place minimum standards”, though we would prefer that this recommendation applied to all aspects of platforms’ terms of service rather than being limited to “addressing the spread of misleading content online”. (See our recommendation in our evidence to the SIT Committee; and our proposed text for this amendment at annex C of our evidence submitted to the Data (Use and Access) Bill Committee.)
We are also delighted that the Committee has accepted our recommendation that Ofcom and DSIT must “confirm that services are required to act on all risks identified in risk assessments, regardless of whether they are included in Ofcom’s Codes of Practice” - we have been calling for this since Ofcom published its first draft codes for consultation in November 2023.
The recognition that “small but risky” platforms have been let off the hook is also welcome - something we have written about at length. The Committee recommends that Ofcom must “create an additional category” to cover these platforms, given the role that harmful smaller services play in the ecosystem.
We are also pleased to see the Committee addressing the confused and opaque nature of Government responsibility for foreign disinformation, and we agree that the various bodies responsible for this should be consolidated into a single entity with a clear chain of command. The Committee’s call for the National Security Online Information Team to be put on a statutory footing is also something we recommended in our evidence to the Committee.
We agree with the Committee’s recommendation that “the government should pass legislation that covers generative AI platforms, bringing them into line with other online services that pose a high risk of producing or spreading illegal or harmful content”. We would flag, however, that concerns about the risks from unregulated generative AI go wider than content-based online services, and we are disappointed that the Government missed the opportunity to bring in a requirement for risk assessment and safety testing for all products (physical or digital) with an AI component in the recently passed Product Regulation and Metrology Bill. The recent paper by Prof Lorna Woods on chatbots and the OSA provides useful background here too.
Finally, we are pleased that the Committee has dedicated much of its analysis and related recommendations to the lack of effective regulation of the digital advertising market and the role this plays in the promotion of misleading, damaging or hateful material. Their conclusion that the government’s reliance on industry-led, content-focused solutions is “insufficient” is correct. We flagged in our evidence the role that monetisation plays in the virality of content and note that the Committee has agreed with our assessment and has included a number of recommendations related to this aspect. We are working with our civil society partners to consider a range of policy responses to address the challenges that arise from insufficient regulation of online advertising (as explained in the recent paper by Prof Lorna Woods and Dr Alexandros Antoniou) and look forward to the Government’s response here.