This is the response submitted by Carnegie UK to Ofcom’s call for evidence on categorisation. In addition to the proforma submission, which can be downloaded as a PDF below, the following additional points were made.
Some smaller platforms should be seen as part of a network of harm. This works in two directions: as a seed or catalyst generating harmful content (for example, output from a nudification app service, or racist or suicide memes) that is then amplified and distributed on larger platforms; or, in the opposite direction, as an extreme destination to which people are drawn from larger services.
Some combinations of features/functionalities might exacerbate problems; other combinations might provide counter-balance or risk mitigation. Functionalities should not be considered in the abstract but in context, bearing in mind in particular the user base and the main routes by which the service envisages content being discovered – there is a difference, for example, between feeds based on who a user follows and those with a trend-based discovery page.
Assessment of risk is not limited to user-facing features but should include operational design and corporate culture. Risk of harm is not limited to a service’s users: it could also affect non-users; see, for example, Humane Tech’s Ledger of Harms. There might also be issues around decentralised networks, which could be inherently risky by design; see, for example, ISD’s case study of PeerTube.
Other groups with expertise who should contribute to this consultation
In some cases, any risk management regime will recognise a priori that certain categories of user face intrinsically higher risk. The Bill singles out children and, in clause 16 [HL 164], adults with particular characteristics who require particular protection. Ofcom should engage with victims’ groups representing each of these characteristics to identify invidious platforms, functionalities and so on that present particular risk of harm.
Over the last five years, Carnegie has worked with groups who have great expertise on this issue, particularly from the victim perspective: for example, the Antisemitism Policy Trust, the Mental Health Foundation, Hope Not Hate, Glitch, EVAW, the Centenary Action Group and others. Ofcom’s work would be incomplete without input from them – this should include Ofcom meeting both their staff and victims themselves.
Leading Parliamentary references
We have worked with parliamentarians to ensure that the gateway to risk management is risk based. We drafted Sir Jeremy Wright’s amendment at Commons Report stage to replace ‘and’ with ‘or’, and worked again with the Mental Health Foundation and the Antisemitism Policy Trust, supporting Baroness Morgan in the Lords on her successful amendment.
With brand new legislation, a regulator might look to Parliament for interpretative guidance. The leading debates on this issue can be found at: Lords Report – 19 July; Commons Report – 5 December; Commons Bill Committee – spring/summer 2022.
During the passage of the legislation, the following documents stand out as containing useful material: the Antisemitism Policy Trust’s briefing to the Commons Bill Committee (May 2022), citing the role of Gab in antisemitism crossing over into the physical world – links here and here.
Professor Woods led work on codes of practice, including for the UN Special Rapporteur on Minorities, which draw out features and/or characteristics of platforms that increase risk in respect of hate speech. We draw on these in response to questions below, but see also the Model Code.
A number of international regulators have defined ranges of companies to whom different types of online safety rules apply. For example:
- the Digital Services Act’s Very Large Online Platforms (VLOPs);
- the Digital Markets Act’s Gatekeepers;
- Australia’s Basic Online Safety Expectations under the Online Safety Act 2021 – these cover a very broad swathe of social media companies (Basic Online Safety Expectations Regulatory Guidance, July 2022, page 10, ‘Who do codes apply to?’);
- the OECD’s document in response to the Christchurch Call – this contains a very useful list of global video sharing platforms and their policies, which could indicate a propensity to higher risk that should be reflected in categorisation.