The UK’s Online Safety Act (OSA) was passed into law in 2023 and is now being implemented, with the first parts of it going into effect in March 2025. The guidance provided is rather impenetrable, so I’ve decided to write my own summary here.

Disclaimer

I’m not a lawyer and this document is not legal advice, but I am quite familiar with UK law and I’ve spent several hours reading the guidance. I’m mostly writing this to condense my own thoughts, but I hope others find it useful, and suggestions are welcome.

If you’re running a service covered by the OSA, you’re going to need to carefully review the law and the guidance yourself (hopefully this’ll help), or you’ll need to get a lawyer.

About this document

This document is targeted towards operators of small “user-to-user” services, with up to 700,000 UK users, and which are not “likely to be accessed by children” (as covered below). Regulated services are sometimes referred to as “Part 3” services.

You might see references to “Category 1/2A/2B” services – these are the largest services which are subject to additional regulation. If you’re one of those, you’ll have the dubious honour of being contacted directly by Ofcom in summer 2025, but if you’re running a smaller service, you still have plenty of new rules to comply with.

OSA compliance is a risk-assessment-based process, so if you’re trying to find solid answers to some questions, you may find there aren’t any. As long as you can justify your answer, you’re unlikely to face an immediate fine even if the regulator later disagrees with it.

I’ll try and keep this document up to date as the situation develops, but I can’t provide any guarantees.

Summary of the Online Safety Act

In a nutshell, the OSA requires services which handle user-generated content to:

  • Have baseline content moderation practices
  • Use those moderation practices to take down content that violates UK law
  • Stop kids from seeing porn

Guidance

The act itself provides a broad framework, but most of the useful detail is in the statutory guidance, which Ofcom first published in December 2024. That’s the “Regulatory documents & guidance” section on this page – not the other documents above it, which are just Ofcom’s expansive response to the consultation.

There is a second wave of guidance due to land in January 2025 which covers age verification and pornographic content.

Definitions

I’m trying to keep this relatively jargon-free but there are a few terms which get used constantly:

  • Priority offence: Terrorism, CSAM, grooming, assisting suicide, threats to kill, harassment, stalking, and a bunch of public order offences. (There are a lot – see Schedules 5-7 for the full list.)
  • Relevant offence: Either a priority offence, one of the communications offences defined in the OSA itself (encouraging self-harm, threatening communications, etc), or any other offence the government decides to add in future
  • Illegal content: Any content which amounts to a relevant offence
  • Priority content: Any content which amounts to a priority offence

(s59)

Scope

The Online Safety Act applies to every service which handles user-generated content and has “links to the UK”, with a few limited exceptions listed below.

The scope is extraterritorial (like the GDPR) so even sites entirely operated outside the UK are in scope if they are considered to have “links to the UK”.

A service has links to the UK if any of the following apply:

  • the service has a “significant number” of UK users
  • UK users form one of the target markets for the service
  • the service is accessible to UK users and “there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK” (this seems less likely to apply for smaller services but who knows)

(s4(5)-4(6))

“Significant number” is not defined, either in the law or the guidance. Ofcom says:

Service providers should be able to explain their judgement, especially if they think they do not have a significant number of UK users. (Overview of Regulated Services 1.11-1.13)

Exemptions

There’s a short list of narrow exemptions, which are not covered by the OSA at all:

  • Internal business services, only accessible to employees (or volunteers/contractors)
  • Email services
  • SMS/MMS/one-to-one voice services
  • “Limited functionality services”
    • This is when users can only post comments on the provider’s published content, not on user-generated content
    • This includes, e.g., product reviews
    • It probably includes blog comments but it may not include them if users can reply to each other - this is unclear (Overview of Regulated Services 1.17)
  • Services which enable combinations of Email/SMS/limited functionality services
  • Services provided by public bodies
  • Services provided by persons providing education or childcare

(Schedule 1 part 1)

Hosted services

The OSA puts obligations on the service provider, so if you host a community on a platform such as Discord or WhatsApp, the OSA doesn’t directly affect you (although it’s likely you’ll soon see the indirect effects).

While larger platforms are subject to more regulation under the OSA than smaller sites, I think it’s clear that the OSA exerts a centralising force, particularly given the complexity of the regulations and the lack of straightforward guidance for smaller sites.

Duties

As a small user-to-user service, the OSA requires you to:

  • Assess the risk of illegal content (s9)
  • Take proportionate measures to mitigate the illegal content risks you identified (s10(2)(c))
  • Take proportionate measures to prevent people encountering priority content (s10(2)(a))
  • Take proportionate measures to mitigate the risk of people committing priority offences (s10(2)(b))
  • Allow users to easily report illegal content, and content which is harmful to children, and take it down (s20)
  • Allow users to complain about reports, takedowns, etc (s21)
  • “Have particular regard to the importance of protecting users’ right to freedom of expression” (s22(2))

You don’t have to worry too much about these duties directly – the risk assessment process guides you through what you need to do to comply with them.

If your service is “likely to be accessed by children” there are additional requirements, including a separate “children’s risk assessment”. This document does not cover services which are likely to be accessed by children, but you must still perform a “children’s access assessment” to confirm whether yours is.

Children’s access assessment

Your service (or part of it) is “likely to be accessed by children” if the following applies:

  • You don’t use age verification or age estimation to prevent children from accessing it, and,
  • There is a “significant number” of child users (a number that is significant in proportion to the total number of UK users), or the service is “likely to attract” a significant number of child users

(s35)

Of course “significant number” isn’t defined here either. Ofcom has a guidance page on these assessments. Notably it says:

Even a relatively small number of children could be significant in terms of the risk of harm. We suggest you should err on the side of caution in making your assessment.

More guidance on this will be published in January 2025 as part of “phase two”. That guidance will definitely mandate age verification for porn sites – it’s unclear whether it will be required for anything else, but I think it’s a possibility.

Freedom of expression

It’s worth commenting briefly on the “freedom of expression” duty. Ofcom is clear that it’s intended to prevent the OSA itself from unduly interfering with users’ freedom of expression.

Sites are still free to put whatever additional restrictions they like in their TOS, and Ofcom has no power to restrict that:

Ofcom does not have a power under the Act to compel providers to carry content they do not wish to carry. In practice, this means that services may continue to operate with regard to Terms and Conditions which prohibit more content than is covered in this Guidance, though they will not be compliant if their Terms and Conditions capture less. However, we encourage providers to consider carefully the impacts of their choices on users’ opportunities to express themselves. (Illegal Content Judgements Guidance 1.4)

Enforcement

If you’re found not to be complying with the OSA, Ofcom will notify you and ask you to fix it first (a “provisional notice of contravention”). If you don’t, they can fine you up to £18m or 10% of worldwide revenue, whichever is greater (through a “confirmation decision”). The purpose of these fines is to act as a deterrent, not to put you out of business.

If a service doesn’t comply with a confirmation decision, and Ofcom believes the risk of harm is sufficiently high, they can deploy “business disruption measures”, by ordering ISPs, service providers, or app stores to block the service.

If you fail to comply with a requirement in the confirmation decision, and the requirement relates to children’s online safety, you can be prosecuted (s138). Corporate officers can be made personally liable for these offences (s202).

(Enforcement guidance)

Illegal content risk assessment

This is really the core part of the OSA. All services must carry out a “suitable and sufficient” illegal content risk assessment.

This needs to be done:

  • Before 16 March 2025 for existing services
  • Before making any significant change to your service (see risk assessment guidance s4)
  • Within 3 months of launching a new service (or a service coming into the scope of the OSA)
  • Whenever Ofcom changes their guidance

You need to keep records of each assessment. (s9(1-4))

The risk assessment guidance, at a mere 84 pages, is actually one of the better and more concise documents, and you probably do need to read that one all the way through.

I’ll summarise a few points:

  • There are 17 categories of priority content which you must individually risk-assess. You also have to assess the risk of other illegal content.
  • Each category needs to be assigned a risk level of “low”, “medium”, or “high”
  • The risk assessment needs to take into account (s9(5)):
    • the user base
    • the risk of encountering illegal content, taking into account any recommendation algorithms or sharing features
    • the risk of users committing or facilitating a priority offence, and possibly harming other users in the process
    • the nature and severity of the harm which could be caused
    • how the service’s design might affect the risks
  • You’ll need to refer to the “Risk Profiles” in section 1 of the guidance and list the aspects which apply to your service
  • This is then combined with “evidence” (data about your service) to produce the final risk score. This is the fuzzy bit, although section 3 has some rules about this

Measures

So now you’ve got a risk assessment, you can finally find out what you have to do. Smaller services (which, again, are the only services considered here) are split into three categories based on the results of the risk assessment, and this determines the list of measures you need to take.
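
To make that concrete, here’s a minimal sketch (my own illustration – none of this code comes from the Act or the guidance) of how the three categories described below fall out of the per-harm risk levels you assigned:

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

def service_category(assessment):
    """Derive the measures category from an illegal content risk assessment.

    `assessment` maps each of the 17 kinds of priority illegal harm
    (terrorism, CSEA, harassment, and so on) to the RiskLevel you
    assigned it in your risk assessment.
    """
    elevated = [harm for harm, level in assessment.items()
                if level in (RiskLevel.MEDIUM, RiskLevel.HIGH)]
    if not elevated:
        return "low-risk"      # assessed low for every kind of harm
    if len(elevated) == 1:
        return "single-risk"   # exactly one harm at medium or high
    return "multi-risk"        # two or more harms at medium or high
```

The genuinely fuzzy part is deciding whether each harm is low, medium, or high in the first place – that comes from the risk profiles combined with your own evidence, as described above.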

Technically these are only recommended measures and you’re not required to implement them, but if you do, you’re considered to be complying with the duties. You can instead use your own alternative measures, as long as you ensure they protect users’ freedom of speech and privacy, as well as complying with the duties. (s49)

The measures are numbered (in the form “ICU A1”), and as a user-to-user service you only have to care about those starting ICU. Those measures are summarised in the “Summary of our decisions” document and described in detail in the Code of Practice for user-to-user services. I’ve only summarised them here, in a more readable format.

Measures are split up by risk category. Every risk category includes the previous ones:

Low-risk services

This is the minimum standard for all services regulated by the OSA. You’re a low-risk service only if you have assessed your risk as low for all 17 kinds of illegal harms.

I suspect this category might cover the smallest forums, as well as other sites which have some user-generated content.

The recommended measures are:

  • ICU A2: Name an individual (accountable to the most senior governance body) to handle content safety, reporting and complaints.
  • ICU C1: Have a process to review content which the provider (not the user, that comes later) thinks may be illegal.
  • ICU C2: Have systems designed to swiftly take down illegal content, unless it is currently not technically feasible to achieve this outcome. (“Swiftly” is not defined.)
  • ICU D1: Have a system for users to make a complaint. (A sketch of what a minimal reporting/takedown flow could look like follows this list.)
  • ICU D2: Users should be able to complain about a specific piece of content, or just complain in general. They must be able to provide supporting information with the complaint.
  • ICU D7: A user complaint about illegal content should be treated as per ICU C1.
  • ICU D9: If a complaint is an appeal, it should be determined promptly. (No guidance is provided on the difference between “swiftly” and “promptly”.)
  • ICU D10: If an appeal is approved, the content or user subject to that appeal should be reinstated. If there’s a pattern of content being taken down in error, guidance should be adjusted or automated systems should be changed.
  • ICU D11: If the complaint is about “proactive technology” (scanning/fingerprinting) not behaving correctly, the complainant should be told what the provider is doing to rectify it, and their right to take legal action (?)
  • ICU D12: If the complaint is about compliance with the OSA, it should be passed to a nominated individual, and should be handled within an appropriate timeframe.
  • ICU D13: A complaint should only be disregarded if it is manifestly unfounded, in accordance with the TOS. These events should be reviewed to ensure the policy is appropriate.
  • ICU G1: The provider’s TOS must include a number of specified provisions regarding illegal content, proactive technology, and complaints.
  • ICU G3: The terms from ICU G1 must be clearly signposted, accessible, and easily readable for the age range of the site.
  • ICU H1: The provider must remove public content and accounts from proscribed terrorist organisations.
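
For a very small service, the reporting and takedown measures (ICU C1/C2, D1/D2, D7) don’t have to mean anything elaborate. Here’s a minimal sketch of the kind of record and flow they imply – the names and fields are my own invention, and `take_down` is a hypothetical hook into your own moderation tooling:

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ReportStatus(Enum):
    OPEN = "open"
    UPHELD = "upheld"        # content judged illegal and taken down
    REJECTED = "rejected"    # content judged fine and left up
    DISMISSED = "dismissed"  # manifestly unfounded (ICU D13)

@dataclass
class ContentReport:
    reporter: str
    content_id: str | None       # None for a general complaint (ICU D2)
    reason: str
    supporting_info: str = ""    # complainants must be able to attach this (ICU D2)
    status: ReportStatus = ReportStatus.OPEN
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    decided_at: datetime | None = None

def resolve_report(report: ContentReport, judged_illegal: bool) -> None:
    """Review a report (ICU C1/D7) and, if the content appears to be
    illegal, take it down swiftly (ICU C2)."""
    report.decided_at = datetime.now(timezone.utc)
    if judged_illegal:
        report.status = ReportStatus.UPHELD
        take_down(report.content_id)  # hypothetical hook into your own tooling
    else:
        report.status = ReportStatus.REJECTED
```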

Single-risk services

Single-risk services are services where only one category is assessed as being “medium” or “high”. In addition to the measures in the low-risk category, single-risk services are recommended to implement the following:

  • ICU C9 (applies if there is a high risk of image-based CSAM and the service is a file-storage/file-sharing service): Use perceptual hashing to scan uploaded images for CSAM. (See the sketch below for a rough idea of what this involves.)
  • ICU D4: Acknowledge receipt of each complaint and provide an indicative timeframe for deciding the complaint.
  • ICU D6: Enable the complainant to opt out of receiving any non-ephemeral (?) communications about a complaint.
  • ICU F1 (applies if there is a high risk of grooming and an existing means to determine the age/age range of the user): Hide child user accounts from recommended follows, and disallow direct messaging to child accounts the user does not follow.
  • ICU F2: Prompt child user accounts with contextual info about safety features and risks.

(Note that several measures which only apply to services “likely to be accessed by children” are omitted from this list, and that ICU C9 also applies to all sites with >700k users; both of these are currently out of scope for this document.)
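
For what it’s worth, “perceptual hashing” means comparing a fuzzy fingerprint of each uploaded image against a list of known hashes, so that matches survive resizing and re-encoding. Below is a minimal illustrative sketch using the open-source imagehash library – the hash list source (`load_hash_list`) is hypothetical, and in practice you’d use hash databases and tooling from a recognised body such as the IWF rather than rolling your own:

```python
from PIL import Image
import imagehash

def load_hash_list() -> list[str]:
    """Hypothetical: fetch known perceptual hashes (hex strings) from a
    recognised body's hash list - not something you compile yourself."""
    raise NotImplementedError

MAX_DISTANCE = 5  # number of differing bits still treated as a match

def matches_known_content(path: str, known_hashes: list[str]) -> bool:
    """Return True if an uploaded image is perceptually similar to any
    known hash, i.e. it still matches after resizing or re-encoding."""
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - imagehash.hex_to_hash(h) <= MAX_DISTANCE
               for h in known_hashes)
```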

Multi-risk services

Multi-risk services are all other services, where more than one risk is “medium” or “high”.

The additional recommended measures are:

  • ICU A3: Have written statements of responsibilities regarding illegal harms for senior managers.
  • ICU A5: Track evidence of new kinds of illegal content, using data from complaints/moderation/law enforcement referrals/etc.
  • ICU A6: Have a code of conduct for employees around illegal harms.
  • ICU A7 (does not apply to volunteers): Employees working in design and management of the service must be trained in compliance with the safety duties.
  • ICU C3: Have written internal content policies.
  • ICU C4: Set content moderation performance targets covering speed and accuracy.
  • ICU C5: Have a written policy for prioritising content for review.
  • ICU C6: Content moderation should be adequately resourced.
  • ICU C7 (does not apply to volunteers): Content moderators should receive adequate training and materials.
  • ICU C8: Content moderation volunteers should have access to adequate materials.
  • ICU D8: Appeal processing should be monitored for speed and accuracy and adequately resourced.
  • ICU E1 (applies if recommender systems are tested publicly and there is a high risk of two or more harms from a specific list): Safety metrics should be produced and analysed when conducting on-platform testing of recommender systems.

Decentralised services

In their consultation response, Ofcom makes clear that they don’t intend for the Online Safety Act to make decentralised services unlawful:

We recognise that a wide variety of service types will fall in scope of the Act, including services with a decentralised and community-moderated organisational model. However, we have intentionally designed each measure with flexibility in mind as we recognise the importance for providers to have some flexibility in how they will implement these measures. We encourage providers to consider the safety outcome expected and to implement the measures in a way that is appropriate and effective for their own service and organisational structure. (Volume 1: Governance and Risk Management 5.19)

Given that the duties only require services to take “proportionate measures”, the implication must be that decentralised services will be given more leeway.

Mastodon & the Fediverse

Fediverse services like Mastodon should be able to re-use a standardised risk assessment, which should simplify things. However, they will likely end up in the “multi-risk” category, and some of the recommended measures don’t apply neatly to this situation, which means you may not get the guarantee of compliance which those measures provide.

Age verification (guidance coming January 2025) may also be an issue here.

I hear IFTAS may be working on some Online Safety Act guidance for Fediverse services, so stay tuned and don’t panic too much (yet).

The most interesting question here is: if your Mastodon server doesn’t have “links to the UK”, but it federates with servers which do, are you subject to the OSA?

To comment on this post, mention me on mastodon, or drop me an email.