UK moves closer to scanning your private messages for illegal content
Key Points
- Ofcom published advice on 8 May 2026 advancing UK powers to require platforms to scan private messages for CSEA and public content for terrorism material.
- The recommended audit-based accreditation requires tools to score 60 out of 100 overall and 80 out of 100 on performance metrics including false positive rate.
- Ofcom dropped its proposed independent performance testing stage, citing year-zero costs of up to £9.3 million and dataset acquisition challenges.
- An Open Rights Group "Practice safe text" campaign generated 782 of the 810 consultation responses, arguing the powers undermine end-to-end encryption.
- DSIT Secretary of State must approve the standards before Ofcom can accredit tools and issue Technology Notices under section 121 of the Online Safety Act 2023.
UK regulator Ofcom has published advice that brings the country a step closer to forcing platforms including WhatsApp and iMessage to scan private messages for illegal content.
The statement, published on Friday (8 May), tells the Department for Science, Innovation and Technology (DSIT) Secretary of State how to set minimum accuracy standards for the content-scanning tools that Ofcom can require platforms to deploy.
Once DSIT approves those standards and Ofcom stands up its accreditation scheme, the regulator can issue Technology Notices under section 121 of the Online Safety Act 2023 forcing user-to-user services to use accredited tools.
For child sexual exploitation and abuse (CSEA) material, a Notice can cover both public posts and private chats. For terrorism content, a Notice can cover only public content.
Section 121 is the only provision in the Act that lets Ofcom require scanning of privately communicated content. Today’s advice clears one of the final blockers to that power becoming live.
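The scope rules above are simple enough to express as a lookup. The sketch below is an invented illustration of those rules, not anything drawn from the Act or Ofcom's advice; the harm and channel labels are assumptions:

```python
# Invented encoding of the section 121 scope rules described above:
# a CSEA notice may cover public and private content, a terrorism
# notice may cover public content only.
NOTICE_SCOPE = {
    "csea": {"public", "private"},
    "terrorism": {"public"},
}

def notice_can_cover(harm: str, channel: str) -> bool:
    """Return True if a Technology Notice for `harm` may cover `channel`."""
    return channel in NOTICE_SCOPE.get(harm, set())
```

So `notice_can_cover("terrorism", "private")` is False: private messages can only be reached via a CSEA notice.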
What it means for everyday users
Three potential issues stand out for ordinary UK users.
First is the issue of false positives. Ofcom acknowledges in its advice that wrongly flagged content can affect freedom of expression, overwhelm moderators and generate inaccurate reports to law enforcement, but it has chosen not to set a fixed minimum performance threshold for accuracy metrics.
Tools applying for accreditation must score 80 out of 100 on the performance metrics objective, which covers false positive rate.
The overall pass mark sits lower at 60 out of 100, and other individual objectives need only 40 out of 100.
Second is the issue of scope. Any user-to-user service operating in the UK can, in principle, receive a Technology Notice. That includes encrypted messaging apps, social platforms and smaller forums.
Finally, there is the obvious question around privacy. Civil liberties group Open Rights Group ran a “Practice safe text” campaign in response to Ofcom’s December 2024 consultation, generating 782 of the 810 responses the regulator received.
The campaign argued that scanning private messages undermines end-to-end encryption and exposes vulnerable users including abuse survivors.
Ofcom records the concerns in its statement but has broadly advanced the framework it consulted on.
Ofcom drops the tougher testing stage
The most notable change since consultation is what Ofcom has dropped.
The original December 2024 proposals set out two stages: an audit-based assessment, plus independent performance testing in which Ofcom or a third party would test tools against shared datasets and benchmark them against rivals. Today's advice keeps only the first stage.
Ofcom cites cost as a key reason. Its external feasibility study, conducted by consultancy PUBLIC, estimated the two-stage scheme at up to £9.3 million in year zero against £1.8 million for the audit-based assessment alone.
Ongoing annual costs would run to £1.1 million for the full scheme or £628,000 for the lighter version.
The regulator also points to practical difficulties around assembling test datasets, particularly those containing illegal CSEA material.
Google argued in its consultation response that independent testing raised serious practical concerns and would struggle to fairly compare tools designed for very different platform contexts.
The NSPCC said the extra stage would deter developers from applying. X and at least one other regulated service pushed in the opposite direction, calling for tougher prescribed performance thresholds.
Ofcom rejected the case for prescribed thresholds, arguing that no consensus exists on what an acceptable performance level looks like and that fixed numbers risk excluding all but a narrow range of tools.
The new framework
Ofcom’s recommended framework rests on four principles:
- Technical performance;
- Fairness;
- Robustness;
- Maintainability.
The first three each carry 30% of the overall weighted score, with maintainability carrying 10%.
Each principle is broken into objectives that developers must evidence, with Ofcom or an appointed third party scoring the submissions on a 0, 1 or 5 point scale per question.
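As a rough sketch of how those numbers fit together, the function below combines the per-question 0/1/5 scale, the per-objective pass marks (80 for performance metrics, 40 elsewhere) and the 30/30/30/10 weighting into a single pass/fail check. Only the weights and thresholds come from the advice; the objective names and example scores are invented:

```python
# Weights per principle (from the advice); everything else here is an
# invented illustration of how the audit scoring might be combined.
WEIGHTS = {
    "technical_performance": 0.30,
    "fairness": 0.30,
    "robustness": 0.30,
    "maintainability": 0.10,
}
PERFORMANCE_PASS = 80   # performance metrics objective
OBJECTIVE_PASS = 40     # other individual objectives
OVERALL_PASS = 60       # weighted overall score

def objective_score(question_scores):
    """Convert per-question scores (each 0, 1 or 5) to a 0-100 scale."""
    return 100 * sum(question_scores) / (5 * len(question_scores))

def accredit(objectives):
    """objectives: {principle: {objective name: score out of 100}}.

    The (hypothetical) 'performance_metrics' objective must reach 80,
    every other objective 40, and the weighted average of the
    per-principle averages must reach 60.
    """
    for principle, objs in objectives.items():
        for name, score in objs.items():
            floor = PERFORMANCE_PASS if name == "performance_metrics" else OBJECTIVE_PASS
            if score < floor:
                return False
    overall = sum(
        WEIGHTS[p] * sum(objs.values()) / len(objs)
        for p, objs in objectives.items()
    )
    return overall >= OVERALL_PASS
```

Under this sketch, a tool scoring 80 on performance metrics, 60 on a second technical objective, 70 on fairness, 65 on robustness and 50 on maintainability would total 21 + 21 + 19.5 + 5 = 66.5 overall and pass; dropping performance metrics to 75 would fail it regardless of the overall score.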
Ofcom expects to re-accredit tools every three to six years, with the final cadence to be confirmed once the scheme is operational.
What happens next
The advice now sits with the DSIT Secretary of State, who must approve and publish the minimum standards before Ofcom can begin accrediting tools.
A Technology Notice does not automatically follow accreditation. Ofcom must still consider human rights, privacy law and the availability of less intrusive measures before requiring a specific platform to deploy a specific tool.
The regulator commits to revisiting the dropped independent testing stage once it has more experience with the audit-based approach.
Today’s advice does not mean message scanning starts tomorrow.
But it does mean the regulatory machinery for that outcome is one step closer to running.