The UK’s communications regulator has proposed new rules that would require messaging apps and social media platforms to do more to prevent cyberflashing and the spread of illegal self-harm material.
Ofcom is consulting on how it will implement further updates to the Online Safety Act, which introduced wide-ranging changes to the regulation of online platforms in the UK, including a requirement for age assurance mechanisms on websites serving adult content.
The consultation covers how platforms can comply with their duty to protect users from illegal self-harm content and from unsolicited nude pictures and videos.
It proposes combining the offences of promoting suicide and promoting self-harm, meaning operators would be expected to moderate and address both to the same extent.
Alongside this, Ofcom has proposed adding cyberflashing to the list of offences that online platforms must address. Cyberflashing is committed when a person intentionally sends a nude image or video to cause alarm, distress or humiliation, or to obtain sexual gratification.
Under the proposed rules, platforms where cyberflashing poses a significant risk would be required to let users report such content, to implement content moderation that prevents it from being shown, and to ensure their algorithms do not recommend it.
Ofcom has said that both messaging applications and social media platforms will be required to take appropriate measures to prevent cyberflashing.
The public consultation on these proposals is open until 5pm on Friday 24 April 2026, after which Ofcom expects to publish a final decision in summer 2026.
Guidance for moderating unsolicited media
Ofcom has already published guidance for online platforms explaining how they can moderate content to protect users from seeing unsolicited nudes.
It advised platforms to include ‘prompts’ which would ask users to reconsider before publishing potentially harmful content, or to change their algorithms to suppress harmful content instead of promoting it to users.
Specifically with regard to moderating sensitive images, Ofcom said that platforms can use automated technology known as ‘hash-matching’ to detect and remove non-consensual intimate images.
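For illustration, the sketch below shows the basic idea behind hash-matching in Python: an uploaded image is hashed and the result is checked against a database of hashes of known abusive images. The hash set and function name here are hypothetical, and production systems typically rely on industry-shared hash lists and perceptual hashing (which still matches resized or re-encoded copies) rather than the exact cryptographic hashing shown.

import hashlib

# Hypothetical database of hashes of known non-consensual intimate images.
# In practice, platforms source these from industry-shared hash lists.
KNOWN_IMAGE_HASHES: set[str] = {
    "placeholder-hash-entry",  # illustrative only, not a real hash
}

def matches_known_image(image_bytes: bytes) -> bool:
    """Return True if the uploaded image matches a known image hash.

    Exact hashing (SHA-256) only catches byte-identical copies; real
    deployments use perceptual hashes so that edited copies still match.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_IMAGE_HASHES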
Platforms can also blur nudity by default, giving users the option to unblur an image if they wish.
Under the updates to the Online Safety Act, failing to proactively implement measures to protect users from cyberflashing could lead to fines of up to 10% of a company’s qualifying worldwide revenue, and its services could potentially be blocked in the UK.
