New law to stop AI images of children in the UK


Children will be better protected from becoming victims of deepfakes as the government introduces new laws to ensure Artificial Intelligence (AI) cannot be exploited to generate child abuse material.

Data from the Internet Watch Foundation released on Wednesday (12 November) shows reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

There has also been a disturbing rise in depictions of infants, with images of 0–2-year-olds surging from 5 in 2024 to 92 in 2025.

Under stringent new legislation, designated bodies such as AI developers and child protection organisations, including the Internet Watch Foundation (IWF), will be empowered to scrutinise AI models and ensure safeguards are in place to prevent them from generating or proliferating child sexual abuse material, including indecent images and videos of children.

Currently, because creating and possessing this material is a criminal offence, developers cannot carry out safety testing on AI models, and images can only be removed after they have been created and shared online. This measure, one of the first of its kind in the world, ensures AI systems' safeguards can be robustly tested from the start, limiting the production of such material in the first place.

The laws will also enable organisations to check models have protections against extreme pornography, and non-consensual intimate images.

While possessing and generating child sexual abuse material, whether real or synthetically produced by AI, is already illegal under UK law, rapidly improving AI image and video capabilities present a growing challenge.

We know that offenders who seek to create this heinous material often do so using images of real children, both those known to them and those found online, and attempt to circumvent safeguards designed to prevent this.

This measure aims to make such actions more difficult by empowering companies to ensure their safeguards are effective and to develop innovative, robust methods to prevent model misuse.

“We will not allow technological advancement to outpace our ability to keep children safe,” said Technology Secretary Liz Kendall.

“These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk. By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.”
