Dailyza — The UK government has announced plans to ban so-called deepfake “nudification” apps, a category of generative AI tools that can edit photos or videos to make it appear that someone’s clothing has been removed. The move forms part of a broader strategy to halve violence against women and girls, and is designed to close a legal gap: creating non-consensual explicit imagery is already illegal, but the tools that enable it can still be built, marketed, and distributed.
The new measures were outlined by Technology Secretary Liz Kendall, who said the government would not “stand by while technology is weaponised to abuse, humiliate and exploit” women and girls. Under the proposal, it would become illegal to create and supply AI tools that facilitate this kind of image manipulation, expanding the state’s approach from punishing offenders to targeting the infrastructure that makes the abuse easier to scale.
What “nudification” apps do—and why the UK wants them banned
Nudification (sometimes described as “de-clothing”) apps use generative AI to produce realistic-looking fake nude imagery from an existing photo. While some services present themselves as novelty tools, child protection groups and online safety experts have warned that the output can be used to harass individuals, facilitate blackmail, and contribute to the creation and circulation of abusive sexual content.
Campaigners have argued that these tools lower the barrier to producing explicit imagery and can rapidly multiply harm: a single photo shared publicly or privately can be transformed into a convincing fake and redistributed at speed, including through private messaging channels where detection and enforcement are difficult.
Risk of child sexual abuse material
One of the most urgent concerns raised by experts is the potential for nudification tools to be used to generate child sexual abuse material (CSAM). Even when the imagery is synthetic or manipulated, the harm to children can be severe, and the content can be collected, traded, and reuploaded across platforms—creating long-term trauma and ongoing victimisation.
How the proposal builds on existing UK law
Creating sexually explicit deepfake images of someone without consent is already a criminal offence under the UK’s Online Safety Act. The government’s new proposal goes further by making it illegal to create or distribute the nudification apps themselves—shifting enforcement upstream to those who develop, profit from, or enable the technology.
According to the government, the intention is to ensure that “those who profit from them or enable their use” face legal consequences, rather than placing the burden solely on victims to report content after it has already been created and shared.
Pressure from children’s advocates and safety groups
The announcement follows sustained calls from child protection advocates to outlaw nudification tools entirely. In April, Children’s Commissioner for England Dame Rachel de Souza urged a total ban, arguing that if creating such imagery is illegal, the technology designed to produce it should be treated the same way.
The Internet Watch Foundation (IWF), which runs the Report Remove service that allows under-18s to confidentially report explicit images of themselves online, has highlighted that manipulated imagery is a growing feature of those reports. The IWF said 19% of confirmed reporters indicated that some or all of their imagery had been altered.
IWF chief executive Kerry Smith welcomed the government’s plan, describing nudification apps as products that “have no reason to exist” and warning that the imagery they produce can be “harvested in some of the darkest corners of the internet.”
Working with tech firms—and the debate over device-level protections
The government also said it would “join forces with tech companies” to develop methods to combat intimate image abuse, including continued work with UK safety technology company SafeToNet. The firm has developed AI tools that it says can detect and block sexual content, and can disable a device’s camera if such content is detected being captured.
Such approaches build on existing platform-level detection systems used by companies including Meta, which has implemented tools to detect and flag potential nudity in images—often positioned as a way to reduce the risk of children sharing intimate images of themselves.
But the government’s announcement has also reignited debate over whether protections should be mandatory at the device level. The children’s charity NSPCC welcomed the ban proposal but said it was disappointed not to see comparable ambition to require stronger built-in safeguards, particularly to prevent the spread of CSAM in private messages.
Why private channels are a sticking point
Campaigners argue that while public posts can be moderated, much of the circulation of abusive imagery happens in private or semi-private spaces. That makes enforcement harder and increases reliance on reporting systems that often place an emotional and administrative burden on victims, families, and schools.
What happens next—and what it could mean for platforms and developers
The proposed offences would put developers and distributors on clear notice that building or hosting nudification tools could carry criminal consequences in the UK. For platforms, the direction of travel suggests tougher expectations around detecting, removing, and preventing the spread of manipulated explicit imagery, particularly when it involves minors.
Key details will matter: how the law defines nudification tools, how it treats open-source models versus consumer-facing apps, and how enforcement will work when services are hosted overseas but accessible in the UK. As with other online harms, regulators may face a fast-moving landscape in which new tools emerge as older ones are blocked.
Still, the government’s message is unambiguous: as deepfake technology becomes more accessible, the UK intends to target not only the individuals who abuse it, but also the products and services that make that abuse easier to carry out at scale.