Dailyza | Tech, Investments, Business & World News
UK Bans Deepfake ‘Nudification’ Apps in New Online Safety Push

19 December 2025 · Technology
Dailyza — The UK government has announced plans to ban so-called deepfake “nudification” apps, a category of generative AI tools that can edit photos or videos to make it appear someone’s clothing has been removed. The move forms part of a broader strategy to halve violence against women and girls, and is designed to close a gap where the creation of non-consensual explicit imagery is illegal, but the tools that enable it can still be built, marketed, and distributed.

The new measures were outlined by Technology Secretary Liz Kendall, who said the government would not “stand by while technology is weaponised to abuse, humiliate and exploit” women and girls. Under the proposal, it would become illegal to create and supply AI tools that facilitate this kind of image manipulation, expanding the state’s approach from punishing offenders to targeting the infrastructure that makes the abuse easier to scale.

What “nudification” apps do—and why the UK wants them banned

Nudification (sometimes described as “de-clothing”) apps use generative AI to produce realistic-looking fake nude imagery from an existing photo. While some services present themselves as novelty tools, child protection groups and online safety experts have warned that the output can be used to harass individuals, facilitate blackmail, and contribute to the creation and circulation of abusive sexual content.

Campaigners have argued that these tools lower the barrier to producing explicit imagery and can rapidly multiply harm: a single photo shared publicly or privately can be transformed into a convincing fake and redistributed at speed, including through private messaging channels where detection and enforcement are difficult.

Risk of child sexual abuse material

One of the most urgent concerns raised by experts is the potential for nudification tools to be used to generate child sexual abuse material (CSAM). Even when the imagery is synthetic or manipulated, the harm to children can be severe, and the content can be collected, traded, and reuploaded across platforms—creating long-term trauma and ongoing victimisation.

How the proposal builds on existing UK law

Creating sexually explicit deepfake images of someone without consent is already a criminal offence under the UK’s Online Safety Act. The government’s new proposal goes further by making it illegal to create or distribute the nudification apps themselves—shifting enforcement upstream to those who develop, profit from, or enable the technology.

According to the government, the intention is to ensure that “those who profit from them or enable their use” face legal consequences, rather than placing the burden solely on victims to report content after it has already been created and shared.

Pressure from children’s advocates and safety groups

The announcement follows sustained calls from child protection advocates to outlaw nudification tools entirely. In April, Children’s Commissioner for England Dame Rachel de Souza urged a total ban, arguing that if creating such imagery is illegal, the technology designed to produce it should be treated the same way.

The Internet Watch Foundation (IWF), which runs the Report Remove service allowing under-18s to confidentially report explicit images of themselves online, has highlighted that manipulated imagery is a growing feature of reports. The IWF said 19% of confirmed reporters indicated that some or all of their imagery had been altered.

IWF chief executive Kerry Smith welcomed the government’s plan, describing nudification apps as products that “have no reason to exist” and warning that the imagery they produce can be “harvested in some of the darkest corners of the internet.”

Working with tech firms—and the debate over device-level protections

The government also said it would “join forces with tech companies” to develop methods to combat intimate image abuse, including continued work with UK safety technology company SafeToNet. The firm has developed AI tools that it says can detect and block sexual content, and can disable a device’s camera if it detects sexual content being captured.

Such approaches build on existing platform-level detection systems used by companies including Meta, which has implemented tools to detect and flag potential nudity in images—often positioned as a way to reduce the risk of children sharing intimate images of themselves.

But the government’s announcement has also reignited debate over whether protections should be mandatory at the device level. The children’s charity NSPCC welcomed the ban proposal but said it was disappointed not to see comparable ambition to require stronger built-in safeguards, particularly to prevent the spread of CSAM in private messages.

Why private channels are a sticking point

Campaigners argue that while public posts can be moderated, much of the circulation of abusive imagery happens in private or semi-private spaces. That makes enforcement harder and increases reliance on reporting systems that often place an emotional and administrative burden on victims, families, and schools.

What happens next—and what it could mean for platforms and developers

The proposed offences would represent a clear warning to developers and distributors that building or hosting nudification tools could carry criminal consequences in the UK. For platforms, the direction of travel suggests tougher expectations around detecting, removing, and preventing the spread of manipulated explicit imagery—particularly when it involves minors.

Key details will matter: how the law defines nudification tools, how it treats open-source models versus consumer-facing apps, and how enforcement will work when services are hosted overseas but accessible in the UK. As with other online harms, regulators may face a fast-moving landscape in which new tools emerge as older ones are blocked.

Still, the government’s message is unambiguous: as deepfake technology becomes more accessible, the UK intends to target not only the individuals who abuse it, but also the products and services that make that abuse easier to carry out at scale.

Kyle Kelley