Dailyza | Tech, Investments, Business & World News
UK Bans Deepfake ‘Nudification’ Apps in New Online Safety Push

19 December 2025 | Technology

Dailyza — The UK government has announced plans to ban so-called deepfake “nudification” apps, a category of generative AI tools that can edit photos or videos to make it appear someone’s clothing has been removed. The move forms part of a broader strategy to halve violence against women and girls, and is designed to close a gap where the creation of non-consensual explicit imagery is illegal, but the tools that enable it can still be built, marketed, and distributed.

The new measures were outlined by Technology Secretary Liz Kendall, who said the government would not “stand by while technology is weaponised to abuse, humiliate and exploit” women and girls. Under the proposal, it would become illegal to create and supply AI tools that facilitate this kind of image manipulation, expanding the state’s approach from punishing offenders to targeting the infrastructure that makes the abuse easier to scale.

What “nudification” apps do—and why the UK wants them banned

Nudification (sometimes described as “de-clothing”) apps use generative AI to produce realistic-looking fake nude imagery from an existing photo. While some services present themselves as novelty tools, child protection groups and online safety experts have warned that the output can be used to harass individuals, facilitate blackmail, and contribute to the creation and circulation of abusive sexual content.

Campaigners have argued that these tools lower the barrier to producing explicit imagery and can rapidly multiply harm: a single photo shared publicly or privately can be transformed into a convincing fake and redistributed at speed, including through private messaging channels where detection and enforcement are difficult.

Risk of child sexual abuse material

One of the most urgent concerns raised by experts is the potential for nudification tools to be used to generate child sexual abuse material (CSAM). Even when the imagery is synthetic or manipulated, the harm to children can be severe, and the content can be collected, traded, and reuploaded across platforms—creating long-term trauma and ongoing victimisation.

How the proposal builds on existing UK law

Creating sexually explicit deepfake images of someone without consent is already a criminal offence under the UK’s Online Safety Act. The government’s new proposal goes further by making it illegal to create or distribute the nudification apps themselves—shifting enforcement upstream to those who develop, profit from, or enable the technology.

According to the government, the intention is to ensure that “those who profit from them or enable their use” face legal consequences, rather than placing the burden solely on victims to report content after it has already been created and shared.

Pressure from children’s advocates and safety groups

The announcement follows sustained calls from child protection advocates to outlaw nudification tools entirely. In April, Children’s Commissioner for England Dame Rachel de Souza urged a total ban, arguing that if creating such imagery is illegal, the technology designed to produce it should be treated the same way.

The Internet Watch Foundation (IWF), which runs the Report Remove service allowing under-18s to confidentially report explicit images of themselves online, has highlighted that manipulated imagery is a growing feature of reports. The IWF said 19% of confirmed reporters indicated that some or all of their imagery had been altered.

IWF chief executive Kerry Smith welcomed the government’s plan, describing nudification apps as products that “have no reason to exist” and warning that the imagery they produce can be “harvested in some of the darkest corners of the internet.”

Working with tech firms—and the debate over device-level protections

The government also said it would “join forces with tech companies” to develop methods to combat intimate image abuse, including continued work with UK safety technology company SafeToNet. The firm has developed AI tools that it says can detect and block sexual content, and can even disable a device’s camera if such content is detected while being captured.

Such approaches build on existing platform-level detection systems used by companies including Meta, which has implemented tools to detect and flag potential nudity in images—often positioned as a way to reduce the risk of children sharing intimate images of themselves.

But the government’s announcement has also reignited debate over whether protections should be mandatory at the device level. The children’s charity NSPCC welcomed the ban proposal but said it was disappointed not to see comparable ambition to require stronger built-in safeguards, particularly to prevent the spread of CSAM in private messages.

Why private channels are a sticking point

Campaigners argue that while public posts can be moderated, much of the circulation of abusive imagery happens in private or semi-private spaces. That makes enforcement harder and increases reliance on reporting systems that often place emotional and administrative burden on victims, families, and schools.

What happens next—and what it could mean for platforms and developers

The proposed offences would represent a clear warning to developers and distributors that building or hosting nudification tools could carry criminal consequences in the UK. For platforms, the direction of travel suggests tougher expectations around detecting, removing, and preventing the spread of manipulated explicit imagery—particularly when it involves minors.

Key details will matter: how the law defines nudification tools, how it treats open-source models versus consumer-facing apps, and how enforcement will work when services are hosted overseas but accessible in the UK. As with other online harms, regulators may face a fast-moving landscape in which new tools emerge as older ones are blocked.

Still, the government’s message is unambiguous: as deepfake technology becomes more accessible, the UK intends to target not only the individuals who abuse it, but also the products and services that make that abuse easier to carry out at scale.

Kyle Kelley
