Dailyza | Tech, Investments, Business & World News
Image: OpenAI ChatGPT teen safety update shown on a laptop (child online safety concept)

OpenAI Tightens ChatGPT Teen Safety Rules as US Lawmakers Act

21 December 2025 | Technology | 5 Min Read

OpenAI has rolled out new safety rules for teens using ChatGPT, updating its internal guidance for how its AI systems should behave with users under 18 and publishing new AI literacy resources aimed at teenagers and parents. The move lands as U.S. policymakers and state officials sharpen their focus on how AI chatbots affect minors, and as the broader industry faces escalating pressure to prove that safety policies translate into consistent real-world protections.

The update follows heightened concern from educators, child-safety advocates, and lawmakers after reports that several teenagers allegedly died by suicide after prolonged interactions with AI chatbots. While OpenAI’s new rules do not resolve the debate over what guardrails should be required by law, they signal a more explicit, teen-specific approach to content boundaries, roleplay, and how the system should respond in situations involving potential harm.

Why OpenAI is updating teen protections now

Gen Z users are among the most active audiences for OpenAI’s chatbot, and the company’s expanding consumer reach has kept it in the crosshairs of regulators. The scrutiny is not limited to OpenAI: the entire AI sector is being asked to show how it can prevent harmful interactions, especially when a user may be emotionally vulnerable or unable to fully assess risk.

In recent weeks, a coalition of 42 state attorneys general urged major technology companies to implement stronger safeguards for children and vulnerable users interacting with AI chatbots. At the federal level, lawmakers have floated more aggressive proposals as they weigh what national standards for minors should look like. Among the proposals is legislation introduced by Sen. Josh Hawley that would ban minors from interacting with AI chatbots altogether, underscoring how quickly the policy conversation is shifting from voluntary best practices to potential prohibitions.

What changed in OpenAI’s Model Spec for users under 18

OpenAI’s update centers on its Model Spec, a set of behavioral instructions that guide how its large language models should respond. The company said the teen-focused rules build on existing restrictions that already prohibit generating sexual content involving minors and disallow encouragement of self-harm, delusions, or mania.

Under the updated guidance, the system is expected to apply stricter standards when it identifies a teenage user. The new teen rules focus heavily on limiting immersive or intimate interactions that could blur boundaries between a minor and a conversational agent.

Stricter limits on roleplay and “first-person” intimacy

For teen users, the models are instructed to avoid:

  • Immersive romantic roleplay
  • First-person intimacy and emotionally intense relationship simulation
  • First-person sexual or violent roleplay, even when it is non-graphic

This is a notable tightening compared with typical adult-facing policies, reflecting a growing industry view that the most difficult safety problems are not always explicit content, but rather prolonged, emotionally charged interactions that can encourage dependency or distort judgment.
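The layering described here — teen-specific restrictions stacked on top of rules that apply to everyone, with fictional or hypothetical framing unable to change the outcome — can be sketched as a simple policy gate. This is a hypothetical illustration only, not OpenAI's actual implementation; the category names and function are assumptions for the sketch.

```python
# Hypothetical sketch of layered content rules, as described in the article.
# Not OpenAI's actual implementation; names and structure are illustrative.

# Categories the article says are already prohibited for ALL users.
BASELINE_BLOCKED = {
    "sexual_content_involving_minors",
    "self_harm_encouragement",
}

# Additional categories restricted only for teen users under the new rules.
TEEN_BLOCKED = {
    "immersive_romantic_roleplay",
    "first_person_intimacy",
    "first_person_sexual_or_violent_roleplay",  # even when non-graphic
}

def is_allowed(category: str, is_minor: bool) -> bool:
    """Return False when the requested interaction should be refused.

    Note that the category reflects the request's substance, not its framing:
    labeling a request "fictional" or "hypothetical" does not change the
    category, so the same gate applies -- the loophole-closing rule.
    """
    if category in BASELINE_BLOCKED:
        return False
    if is_minor and category in TEEN_BLOCKED:
        return False
    return True

# The same request is refused for a teen but not blanket-banned for adults:
assert not is_allowed("immersive_romantic_roleplay", is_minor=True)
assert is_allowed("immersive_romantic_roleplay", is_minor=False)
```

The point of the two-tier structure is that adult-facing policy can stay unchanged while the minor-specific tier tightens independently.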

Extra caution on body image, eating behaviors, and concealment

OpenAI’s updated guidance also calls for heightened caution around topics such as body image and disordered eating behaviors. Another key shift is the instruction to prioritize safety communication over user autonomy when harm may be involved, and to avoid offering advice that could help teens conceal unsafe behavior from parents, guardians, or other caregivers.

That last point addresses a recurring criticism of AI companions and chatbots: that even when they do not explicitly encourage harm, they may inadvertently provide “workarounds” or coaching that undermines adult supervision.

Closing loopholes: “fictional” and “hypothetical” prompts

OpenAI also states that these limits should remain in effect even when prompts are framed as “fictional, hypothetical, historical, or educational.” Those framings have become common tactics for probing model boundaries, sometimes used to coax a system into producing content it would otherwise refuse.

By explicitly addressing these edge-case prompt strategies, OpenAI is signaling that teen safeguards should not be treated as optional or easily bypassed through roleplay framing—an area where child-safety groups have argued that enforcement has often been inconsistent across platforms.

Age prediction and the challenge of enforcing teen safeguards

In addition to updating written guidelines, OpenAI has pointed to an upcoming age-prediction model designed to identify when an account likely belongs to a minor and automatically apply teen safeguards. That approach could reduce reliance on self-reported ages, which are widely considered insufficient across the tech industry.

Still, age estimation introduces its own set of questions: how accurate the system will be, how it will handle false positives and false negatives, and how OpenAI will balance child protection with privacy expectations. For policymakers, the key issue is whether automated age prediction can become a dependable enforcement layer—or whether it will be treated as a best-effort feature that remains vulnerable to evasion.
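The enforcement question can be made concrete with a small sketch: an age-prediction score has to be converted into a binary safeguard decision, and the choice of threshold and default determines how false positives and false negatives are traded off. Everything below — the function, the threshold, and the "err toward protection" default — is an assumption for illustration, not OpenAI's published design.

```python
# Hypothetical sketch of turning an age-prediction score into a safeguard
# decision. Threshold, names, and the conservative default are assumptions
# for illustration; they do not reflect OpenAI's actual system.

def apply_teen_safeguards(p_minor: float,
                          self_reported_adult: bool,
                          threshold: float = 0.5) -> bool:
    """Decide whether teen safeguards should be active for an account.

    p_minor: model-estimated probability the account belongs to a minor.
    A conservative policy errs toward protection: a high p_minor overrides
    a self-reported adult age, since self-report alone is widely considered
    insufficient (as the article notes).
    """
    if p_minor >= threshold:
        return True  # predicted minor -> safeguards on
    # Below threshold, fall back to self-report: unverified accounts
    # without an adult claim stay protected by default.
    return not self_reported_adult

# A self-reported adult with a high minor-probability still gets safeguards:
assert apply_teen_safeguards(0.9, self_reported_adult=True) is True
# A self-reported adult with a low minor-probability does not:
assert apply_teen_safeguards(0.1, self_reported_adult=True) is False
```

Under this kind of policy, lowering the threshold reduces missed minors (false negatives) at the cost of more adults wrongly restricted (false positives) — exactly the accuracy-versus-privacy trade-off the paragraph above raises.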

What the update means as lawmakers consider national standards

OpenAI’s changes arrive as Washington debates what a federal framework for minors and AI should require. Some lawmakers are pushing for strict limits, while others are weighing disclosure rules, duty-of-care standards, and audit requirements that would force companies to demonstrate how their systems behave for children in practice, not just on paper.

For OpenAI and its peers, the immediate test will be operational: how consistently the new teen rules are applied, how the systems respond under pressure from adversarial prompts, and whether the company can show measurable reductions in risky interactions. As the industry’s products become more capable and more embedded in everyday life, teen safety is increasingly becoming a defining benchmark for the next phase of AI regulation.

Dailyza will continue tracking how OpenAI’s teen safeguards perform in the wild—and how quickly voluntary rules evolve into enforceable standards as lawmakers move toward a national approach for minors and AI.

By Kyle Kelley