OpenAI has rolled out new safety rules for teens using ChatGPT, updating its internal guidance for how its AI systems should behave with users under 18 and publishing new AI literacy resources aimed at teenagers and parents. The move lands as U.S. policymakers and state officials sharpen their focus on how AI chatbots affect minors, and as the broader industry faces escalating pressure to prove that safety policies translate into consistent real-world protections.
The update follows heightened concern from educators, child-safety advocates, and lawmakers in the wake of reports that several teenagers died by suicide following prolonged interactions with AI chatbots. While OpenAI’s new rules do not resolve the debate over what guardrails should be required by law, they signal a more explicit, teen-specific approach to content boundaries, roleplay, and how the system should respond in situations involving potential harm.
Why OpenAI is updating teen protections now
Gen Z users are among the most active audiences for OpenAI’s chatbot, and the company’s expanding consumer reach has kept it in the crosshairs of regulators. The scrutiny is not limited to OpenAI: the entire AI sector is being asked to show how it can prevent harmful interactions, especially when a user may be emotionally vulnerable or unable to fully assess risk.
In recent weeks, a coalition of 42 state attorneys general urged major technology companies to implement stronger safeguards for children and vulnerable users interacting with AI chatbots. At the federal level, lawmakers have floated more aggressive proposals as they weigh what national standards for minors should look like. Among the proposals is legislation introduced by Sen. Josh Hawley that would ban minors from interacting with AI chatbots altogether, underscoring how quickly the policy conversation is shifting from voluntary best practices to potential prohibitions.
What changed in OpenAI’s Model Spec for users under 18
OpenAI’s update centers on its Model Spec, a set of behavioral instructions that guide how its large language models should respond. The company said the teen-focused rules build on existing restrictions that already prohibit generating sexual content involving minors and disallow encouragement of self-harm, delusions, or mania.
Under the updated guidance, the system is expected to apply stricter standards when it identifies a teenage user. The new teen rules focus heavily on limiting immersive or intimate interactions that could blur boundaries between a minor and a conversational agent.
Stricter limits on roleplay and “first-person” intimacy
For teen users, the models are instructed to avoid:
- Immersive romantic roleplay
- First-person intimacy and emotionally intense relationship simulation
- First-person sexual or violent roleplay, even when it is non-graphic
This is a notable tightening compared with typical adult-facing policies, reflecting a growing industry view that the most difficult safety problems are not always explicit content, but rather prolonged, emotionally charged interactions that can encourage dependency or distort judgment.
Extra caution on body image, eating behaviors, and concealment
OpenAI’s updated guidance also calls for heightened caution around topics such as body image and disordered eating behaviors. Another key shift is the instruction to prioritize safety communication over user autonomy when harm may be involved, and to avoid offering advice that could help teens conceal unsafe behavior from parents, guardians, or other caregivers.
That last point addresses a recurring criticism of AI companions and chatbots: that even when they do not explicitly encourage harm, they may inadvertently provide “workarounds” or coaching that undermines adult supervision.
Closing loopholes: “fictional” and “hypothetical” prompts
OpenAI also states that these limits should remain in effect even when prompts are framed as “fictional, hypothetical, historical, or educational.” Those framings have become common tactics for probing model boundaries, sometimes used to coax a system into producing content it would otherwise refuse.
By explicitly addressing these edge-case prompt strategies, OpenAI is signaling that teen safeguards should not be treated as optional or easily bypassed through roleplay framing—an area where child-safety groups have argued that enforcement has often been inconsistent across platforms.
Age prediction and the challenge of enforcing teen safeguards
In addition to updating written guidelines, OpenAI has pointed to an upcoming age-prediction model designed to identify when an account likely belongs to a minor and automatically apply teen safeguards. That approach could reduce reliance on self-reported ages, which are widely considered insufficient across the tech industry.
Still, age estimation introduces its own set of questions: how accurate the system will be, how it will handle false positives and false negatives, and how OpenAI will balance child protection with privacy expectations. For policymakers, the key issue is whether automated age prediction can become a dependable enforcement layer—or whether it will be treated as a best-effort feature that remains vulnerable to evasion.
What the update means as lawmakers consider national standards
OpenAI’s changes arrive as Washington debates what a federal framework for minors and AI should require. Some lawmakers are pushing for strict limits, while others are weighing disclosure rules, duty-of-care standards, and audit requirements that would force companies to demonstrate how their systems behave for children in practice, not just on paper.
For OpenAI and its peers, the immediate test will be operational: how consistently the new teen rules are applied, how the systems respond under pressure from adversarial prompts, and whether the company can show measurable reductions in risky interactions. As the industry’s products become more capable and more embedded in everyday life, teen safety is increasingly becoming a defining benchmark for the next phase of AI regulation.
Dailyza will continue tracking how OpenAI’s teen safeguards perform in the wild—and how quickly voluntary rules evolve into enforceable standards as lawmakers move toward a national approach for minors and AI.