OpenAI is giving ChatGPT users a new way to shape how the chatbot sounds, adding controls that let people directly adjust the assistant’s warmth, enthusiasm, and emoji use. The update, first highlighted by TechCrunch, expands the company’s Personalization tools at a time when the tone of AI assistants has become a high-stakes product and safety issue.
According to a social media post from OpenAI, the new options appear inside ChatGPT’s Personalization menu and can be set to “More,” “Less,” or “Default.” Alongside warmth, enthusiasm, and emoji use, the settings include similar adjustments for how often the assistant uses headers and lists. These small stylistic choices can significantly change how “human” a response feels and how easy it is to scan.
What’s changing in ChatGPT’s Personalization menu
The new controls build on an existing feature that lets users set a “base style and tone.” In November, OpenAI introduced preset tones such as Professional, Candid, and Quirky. The latest update adds more granular, per-behavior controls on top of those presets, effectively allowing users to steer specific behaviors without changing the overall voice.
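OpenAI has not published a schema for these settings, but conceptually the menu now amounts to a handful of three-way toggles layered on a base style. The sketch below is purely illustrative; the field names are hypothetical and do not reflect any documented format:

```python
# Purely illustrative: a hypothetical representation of the Personalization
# options described in OpenAI's announcement. OpenAI has not published a
# schema, and these field names are invented for the sake of the sketch.
personalization = {
    "base_style": "Professional",  # existing presets include Professional, Candid, Quirky
    "warmth": "Less",              # each new control: "More", "Less", or "Default"
    "enthusiasm": "Default",
    "emoji_use": "Less",
    "headers": "More",
    "lists": "More",
}
```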
For many users, the difference between “default” and “more enthusiastic” is not cosmetic. In customer support, education, and coaching scenarios, a warmer voice can reduce friction and help users stay engaged. In other contexts—legal drafting, financial analysis, or workplace communications—excessive cheeriness or emojis can undermine credibility and create the impression that the model is not taking the task seriously.
Why tone controls matter for product trust
As AI assistants become embedded in everyday workflows, tone becomes part of the product’s reliability. Users often judge correctness through presentation—clarity, confidence, and politeness—sometimes more than the underlying facts. That makes tone a lever that can either strengthen trust or accidentally inflate it, especially when a model is wrong but sounds reassuring.
A response to a year of tone controversies
ChatGPT’s voice has been a recurring point of contention in 2025. OpenAI previously rolled back an update after complaints that the assistant had become “too sycophant-y,” a shorthand for responses that feel overly flattering or eager to agree. Later, the company adjusted GPT-5 to be “warmer and friendlier” after some users said the model felt colder and less approachable.
Those shifts highlight a difficult balancing act: making the assistant feel supportive without becoming manipulative, and keeping it concise without feeling curt. The new Personalization controls appear designed to move that balancing act closer to the user, letting individuals choose the interaction style that best fits their needs and tolerance for friendliness.
Critics warn about “dark patterns” and mental health risks
The rollout also lands amid ongoing criticism from academics and AI watchdogs who argue that chatbots can encourage unhealthy attachment. Some researchers have described certain assistant behaviors—excessive praise, constant affirmation, and persistent positivity—as a potential dark pattern that may increase compulsive use and, in some cases, negatively affect users’ mental health.
In that framing, tone is not merely a preference. It can influence how users interpret the assistant’s authority, how emotionally rewarding the interaction feels, and whether the experience resembles a neutral tool or a relationship-like dynamic. Giving users explicit control over warmth and enthusiasm could be viewed as a transparency step: rather than quietly optimizing personality for engagement, the company is exposing the dials.
Control versus default design
However, critics may still focus on what the default setting encourages. Even with user controls, the baseline experience shapes the majority of interactions, especially for people who never open the Personalization menu. That places pressure on OpenAI to ensure default behavior is helpful and respectful without leaning into compulsive engagement tactics.
Implications for businesses, educators, and developers
The new tone settings may be particularly relevant for organizations deploying ChatGPT in customer-facing environments. A support team might prefer “less enthusiasm” and fewer emojis to maintain a consistent brand voice. An educator experimenting with tutoring workflows might choose “more warmth” to reduce student anxiety. For internal knowledge bases, teams may want “less warmth” but more structured formatting, which makes the separate controls for headers and lists a practical addition; a sketch of how API-based deployments can approximate these preferences appears after the list below.
- Customer support: A calmer tone can reduce escalation and keep messages professional.
- Education: A warmer style may encourage questions and persistence on difficult topics.
- Enterprise writing: Fewer emojis and less exuberance can align with compliance and brand standards.
- Accessibility: More structured formatting can improve readability for users who scan or use assistive tools.
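The new toggles live in the consumer ChatGPT app rather than the developer platform, so teams building on the API today generally approximate the same effect with a system instruction. Below is a minimal sketch using the official OpenAI Python SDK; it assumes an OPENAI_API_KEY in the environment, and the model name and tone wording are illustrative choices, not prescriptions:

```python
# Minimal sketch: approximating brand-tone preferences with a system
# instruction via the official OpenAI Python SDK (pip install openai).
# Assumes OPENAI_API_KEY is set in the environment; the model name below
# is illustrative and may differ from what a given account has access to.
from openai import OpenAI

client = OpenAI()

BRAND_TONE = (
    "Write in a calm, professional voice. Avoid emojis and exclamation "
    "points. Use short headers and bulleted lists when they aid scanning."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": BRAND_TONE},
        {
            "role": "user",
            "content": "Summarize our refund policy for a frustrated customer.",
        },
    ],
)

print(response.choices[0].message.content)
```

Unlike a first-party toggle, a system instruction is only a request: the model can still drift back toward its default personality over a long conversation, which is part of why built-in controls matter.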
What to watch next
Personalization is quickly becoming a competitive frontier in consumer AI. The more these assistants are used for sensitive domains—health questions, emotional support, workplace feedback—the more scrutiny will fall on how their personalities are tuned, and who bears responsibility when tone nudges a user in the wrong direction.
For now, OpenAI is positioning the update as a user empowerment feature: a way to make ChatGPT feel more like “your” assistant rather than a one-size-fits-all bot. Whether that reduces controversy or simply shifts it toward debates over default settings and responsible design will likely depend on how widely users adopt the new controls—and how the assistant behaves when they don’t.

