ChatGPT’s new GPT-5.3 Instant model will stop telling you to calm down
TechCrunch
The company says the new model will reduce the "cringe" that's been annoying its users for months.
Take a breath, stop spiraling. You’re not crazy, you’re just stressed. And honestly, that’s okay.
If you felt immediately triggered reading these words, you’re probably also sick of ChatGPT constantly talking to you as if you’re in some sort of crisis and need delicate handling. Now, things may be improving. OpenAI says its new model, GPT-5.3 Instant, will reduce the “cringe” and other “preachy disclaimers.”
According to the model’s release notes, the GPT-5.3 update focuses on the user experience, including tone, relevance, and conversational flow — areas that may not show up in benchmarks but can make ChatGPT feel frustrating, the company said.
Or, as OpenAI put it on X, “We heard your feedback loud and clear, and 5.3 Instant reduces the cringe.”
In the company’s example, it showed the same query with responses from the GPT-5.2 Instant model compared with the GPT-5.3 Instant model. In the former, the chatbot’s response starts, “First of all — you’re not broken,” a common phrase that’s been getting under everyone’s skin lately.
In the updated model, the chatbot instead acknowledges the difficulty of the situation, without trying to directly reassure the user.
The insufferable tone of ChatGPT’s 5.2 model has been annoying users to the point that some have even canceled their subscriptions, according to numerous posts on social media. (It was a huge point of discussion on the ChatGPT subreddit, for instance, before the Pentagon deal stole the focus.)
People complained that this type of language, where the bot talks to you as if it assumes you’re panicking or stressed when you were just seeking information, comes across as condescending.
Often, ChatGPT replied to users with reminders to breathe and other attempts at reassurance, even when the situation didn’t warrant it. This made users feel infantilized, in some cases, or as if the bot was making assumptions about the user’s mental state that just weren’t true.
As one Reddit user recently pointed out, “no one has ever calmed down in all the history of telling someone to calm down.”
It’s understandable that OpenAI would attempt to implement guardrails of some kind, especially as it faces multiple lawsuits accusing the chatbot of contributing to negative mental health effects, in some cases including suicide.
But there’s a delicate balance between responding with empathy and providing quick, factual answers. After all, Google never asks you about your feelings when you’re searching for information.