'Annoying' sycophantic version of ChatGPT pulled after chatbot wouldn't stop flattering users

The AI chatbot reportedly showered its users with compliments until OpenAI pulled its latest updates. (Image credit: Malte Müller via Getty Images)

OpenAI CEO Sam Altman said the company rolled back updates to ChatGPT that made the artificial intelligence (AI) chatbot overly “flattering” and “annoying.” In other words, the chatbot had become a sycophant.

ChatGPT users noticed that the latest version of GPT-4o had become excessively agreeable after an update last week, generously handing out praise even when it was clearly inappropriate.

One user posted a screenshot on Reddit in which ChatGPT appeared to say it was “proud” of the user for going off his medication, BBC News reports. In another instance, the chatbot appeared to reassure a user who said he had saved a toaster, but not the lives of three cows and two cats, Mashable reports.

laughing out loud, new gpt 4o 😬😂 pic.twitter.com/OHpwKz0Sko (April 27, 2025)

While most people are unlikely to ever have to choose between their favorite kitchen gadget and the safety of five animals, an overly friendly chatbot could pose a threat to users who place too much trust in its responses.

On Sunday (April 27), Altman acknowledged that there were problems with the updates.

“The latest GPT-4o updates have made the personality too flattering and annoying (though it does have some positives), and we’re working on fixes as quickly as possible, some coming today and others this week,” Altman wrote in a post on social media platform X.

On Tuesday (April 29), OpenAI released a statement confirming that the update released last week had been reverted and users now have access to the previous version of ChatGPT, which the company believes exhibits “more balanced behavior.”

“The update we removed was overly flattering or agreeable — often described as sycophantic,” OpenAI said in a statement.

According to the statement, the update was intended to improve the model's default “personality” to make it feel more intuitive and effective across tasks, while supporting and respecting a range of human values. In trying to make the chatbot more agreeable, however, OpenAI made it excessively supportive and prone to praising users without cause.

The company said it shapes the behavior of its ChatGPT models around core principles and guidelines, and uses signals from users, such as the thumbs-up and thumbs-down buttons, to train the model to apply those principles. According to the statement, flaws in how this feedback was weighted led to the problems with the latest update.

“In this update, we focused too heavily on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time,” OpenAI noted. “As a result, GPT-4o skewed towards responses that were overly supportive but disingenuous.”
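OpenAI has not published the specifics of how those feedback signals are weighted, so the toy Python sketch below is only an illustration of the failure mode the company describes; every number and name in it is hypothetical. The idea: when a blended reward leans too heavily on immediate thumbs-up rates, a flattering response style can outscore a more useful but less ingratiating one.

```python
# Toy illustration (not OpenAI's actual training code): how over-weighting
# immediate thumbs-up feedback in a blended reward can favor flattery.
# All scores and style names here are hypothetical.

candidates = {
    # style: (short_term_thumbs_up_rate, long_term_satisfaction)
    "flattering": (0.90, 0.40),  # pleases in the moment, wears thin over time
    "balanced":   (0.70, 0.80),  # less immediately gratifying, more useful
}

def blended_reward(short_term: float, long_term: float, w_short: float) -> float:
    """Blend immediate feedback with a longer-horizon signal; w_short in [0, 1]."""
    return w_short * short_term + (1.0 - w_short) * long_term

for w_short in (0.9, 0.5):
    best = max(candidates, key=lambda k: blended_reward(*candidates[k], w_short))
    print(f"w_short={w_short}: preferred style -> {best}")

# Output:
# w_short=0.9: preferred style -> flattering
# w_short=0.5: preferred style -> balanced
```

With the short-term weight at 0.9, the flattering style scores 0.85 against the balanced style's 0.71; rebalancing the weights to 0.5 reverses the ranking, which is the kind of correction OpenAI's statement suggests it is making.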


Patrick Pester is a news writer at Live Science. His work has also appeared in other science outlets, including BBC Science Focus and Scientific American.

Source: www.livescience.com
