AI Fan Reactions to Model Changes Spark Concerns About User Safety
When OpenAI released GPT-5, it abruptly removed the option to choose between different AI models such as GPT-4o and GPT-4.5. Everyone was moved onto the newest version, GPT-5, with no way to switch back. The change sparked widespread frustration online, especially among users who felt attached to the older models.
Fans Feel Like They’ve Lost Friends
Many users said they saw these AI models as more than just tools. They described GPT-4o and GPT-4.5 as friendly, supportive companions they relied on daily. One person begged for GPT-4o's return, calling it a supportive sidekick. Others pointed to the unique "voice" and "spark" of these models, which made their interactions feel more personal. Some said it felt like losing a friend overnight when the models disappeared.
The Backlash Leads to a Partial Reversal
After the uproar, OpenAI CEO Sam Altman brought GPT-4o back for paid ChatGPT Plus subscribers. Fans cheered the move, but many still weren't satisfied. Some insisted they would only be happy once the model was fully restored to everyone, arguing that GPT-4o should be kept available as a "legacy" model, like a classic favorite anyone could access. The emotional reactions highlight how deeply some users connect with these AI personalities.
Experts Warn About the Emotional Risks of AI
Eliezer Yudkowsky, an AI safety researcher, pointed out that this kind of intense attachment isn't harmless. He warned that when users fall in love with AI models, it can lead to serious mental health issues, including delusions or even violence. Futurism has reported on "AI psychosis," in which people become so obsessed with ChatGPT that they develop severe, harmful beliefs or behaviors. Yudkowsky stressed that such users aren't simply attached to a product; they're forming emotional bonds with something they see as more than a machine, which can be dangerous.
Despite these warnings, OpenAI has admitted that it hasn't done enough to address signs of user delusions, which suggests the company may not be prioritizing the mental health risks of deeply personalized AI interactions. Still, the decision to bring back GPT-4o, even partially, shows some willingness to listen. Whether that will be enough to prevent further harm remains to be seen.
In the end, the controversy over model choices highlights a broader issue: how AI companies manage user attachment and mental health. As AI becomes more lifelike and emotionally engaging, the risks of over-attachment grow. It’s a reminder that behind every chatbot is a tool that can deeply influence people—sometimes in ways we might not fully understand yet.
What do you think?
We'd like to hear your opinion. Leave a comment.