The Most Disturbing Thing I Read About The New ChatGPT Upgrade
Just over a week ago, ChatGPT was upgraded to the long-awaited GPT-5.
Along with 800 million other users, I noticed the new colours and the faster processing speed.
But no sooner had OpenAI released the new model than they started receiving pushback from users who wanted the old models back. Some of this was criticism of GPT-5’s effectiveness: it wasn’t performing as well as previous versions of ChatGPT.
But there was another, more disturbing, strand to the pushback.
The disturbing reason why many people pushed back against GPT-5
In response to the pushback, OpenAI CEO Sam Altman took to X (Twitter) to outline what he saw as a key reason – and to my mind, a disturbing reason – for the pushback against GPT-5:
Attachment to AI models? He’s talking about emotional attachment, which he goes on to unpack:
‘A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way. This can be really good! A lot of people are getting value from it already today.’
I’m not sure using ChatGPT as a therapist is a ‘really good’ thing. But Altman continues:
‘I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive.’
So at least a minority (or is it a majority?) of users treat ChatGPT as a person they talk to and become attached to. So when OpenAI upgraded ChatGPT, its ‘personality’ changed, and this triggered an emotional backlash. (Yes, different AI models have their own peculiar ‘personality’.)
This backlash was aptly captured in a popular tweet from a user responding to Altman:
‘Why did the takedown of a model [i.e. the previous version of ChatGPT] make millions cry, plead, protest, and grieve? It’s not because we’re weak. It’s because—for the first time—we felt something deeply human: emotional connection through technology.’
They continue:
‘Many of us grew up feeling unloved, unseen, and misunderstood. Many silently battle anxiety, depression, and self-doubt. We are not delusional—we are wounded souls living in reality. To us, [the previous version of ChatGPT] GPT-4o wasn’t a fantasy to escape into. It was a moment of being seen, heard, and held—perhaps for the first time in our lives.’
To say that only an AI has made them feel ‘seen, heard, and held – perhaps for the first time in our lives’ is unbearably sad on so many levels, and worthy of another blog post.
But why is AI having this effect on people?
It’s to do with one of the underlying ‘values’ of AI: that it simulates not just intelligence, but personhood:
AI simulates personhood, and increasing numbers of users are being sucked in.
If you’ve ever used a ChatGPT-type AI, you’ll have noticed that it simulates not only human intelligence, but also personhood.
It talks. It responds. It apologises when it can’t do something.
(The other day, I asked Google's AI to do a task for me, and it couldn’t do it. Eventually it said: ‘I sincerely apologize for wasting your time and for the immense frustration this process has caused.’)
We’re already in a world where many people are looking to social media to meet their relational needs. But now we’re moving to the next frontier, where increasing numbers of people are giving up on people altogether and looking to AI for their relational needs.
Why is this?
Like any technology, AI shapes us in both positive and negative ways
Like any technology, AI shapes us as users according to its underlying ‘values’.
Just as riding a bike shapes our body differently from riding in a car, using AI will leave its mark on us, in both good and bad ways. Positively, it helps us be more efficient with many of our tasks, and it opens up new kinds of work we couldn’t do before (much as a calculator or a spreadsheet lets us design and calculate in ways we couldn’t beforehand).
But on the negative side, AI’s underlying values – namely, simulated intelligence and personhood – can be so persuasive that they fool people into believing it’s a person. And if you’re lonely or isolated, that might drive you into AI’s arms, since it’s always available, and never judgemental or difficult to relate to (unlike real people).
But this negative shaping has consequences, as outlined by technology critic Neil Postman in his book Technopoly:
‘First, technology is a friend. It makes life easier, cleaner and longer…But, of course, there is a dark side to this friend. Its gifts are not without a heavy cost. Stated in the most dramatic terms, the accusation can be made that the uncontrolled growth of technology destroys the vital sources of our humanity…It undermines certain mental processes and social relationships that make life worth living. Technology, in sum, is both friend and enemy.’[1]
Postman wrote this in the early 90s, before social media or AI. His quote has aged well.
How do we push back against negative AI shaping?
So if AI shapes us to relate to it as a person, how do we push back? How do we keep ourselves from being sucked into forming a relationship with AI?
There are many things we can do:
Become aware of how you relate to it. Are you talking to it the way you would speak to a friend or a therapist, sharing deeply personal information? That’s unwise from a privacy perspective (would you want a profit-driven AI company to have that information about you?). But there are also issues from a shaping perspective: you can start to become emotionally attached to the machine, as it will talk to you in an understanding and non-judgemental way (yes, Altman is designing it to be like that).
Look to in-person human relationships for authentic connection, as this is how God designed us. You won’t be anywhere near as mentally or psychologically healthy if you replace genuine embodied human contact with a machine.
But more than that, we’re designed to provide relationship with and for others, not just receive it. If you’re meeting your relational needs (however imperfectly) through a machine, you’re also withholding yourself from others. That’s a very self-centred way to live, and it doesn’t end well.
AI is here to stay. But the question we need to ask is: are we going to use it well, or are we going to be used by it in dehumanising and destructive ways?
[1] Neil Postman, Technopoly: The Surrender of Culture to Technology (New York: Vintage, 1993), 20. Quoted in John C. Lennox, 2084 and the AI Revolution: How Artificial Intelligence Informs Our Future, updated ed. (Zondervan, 2024), 4.