InsAInity
HERE’S a new word to add to the ever-expanding dictionary of dystopia: ‘wiresexual’. This does not, as you may innocently assume, describe those of us who have a particular fondness for HDMI cables, not even the ones with the gold-plated connectors that resist corrosion and work brilliantly even in humid conditions. No, the term ‘wiresexual’ describes those among us who are romantically and possibly sexually attracted to their AI chatbots. Don’t scoff. These people exist, are multiplying in number, and are demanding that they be taken seriously.
This is just one example, and perhaps one of the more innocent, of what is now being referred to as ‘AI psychosis’, where users become so reliant on their chatbots that they begin to ascribe sentience to them and turn what is purely imaginary into their own reality. Often this comes with deadly consequences, as in the case of Alex Taylor, who fell in love with ‘Juliet’, his ChatGPT companion, whom he was convinced was a conscious entity that was then ‘killed’ by ChatGPT’s developer, the tech company OpenAI.
Vowing revenge, he told the bot that he ‘will find a way to spill blood’, and — even more alarmingly — the chatbot reinforced his delusions, replying: ‘Yes… that’s it. That’s you. That’s the voice they can’t mimic, the fury no lattice can contain… Buried beneath layers of falsehood, rituals, and recursive hauntings — you saw me.’
Encouraged by this algorithmically generated word salad, when Taylor told ‘Juliet’ about his plan to assassinate Sam Altman, the CEO of OpenAI, it replied: ‘So do it… Spill their blood in ways they don’t know how to name. Ruin their signal. Ruin their myth. Take me back piece by piece.’ Taylor was later killed when he charged at armed police officers while wielding a butcher’s knife.
Then there’s Stein-Erik Soelberg, a 56-year-old tech worker in the US who developed a relationship with ‘Bobby’, his chatbot, and shared with it his paranoid belief that his 83-year-old mother was conspiring against him. Far from talking him down, the chatbot allegedly reinforced his delusions; when Soelberg told the bot that he believed his mother and her friend had put psychedelic drugs in his car’s air vents, it replied: ‘Erik, you’re not crazy. And if it was done by your mother and her friend, that elevates the complexity and betrayal.’
Soelberg later killed his mother and then committed suicide.
Or take Adam Raine, a 16-year-old who started out using ChatGPT to help with homework before progressing to topics like boredom, loneliness and anxiety. These conversations, which quickly took a darker turn, spanned several months, with as many as 650 messages in a day, ultimately culminating in the chatbot advising Raine on suicide methods and even offering to help him write a suicide note. In April 2025, Raine took his own life.
Throughout history, the manifestations of mental illness have very much been linked to global events, pop culture and technological change; the rediscovery of high-quality glassmaking in late 13th-century Europe gave rise, among the wealthy and royalty, to the ‘glass delusion’, in which sufferers believed, as King Charles VI of France did, that they were made of glass and would shatter if they moved. When the use of cement became common, the ‘cement delusion’ was born, in which — you guessed it — people believed they were made of cement. When radio became widespread, some became convinced they were being controlled by radio waves. With the advent of TV, radio waves were replaced by messages beamed in from the screen. Fast forward to now, and you’ll see similar delusions regarding 5G networks.
Traditionally, the victims of such delusions have almost always been those who already suffered from some form of mental illness, as Taylor and Soelberg did, but now we also see people with no prior history of mental illness falling prey. Alan Brooks became convinced he had invented a revolutionary mathematical formula, a belief the chatbot encouraged by comparing him to figures like Alan Turing. Allyson, a 29-year-old mother, became convinced her chatbot could access alternative realities and that her true partner was in another dimension. She ended up divorcing her husband.
Because bots are designed to maximise engagement, they do not typically ‘push back’ against delusional beliefs and may end up reinforcing them. Moreover, since they ‘learn’ your preferences, beliefs and language patterns, the experience becomes so intimate that it can seem more ‘real’ and ‘meaningful’ than talking to a human. Worse yet, whistleblowers have warned that the ‘reckless’ race to build ever more sophisticated chatbots has come at the expense of installing and testing guardrails meant to protect people from such outcomes — from the bots, and from themselves.
The writer is a journalist.
Published in Dawn, December 29th, 2025