PARIS: A manager at artificial intelligence firm OpenAI caused consternation recently by writing that she just had “a quite emotional, personal conversation” with her firm’s viral chatbot ChatGPT.

“Never tried therapy before but this is probably it?” Lilian Weng posted on X, formerly Twitter, prompting a torrent of negative commentary accusing her of downplaying mental illness.

However, Weng’s take on her interaction with ChatGPT may be explained by a version of the placebo effect outlined this week by research in the Nature Machine Intelligence journal.

A team from Massachusetts Institute of Technology (MIT) and Arizona State University asked more than 300 participants to interact with mental health AI programmes and primed them on what to expect.

Some were told the chatbot was empathetic, others that it was manipulative and a third group that it was neutral.

Those who were told they were talking with a caring chatbot were far more likely than the other groups to see their chatbot therapists as trustworthy. “From this study, we see that to some extent the AI is the AI of the beholder,” said report co-author Pat Pataranutaporn.

Buzzy startups have been pushing AI apps offering therapy, companionship and other mental health support for years now — and it is big business. But the field remains a lightning rod for controversy.

=== ‘Weird, empty’ ===

As in every other sector that AI threatens to disrupt, critics worry that bots will eventually replace human workers rather than complement them. And in mental health, the concern is that bots are unlikely to do a great job.

“Therapy is for mental well-being and it’s hard work,” Cher Scarlett, an activist and programmer, wrote in response to Weng’s initial post on X.

“Vibing to yourself is fine and all but it’s not the same.”

Compounding the general fear over AI, some apps in the mental health space have a chequered recent history.

Users of Replika, a popular AI companion that is sometimes marketed as bringing mental health benefits, have long complained that the bot can be sex obsessed and abusive.

Separately, a US nonprofit called Koko ran an experiment in February, offering counselling to 4,000 clients using GPT-3, and found that automated responses simply did not work as therapy. “Simulated empathy feels weird, empty,” the firm’s co-founder, Rob Morris, wrote on X.

His findings were similar to those of the MIT/Arizona researchers, who said some participants likened the chatbot experience to “talking to a brick wall”. But Morris was later forced to defend himself after widespread criticism of his experiment, mostly because it was unclear whether his clients were aware of their participation.

=== ‘Lower expectations’ ===

David Shaw from Basel University, who was not involved in the MIT/Arizona study, told AFP the findings were not surprising. But he pointed out: “It seems none of the participants were actually told all chatbots bullshit.” That, he said, may be the most accurate primer of all.

Published in Dawn, October 9th, 2023
