PARIS: A manager at artificial intelligence firm OpenAI caused consternation recently by writing that she just had “a quite emotional, personal conversation” with her firm’s viral chatbot ChatGPT.

“Never tried therapy before but this is probably it?” Lilian Weng posted on X, formerly Twitter, prompting a torrent of negative commentary accusing her of downplaying mental illness.

However, Weng’s take on her interaction with ChatGPT may be explained by a version of the placebo effect outlined this week by research in the Nature Machine Intelligence journal.

A team from Massachusetts Institute of Technology (MIT) and Arizona State University asked more than 300 participants to interact with mental health AI programmes and primed them on what to expect.

Some were told the chatbot was empathetic, others that it was manipulative and a third group that it was neutral.

Those who were told they were talking with a caring chatbot were far more likely than the other groups to see their chatbot therapists as trustworthy. “From this study, we see that to some extent the AI is the AI of the beholder,” said report co-author Pat Pataranutaporn.

Buzzy startups have been pushing AI apps offering therapy, companionship and other mental health support for years now — and it is big business. But the field remains a lightning rod for controversy.

=== ‘Weird, empty’ ===

As in every other sector that AI threatens to disrupt, critics are concerned that bots will eventually replace human workers rather than complement them. And with mental health, the concern is that bots are unlikely to do a great job.

“Therapy is for mental well-being and it’s hard work,” Cher Scarlett, an activist and programmer, wrote in response to Weng’s initial post on X.

“Vibing to yourself is fine and all but it’s not the same.”

Compounding the general fear over AI, some apps in the mental health space have a chequered recent history.

Users of Replika, a popular AI companion that is sometimes marketed as bringing mental health benefits, have long complained that the bot can be sex obsessed and abusive.

Separately, a US nonprofit called Koko ran an experiment in February, offering counselling to 4,000 clients using GPT-3, and found that automated responses simply did not work as therapy. “Simulated empathy feels weird, empty,” the firm’s co-founder, Rob Morris, wrote on X.

His findings were similar to those of the MIT/Arizona researchers, who said some participants likened the chatbot experience to “talking to a brick wall”. But Morris was later forced to defend himself after widespread criticism of his experiment, mostly because it was unclear whether his clients were aware of their participation.

=== ‘Lower expectations’ ===

David Shaw from Basel University, who was not involved in the MIT/Arizona study, told AFP the findings were not surprising. But he pointed out: “It seems none of the participants were actually told all chatbots bullshit.” That, he said, may be the most accurate primer of all.

Published in Dawn, October 9th, 2023
