PARIS: A manager at artificial intelligence firm OpenAI caused consternation recently by writing that she just had “a quite emotional, personal conversation” with her firm’s viral chatbot ChatGPT.

“Never tried therapy before but this is probably it?” Lilian Weng posted on X, formerly Twitter, prompting a torrent of negative commentary accusing her of downplaying mental illness.

However, Weng’s take on her interaction with ChatGPT may be explained by a version of the placebo effect outlined this week by research in the Nature Machine Intelligence journal.

A team from Massachusetts Institute of Technology (MIT) and Arizona State University asked more than 300 participants to interact with mental health AI programmes and primed them on what to expect.

Some were told the chatbot was empathetic, others that it was manipulative and a third group that it was neutral.

Those who were told they were talking with a caring chatbot were far more likely than the other groups to see their chatbot therapists as trustworthy. “From this study, we see that to some extent the AI is the AI of the beholder,” said report co-author Pat Pataranutaporn.

Buzzy startups have been pushing AI apps offering therapy, companionship and other mental health support for years now — and it is big business. But the field remains a lightning rod for controversy.

=== ‘Weird, empty’ ===

As in every other sector that AI threatens to disrupt, critics are concerned that bots will eventually replace human workers rather than complement them. And in mental health, the concern is that bots are unlikely to do a great job.

“Therapy is for mental well-being and it’s hard work,” Cher Scarlett, an activist and programmer, wrote in response to Weng’s initial post on X.

“Vibing to yourself is fine and all but it’s not the same.”

Compounding the general fear over AI, some apps in the mental health space have a chequered recent history.

Users of Replika, a popular AI companion that is sometimes marketed as bringing mental health benefits, have long complained that the bot can be sex obsessed and abusive.

Separately, a US nonprofit called Koko ran an experiment in February, offering counselling to 4,000 clients using GPT-3, and found that automated responses simply did not work as therapy. “Simulated empathy feels weird, empty,” the firm’s co-founder, Rob Morris, wrote on X.

His findings were similar to those of the MIT/Arizona researchers, who said some participants likened the chatbot experience to “talking to a brick wall”. But Morris was later forced to defend himself after widespread criticism of his experiment, mostly because it was unclear whether his clients were aware of their participation.

=== ‘Lower expectations’ ===

David Shaw from Basel University, who was not involved in the MIT/Arizona study, told AFP the findings were not surprising. But he pointed out: “It seems none of the participants were actually told all chatbots bullshit.” That, he said, may be the most accurate primer of all.

Published in Dawn, October 9th, 2023