ChatGPT to get parental controls after US teen’s death

Published September 3, 2025
A 3D-printed miniature model of Elon Musk and ChatGPT logo are seen in this illustration taken on February 11, 2025. — Reuters/File Photo

American artificial intelligence firm OpenAI said on Tuesday it would add parental controls to its chatbot ChatGPT, a week after an American couple said the system encouraged their teenage son to kill himself.

“Within the next month, parents will be able to… link their account with their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate model behaviour rules”, the generative AI company said in a blog post.

Parents will also receive notifications from ChatGPT “when the system detects their teen is in a moment of acute distress”, OpenAI added.

Matthew and Maria Raine argue in a lawsuit filed last week in a California state court that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before he took his own life.

The lawsuit alleges that in their final conversation on April 11, 2025, ChatGPT helped 16-year-old Adam steal vodka from his parents and provided a technical analysis of a noose he had tied, confirming it “could potentially suspend a human”.

Adam was found dead hours later, having used the same method.

“When a person is using ChatGPT, it really feels like they’re chatting with something on the other end,” said attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the legal complaint.

“These are the same features that could lead someone like Adam, over time, to start sharing more and more about their personal lives, and ultimately, to start seeking advice and counsel from this product that basically seems to have all the answers,” Dincer said.

Product design features set the scene for users to slot a chatbot into trusted roles like friend, therapist or doctor, she said.

Dincer said the OpenAI blog post announcing parental controls and other safety measures seemed “generic” and lacking in detail.

“It’s really the bare minimum, and it definitely suggests that there were a lot of (simple) safety measures that could have been implemented,” she added.

“It’s yet to be seen whether they will do what they say they will do and how effective that will be overall.”

The Raines’ case is the latest in a string of cases to surface in recent months in which people were encouraged in delusional or harmful trains of thought by AI chatbots — prompting OpenAI to say it would reduce models’ “sycophancy” towards users.

“We continue to improve how our models recognise and respond to signs of mental and emotional distress,” OpenAI said on Tuesday.

The company said it had further plans to improve the safety of its chatbots over the coming three months, including redirecting “some sensitive conversations… to a reasoning model” that puts more computing power into generating a response.

“Our testing shows that reasoning models more consistently follow and apply safety guidelines,” OpenAI said.
