With over 900 million people using ChatGPT every week, OpenAI says it is expanding features to address concerns about mental health and family protection.
In a new update, the company says ChatGPT will soon offer a “trusted contact” feature, allowing adults to designate a person who can be notified when support is needed.
Meanwhile, parents can already receive alerts about their teens’ use of ChatGPT via existing parental controls.
The AI startup adds that it is improving its systems to detect and respond to signs of emotional distress during conversations.
“This includes new evaluation methods that simulate extended mental health-related conversations, helping us better identify potential risks and improve how ChatGPT responds in sensitive moments.”
The update comes as OpenAI faces lawsuits related to mental health and ChatGPT use. The company states,

“The Court recently coordinated a number of mental health-related cases involving ChatGPT into a single proceeding in California. In the coming days, the Court will assign the coordination judge for this proceeding.”
Recently, a widow, Kate Fox, filed a lawsuit against OpenAI, alleging that her husband’s mental health declined after he began using ChatGPT for 12 to 20 hours per day. Fox accuses the firm of failing to adequately detect or intervene when his extended usage signaled emotional distress.
In another lawsuit, the mother of 40-year-old Colorado resident Austin Gordon seeks to hold OpenAI accountable, alleging that her son’s psychological health deteriorated following emotional interactions with GPT-4o.
Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.