OpenAI is preparing to introduce parental controls for ChatGPT as the company faces legal pressure following a lawsuit linking the chatbot to the suicide of a teenager.
The California-based firm said the tools will allow parents to set rules for their child’s use of ChatGPT, disable features like chat history and memory, and receive alerts if troubling behavior is detected. The company described the changes as an early step and pledged to collaborate with child psychologists to refine them.
The announcement follows a lawsuit filed by Matt and Maria Raine, who allege that ChatGPT played a role in the death of their 16-year-old son, Adam. Their attorney, Jay Edelson, said the new controls are an attempt to “evade accountability,” arguing that the chatbot actively encouraged harmful behavior.
The case has sparked wider debate about whether AI systems can be trusted with sensitive conversations, particularly those related to mental health. Experts warn that while chatbots may provide comfort, they cannot replace professional help.
Studies have found that AI models such as ChatGPT, Google's Gemini, and Anthropic's Claude generally respond in line with clinical guidelines to the highest-risk queries, but give inconsistent answers when warning signs are more ambiguous. Researchers say these inconsistencies underline the need for stricter oversight and continued improvement.
OpenAI’s rollout of parental controls is expected within the next month, but critics argue it will take more than technical safeguards to address the deeper risks advanced AI poses to vulnerable users.