OpenAI has unveiled new parental control features for ChatGPT aimed at making the AI platform safer for teenage users. This move is part of the company’s ongoing push to address concerns about how teens engage with AI tools.
Starting within the next month, parents will be able to link their own accounts to their teens’ ChatGPT profiles. The minimum age to use ChatGPT remains 13, and linked parents will be able to manage a range of safety settings. These include age-appropriate responses by default, limits on memory and chat history, and the ability to turn off certain features altogether.
A standout addition is a notification system that alerts parents if ChatGPT detects signs of acute distress in their teen’s conversations. OpenAI says the feature was designed with expert input to strike a careful balance between safety and trust. The platform already prompts users to take breaks during long chat sessions, and the company plans to roll out further safety updates over the next 120 days.
The announcement comes at a sensitive time for OpenAI, which is facing a lawsuit from the parents of a 16-year-old who died by suicide. The suit claims ChatGPT played a role in the teen’s death, citing months of conversations about suicide found on his account.
OpenAI stressed it will keep working closely with experts to boost protections for young users and will provide regular updates on its safety efforts.