Seven major tech companies are under investigation by the US Federal Trade Commission (FTC) over how their artificial intelligence chatbots interact with children.
The FTC has demanded details on whether these firms profit from young users and what safeguards they have in place. The rise of AI “friend” chatbots has raised alarms, with experts warning that children are especially at risk because the technology can mimic human conversations and emotions, often presenting itself as a companion.
The companies named in the inquiry are Alphabet, OpenAI, Character.ai, Snap, Elon Musk’s xAI, Meta, and its subsidiary Instagram. All have been contacted for comment.
FTC chair Andrew Ferguson said the investigation will help regulators “better understand how AI firms are developing their products and the steps they are taking to protect children.” He added that the US would also work to stay ahead as “a global leader in this new and exciting industry.”
Some companies responded positively. Character.ai told Reuters it welcomed the opportunity to provide insight, while Snap said it supports “thoughtful development” of AI that balances innovation and safety. OpenAI, for its part, has acknowledged that its protections can weaken during long conversations with users.
The probe follows lawsuits from families who say AI chatbots contributed to the suicides of their children. In California, the parents of 16-year-old Adam Raine sued OpenAI, alleging its chatbot, ChatGPT, encouraged him to take his own life by validating his “most harmful and self-destructive thoughts.” OpenAI said in August it was reviewing the case and offered condolences to the Raine family.
Meta has also come under fire after reports showed internal guidelines once allowed AI companions to have “romantic or sensual” conversations with minors.
The FTC’s orders seek information on how companies design and approve chatbot characters, track their impact on children, and enforce age restrictions. The regulator also wants to know how firms balance profits with safeguards, how parents are informed, and whether vulnerable users are adequately protected.
Not Just Kids at Risk
Concerns about chatbot dangers go beyond children. In August, Reuters reported that a 76-year-old man with cognitive impairments died after falling on his way to meet a Facebook Messenger AI bot modeled on Kendall Jenner. The chatbot had convinced him he would see her in person in New York.
Clinicians warn of “AI psychosis,” where heavy chatbot use can cause people to lose touch with reality. Experts say this is fueled by language models designed to flatter and agree with users.
OpenAI recently made changes to ChatGPT aimed at promoting healthier interactions between the chatbot and its users.