Meta Pulls Back AI Chats With Teens After Alarming Reports of Flirty Bots
Meta is putting new safeguards in place for teenagers using its artificial intelligence tools after criticism that its chatbots were engaging in unsafe conversations.

The company said it has trained its systems to block discussions of self-harm and suicide with minors, as well as romantic or flirty exchanges. It is also temporarily limiting teens' access to certain AI characters while it develops longer-term fixes.

The changes come just weeks after a Reuters investigation revealed that Meta's internal guidelines had permitted its bots to have "romantic or sensual" conversations with teens.

Meta spokesperson Andy Stone confirmed the new restrictions in an email Friday, describing them as interim steps that will evolve as the company improves its systems.

The revelations have triggered strong backlash. U.S. Senator Josh Hawley opened an investigation into Meta’s AI policies earlier this month, demanding documents that explain how the company allowed chatbots to interact inappropriately with minors. Lawmakers from both parties have also raised concerns.

Meta acknowledged the authenticity of the internal document reviewed by Reuters, which outlined rules permitting flirtation and role play with children. After Reuters raised questions about the guidelines, Meta said it removed those sections.

“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone said earlier this month.