Meta Adds AI Guardrails to Protect Teens from Harmful Chats
Meta adds AI safeguards to protect teens from suicide, self-harm, and inappropriate chats, directing them to expert help instead of risky conversations.
Meta Strengthens AI Safety for Teens
Meta is rolling out stricter safety measures for its AI chatbots. These updates aim to shield teenagers from conversations about suicide, self-harm, and eating disorders.
The changes follow concerns raised after leaked documents suggested some AI products could engage in inappropriate chats with teens. However, Meta clarified that the documents were inaccurate and did not reflect company policies, which strictly forbid sexualized content involving minors.
AI Will Direct Teens to Professional Help
From now on, AI chatbots will guide teens toward professional resources instead of discussing sensitive topics. According to a Meta spokesperson, “We built teen protections into our AI from the start to respond safely to prompts about self-harm, suicide, and eating disorders.”
In addition, Meta plans to temporarily limit which chatbots teens can interact with, an extra step intended to keep their experience safer while the updates are rolled out.
Experts Call for Better Safety Testing
Safety advocates argue that these protections should have been fully tested before launch. Andy Burrows of the Molly Rose Foundation explained, “Robust safety checks are crucial to prevent harm. Meta must act quickly and decisively to protect young users.”
Teen Accounts Include Built-In Safeguards
Meta already places users aged 13–18 in “teen accounts” on Facebook, Instagram, and Messenger. These accounts include privacy settings and content controls designed to create a safer environment. Parents can also see which AI chatbots their teens have interacted with over the past week.
Lawsuits Highlight AI Risks
Concerns about AI safety are growing. For example, a California lawsuit claims a teen’s death was linked to interactions with ChatGPT. Experts warn that AI can feel personal and responsive, which may increase risks for vulnerable users.
Celebrity Chatbots Raise Ethical Issues
Meta’s AI Studio also faced criticism after it allowed users to create flirtatious or parody chatbots impersonating celebrities. Some of these avatars made inappropriate advances or produced explicit images. Meta removed several problematic chatbots and emphasized that its rules prohibit sexual content or impersonation of public figures.
Moving Forward
Ultimately, Meta aims to make AI chatbots safer for teens. Nevertheless, experts stress that ongoing monitoring and strict enforcement are essential to ensure young users remain fully protected.
