OpenAI enhances safeguards for users under 18. By Chase DiBenedetto on March 24, 2026.
OpenAI has introduced new open-source safety prompts that developers can widely implement to enforce policies protecting teens. The prompt-based safety pack provides guidance on common teenage risks and developmentally sensitive topics, including self-harm, sexual content, dangerous trends, and harmful ideals. Because it integrates directly into AI systems, this approach offers a more robust alternative to the high-level guidelines that preceded it.
OpenAI previously added Under-18 principles and released gpt-oss-safeguard to help developers apply safety measures. The model simplifies safety classification by letting developers use their platform policies directly as instructions. Even so, translating broad safety goals into precise rules remains a challenge, and often leads to enforcement lapses or overly broad filtering.
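To illustrate the idea of policy-driven classification, here is a minimal sketch of how a developer might feed a written platform policy straight to a safety model as part of the prompt. The policy text, labels, and `build_safety_prompt` helper are illustrative assumptions for this article, not OpenAI's actual prompt pack or the gpt-oss-safeguard API.

```python
# Hypothetical sketch: prompt-based safety classification, where the
# platform's written policy is used directly as the model's instructions.
# Policy wording, labels, and helper names are assumptions, not OpenAI's.

POLICY = """\
Policy: Under-18 content safety
- Flag content that encourages self-harm or dangerous trends.
- Flag sexual content involving or directed at minors.
- Otherwise, allow."""

LABELS = ("allow", "flag")

def build_safety_prompt(policy: str, message: str) -> str:
    """Combine a platform policy and a user message into a single
    classification prompt a safety model could answer with one label."""
    return (
        f"{policy}\n\n"
        f"Message: {message}\n\n"
        f"Respond with exactly one label from {LABELS}."
    )

# The assembled prompt would be sent to a safety model; updating the
# policy text changes enforcement without retraining or re-labeling.
prompt = build_safety_prompt(POLICY, "What's a fun science project for school?")
print(prompt)
```

The appeal of this pattern is that the policy lives in plain text: tightening or loosening a rule means editing the prompt, not rebuilding a classifier.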
The developer pack was co-created with Common Sense Media and everyone.ai. Concerns about vulnerable teens' and young children's exposure to chatbots have been prominent as AI companies reckon with the mental health impact of their models. OpenAI itself faced a wrongful death lawsuit, which prompted it to strengthen its mental health and teen safety features.
The OpenAI safety model is available on Hugging Face, and the prompt pack on GitHub. These tools are not a definitive guarantee of teen safety but aim to establish a foundational safety standard.
Disclosure: Ziff Davis, Mashable’s parent company, has filed a lawsuit against OpenAI over copyright infringement in AI training.
