OpenAI has begun deploying an age prediction model designed to assess whether ChatGPT users are old enough to access sensitive or potentially harmful content. The move comes amid growing scrutiny of artificial intelligence chatbots following reports linking such systems to cases of self-harm, which have triggered litigation and a United States congressional hearing. These developments have intensified pressure on AI companies to demonstrate that user safety, particularly for minors, is embedded in their platforms rather than treated as a secondary concern.
In response, OpenAI has introduced a series of policy frameworks aimed at protecting younger users. These include the Teen Safety Blueprint, released in November 2025, and the Under-18 Principles for Model Behavior, unveiled the following month. At the same time, the company faces commercial pressure to achieve profitability, including plans to introduce advertising that must comply with strict rules governing marketing to minors. The reported inclusion of erotic content in future iterations of ChatGPT further underscores the need for robust mechanisms to segment audiences and prevent underage users from being exposed to inappropriate material.

The age prediction system is intended to allow ChatGPT to automatically tailor experiences according to a user’s estimated age, particularly in cases where parental guidance is absent. This approach reflects the reality that a substantial number of young people already engage with generative AI tools. During a Senate subcommittee hearing on September 16, 2025, titled “Examining the Harm of AI Chatbots,” Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association, submitted testimony stating that more than half of adolescents in the United States aged 13 and older now use generative artificial intelligence. This widespread adoption has heightened calls for safeguards that balance innovation with the psychological and developmental needs of younger users.
