OpenAI has begun rolling out age prediction across ChatGPT consumer plans: the system estimates whether an account likely belongs to someone under 18 and, when it does, automatically applies a teen experience. The classifier draws on behavioral and account-level signals, such as account age, time-of-day activity, longitudinal usage patterns, and any stated age, and defaults to the safer experience when its confidence is low. Accounts flagged as likely under 18 get tighter limits around categories like graphic violence, risky viral challenges, sexual, romantic, or violent role play, depictions of self-harm, and content promoting extreme beauty standards or unhealthy dieting.
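To make the default-to-safe behavior concrete, here is a minimal Python sketch under stated assumptions: the AgeSignals fields, the predict_under_18 stub, and the 0.2 threshold are all hypothetical, since OpenAI has not published its classifier or cutoffs. Only the two properties OpenAI does describe are encoded, falling back to the teen experience when confidence is low, and letting a completed age verification override the prediction.

```python
from dataclasses import dataclass
from typing import Optional

# A minimal sketch of the behavior described above. Every name, signal,
# and threshold here is an assumption for illustration; OpenAI has not
# published its model or its decision cutoffs.

@dataclass
class AgeSignals:
    account_age_days: int        # how long the account has existed
    stated_age: Optional[int]    # self-reported age, if provided
    late_night_share: float      # fraction of activity at atypical hours
    usage_pattern_score: float   # longitudinal behavior score in [0, 1]

def predict_under_18(s: AgeSignals) -> float:
    """Stand-in for the real classifier: returns P(account holder < 18)."""
    if s.stated_age is not None:          # a stated age dominates the estimate
        return 0.9 if s.stated_age < 18 else 0.1
    # Crude blend of the remaining signals (purely illustrative).
    youth_evidence = 0.6 * s.usage_pattern_score + 0.4 * s.late_night_share
    return min(max(youth_evidence, 0.0), 1.0)

def choose_experience(p_under_18: float, age_verified: bool = False) -> str:
    """Map the probability to an experience tier, defaulting to safe."""
    if age_verified:
        return "adult"       # a completed "Verify age" check overrides the model
    if p_under_18 <= 0.2:    # confidently 18+ (threshold assumed)
        return "adult"
    return "teen"            # likely under 18, or too uncertain to say

# An ambiguous account lands in the teen experience by default...
signals = AgeSignals(account_age_days=40, stated_age=None,
                     late_night_share=0.5, usage_pattern_score=0.5)
assert choose_experience(predict_under_18(signals)) == "teen"
# ...unless the user completes the selfie-based verification flow.
assert choose_experience(0.5, age_verified=True) == "adult"
```

The design point the sketch captures is the asymmetry: the adult experience requires high confidence, while everything else, including genuine uncertainty, resolves to the teen tier.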
This is how it looks if your ChatGPT account needs age verification or date of birth in the account settings pic.twitter.com/TxpbnLmbtK
— Tibor Blaho (@btibor91) January 21, 2026
Adults incorrectly placed into the under-18 experience can restore full access from ChatGPT's Settings by starting a “Verify age” flow, which runs a selfie-based check via Persona. OpenAI says the rollout is already underway, with EU availability planned in the coming weeks to accommodate regional requirements.
We’re rolling out age prediction on ChatGPT to help determine when an account likely belongs to someone under 18, so we can apply the right experience and safeguards for teens.
Adults who are incorrectly placed in the teen experience can confirm their age in Settings > Account.…
— OpenAI (@OpenAI) January 20, 2026
This work sits inside OpenAI’s broader teen-safety program, including parental controls that let parents and teens link accounts, set quiet hours, manage privacy and feature settings such as memory and training, and receive alerts in certain high-risk situations. Early user feedback is split between support for stronger guardrails and criticism centered on false positives, behavioral inference, and the data sensitivity of selfie verification.