ChatGPT’s New Parental Controls: What Parents Need to Know

Whenever I hear about a new set of parental controls, two feelings rise up immediately: relief and resentment.

Relief that the industry is responding to parental concerns and acknowledging that young people aren’t mini-adults. Settings that help parents protect their kids from harm are not “nice to have” in 2025; they are essential. Resentment because we can do better than this. We have a long history of outsourcing child and adolescent online safety to individuals, either through habit change or parental controls. Rather than making platforms safer by design, tech companies leave parents to wade through a complicated and often confusing maze of tools across apps. Plus, robust settings on one platform don’t protect kids on the next one.

That was my reaction again last week when OpenAI announced a new suite of parental controls for ChatGPT. It’s absolutely the right move to roll out these tools. It also isn’t enough to meet this moment. 

AI’s rapid rollout leaves youth safety and rights lagging behind

Parents and kids alike are ambivalent about the risks and possibilities of genAI. For example, a survey by the Family Online Safety Institute found that over half of US parents feel positive about their teens’ use of genAI. At the same time, many are worried about cheating, harmful content, and emotional overinvestment. Young people themselves are excited about using AI for information, creativity, non-judgmental advice, and brainstorming. They are also concerned about privacy, misinformation, and misuse.

What both groups share is a lack of trust that tech companies will make ethical, responsible design decisions for kids. 

AI is rolling out so quickly and with such enthusiasm across sectors that it is generating strong feelings from all sides. Depending on whom you talk to, generative AI is poised either to unleash extraordinary benefits or to threaten the very foundations of society. But whatever the future holds, today’s headlines are filled with unsettling reminders that the top commercial AI platforms are not being built with child and adolescent safety in mind.

Common Sense Media has already issued multiple AI risk assessments warning that many commercial AI systems pose “high” or even “unacceptable” risks to teen safety. An internal Meta policy document recently revealed that the company permitted its chatbot to engage in sexual conversations with children. Meta now says it has changed this rule. This episode underscores a central recommendation from the American Psychological Association (APA): youth safety must be prioritized early in the development of AI products.

Parental controls for ChatGPT: helpful but not enough

OpenAI is responding to this landscape with a new suite of parental controls. Soon, the platform will allow parents to:

  • Link their account with their teen’s account. 
  • Set how ChatGPT responds to their teen, with age-appropriate rules applied by default.
  • Manage which features to disable, including memory and chat history.
  • Receive notifications when ChatGPT detects that a teen is in a moment of acute distress. 

These are welcome steps toward making young people’s experiences on ChatGPT safer. But let’s not expect that these tools alone will provide the safe and healthy AI experiences that all kids deserve across platforms. A few things to keep in mind about parental controls:

  • They can be confusing (and tiring). A recent survey by the Family Online Safety Institute found that almost half of parents aren’t using existing parental controls across various types of digital media. The survey didn’t ask why these tools are underutilized, but we’ve heard from parents that the variability across platforms is confusing and exhausting. One parent told me recently, “I feel like it’s a full-time job.”
  • They aren’t a standalone solution. We don’t have evidence that simply switching on parental controls and hoping for the best reduces risk online. Parental controls play a more helpful role when they are used alongside family media agreements and, most importantly, open and curious conversations about young people’s digital experiences.
  • We’ve been here before. Tech companies have consistently shifted responsibility for safety back to users and parents. While parental controls are essential tools, our experience with social media offers a cautionary lesson about relying on big tech companies to self-regulate. Ideally, all AI developers would be required to make platforms safer by design. For example, the APA suggests that developers reduce the use of persuasive design, prevent exposure to harmful content, and offer regular reminders that the user is interacting with a non-human technology, among other recommendations.

This is a Yes/And moment.

Yes. If your teen is using ChatGPT (and chances are they’ve at least tried it), learn more about these new parental controls and consider using them right away.

And. Don’t assume these settings alone will reduce risk. Real risk reduction will continue to hinge on purposeful boundaries, open communication, and skill building. More importantly, it depends on developing AI technologies that are ethical and developmentally appropriate from the start.