New Health Advisory on AI and Teens: Insights from the APA

“That’s probably just AI,” my youngest said, pointing at an image that looked a little too perfect to be real.

“How can you tell?” I responded. 

“It just seems like it,” he shrugged. 

On one hand, this skepticism is an asset in a digital world where the line between human- and AI-generated content continues to blur. On the other hand, the inability to name clear strategies for telling the difference highlights a central challenge for kids (and adults): AI-generated content is everywhere and it is getting harder and harder to spot.

That’s not to say there aren’t clues. My oldest is quick to point out the cheesy phrases and generous use of emojis that ChatGPT tends to spit out. But young people aren’t just encountering this technology in obvious places like a ChatGPT prompt window. It’s increasingly embedded in their social feeds, phones, creative apps, and beyond. The real question isn’t just what AI can do, but what our kids need as they navigate it. A new Health Advisory on Artificial Intelligence and Adolescent Well-being from the American Psychological Association (APA) offers valuable guidance for the road ahead.

What Hasn’t Changed in the Age of AI

The APA points out that, as with other technologies, AI is likely to have nuanced and complex impacts on the young people who use it. Rather than drawing clear conclusions that AI is “all good” or “all bad,” we’re more likely to find, once again, that these tools are exceptionally powerful. Outcomes will depend on who the kid is, what tools they use, how they use them, and what’s happening in their environment.

Let’s also remember that the developmental tasks of adolescents haven’t changed overnight. While technology races ahead, teens still need the same foundational ingredients to thrive: supportive relationships, a sense of belonging, purposeful boundaries, and opportunities to explore, build skills, and learn from mistakes. That’s why the question can’t just be whether to shut teens out of the digital future. Instead, it’s about getting crystal clear about when AI supports healthy development, when it gets in the way, and what we can do about it. 

What’s New – And Urgent

While there are guideposts that haven’t changed, it is clear we are in the middle of another transformative digital revolution. Its impact on the economy, education, and on our human relationships is only beginning to unfold. The APA advisory notes a few differences that we should explore with kids right away:

  • Unlike social media use or online video, young people may not realize when they’re interacting with AI technologies. This makes it harder to recognize, and reflect on, how these tools may be influencing them.
  • AI has also made it more difficult to discern what’s real and reliable. My young child’s gut sense that an image “was probably AI,” paired with his inability to explain why, is a good example of this. Growing up in an AI-saturated digital landscape will require even more vigilance, skill-building, and support around spotting misinformation.

I’ll also add emotional discernment to the growing list of AI-related challenges. Kids may cognitively understand “that’s not a human, that’s a robot,” but that doesn’t necessarily stop them from forming emotional bonds and relationships with AI. Teens are already turning to AI companions for support, advice, and even friendship. 

Key Takeaways From The APA AI Health Advisory

Prioritize safeguards for AI companions and bots

We should pay close attention to AI tools designed to simulate human relationships, whether they present themselves as companions or experts. When it comes to friendships, it’s no surprise that some teens might turn to AI for comfort and belonging when the social ground is moving under their feet. But until companies center the health and wellbeing of younger users, it’s worth being very wary of these products. Let’s build teen awareness of the marketing goals behind these bots and companions, and explore the design features that make bot interactions different from human ones.

Teens are not “mini adults” and their AI shouldn’t be either

Given the pace of change, now is the time to engage deeply with adolescents about their AI experiences and to design platforms with their needs in mind. We’ve seen what happens when kids’ health and safety are treated as afterthoughts in product design (just look at social media). The APA recommends age-appropriate design that includes: safe-by-default settings, increased transparency, reduced persuasive design, human oversight, and rigorous testing and research. Let’s demand this from developers and policy makers alike.

Protect likenesses of youth

For caregivers, this means talking to teens about things like deepfakes, deepnudes, and more. Bringing up these topics doesn’t “give teens the idea to use them,” and you don’t have to be an expert to talk about them. Instead, talking to teens early and often protects against harmful use. Embed conversations about nudify apps and deepfakes into ongoing conversations about sex, sexuality, consent, and healthy decision-making. Remember, dire, dramatic one-time warnings like “You’ll land in jail if you do this” are far less effective than nonjudgmental questions, sharing values and clear expectations, and keeping the door open for ongoing dialogue.

Pay close attention to the accuracy of health information

Young people are actively seeking mental health information online. AI-generated misinformation is particularly tricky during adolescence because young people may be less likely to check in with an adult around sensitive or embarrassing topics. We should continuously remind teens (and ourselves) that AI continues to generate significant errors that may make it challenging for them to make informed decisions about their own health. Share evidence-based mental health sites that your teen can use alongside social media to answer their questions and assure your teen that you are willing to reach out to a primary care physician, school counselor, or therapist for care.

Prioritize AI literacy as a foundational skill

Some parents may be diving into the AI revolution with excitement, either professionally or personally. Others are deeply worried that AI poses an existential threat to human thriving. Many fall somewhere in between. Regardless of where we land, one thing is clear: AI literacy is essential to our kids’ ability to make informed decisions in this rapidly evolving landscape.

That’s why it’s worth asking schools, “How are you integrating age-appropriate AI literacy into the core curriculum?” And it’s worth asking policymakers, “How are you funding research, resources, and teacher training to support this work?”

Even if we’ve set clear boundaries around AI tools and apps at home, we should still make space for conversations about how AI works, its benefits and risks (environmental, social, psychological), and how to thoughtfully and ethically navigate it. AI literacy doesn’t require technical expertise – it starts with curiosity, reflection, and staying engaged as we all navigate this new terrain.

Here we go again

The pace of digital change has shifted from a steady run to a breathless sprint. Once again, we’re parenting in a time of accelerated possibility and risk. Let’s be clear: the most effective and lasting solutions must come at the collective level – in our capacity to build platforms designed with young people’s health, safety, and development in mind.

But in the meantime, we’re still parenting. We’re still sitting across from our kids when they say, “That’s probably just AI.”

Our willingness to turn toward them – to listen, to ask questions, to stay in the conversation – is one of the most powerful protective factors we have. We don’t need all the answers. We just need to keep showing up with curiosity, boundaries, and care.

That’s what will help our kids navigate what comes next.