New Data Reveals How and Why Teens Are Turning to AI Companions

In my latest book, I describe my early interactions with a rudimentary 1990s AI program called Dr. Sbaitso. This was back in the days of dial-up internet, so this “doctor” was far from a fluid and sophisticated conversation partner. The voice was robotic, it took forever to generate answers, and the program rarely delivered the kind of meaningful advice I was searching for. 

My friends and I approached the platform with equal parts playful humor (“Let’s see what we can make this robot say!”) and legitimate curiosity (“Do you think it might actually answer some of our questions?”). 

In contrast to today’s AI companions, Dr. Sbaitso was frustratingly unwilling to go deep with us. It responded to most of our queries with, “Why do you feel that way?” until we tired of the repetition and moved on with our lives. 

Why and how are teens using AI companions?

Today’s technology delivers a far different experience. AI companions are not real, but they can certainly feel that way. They are designed to mimic human relationships and deliver sophisticated, personalized, and frictionless interactions. We’ve all seen high-profile news stories about people falling in love with or forming deep emotional bonds with their AI companions—but are teens really using them? A new report called Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions from Common Sense Media makes it clear that they are.

Here are a few key findings from the report:

  • 72% of young people ages 13-17 have used AI companions at least once.
  • 52% of teens interact with these platforms at least several times a month.
  • 33% of teens have used AI companions for social interactions and relationships.
  • 1 in 3 users report feeling uncomfortable with something an AI companion said or did.
  • 1 in 3 users have chosen to discuss important or serious matters with AI companions instead of real people. Nearly 1 in 3 of those teens said that AI conversations are at least as satisfying as conversations with real friends.
  • 80% of users still say they spend more time with real friends than AI companions.

According to Dr. Jacqueline Nesi, a leading researcher on teens and tech use, it’s possible that these numbers are slightly inflated if teens interpreted the survey to include any chatbot. While the survey offered a clear definition of AI companions, confusion likely persists. Regardless, teen adoption of these technologies is accelerating. 

As for why teens are turning to AI companions, their reasons echo why Dr. Sbaitso drew me in decades ago. According to the report, teens cite:

  • Entertainment (30%)
  • Curiosity (28%)
  • Advice (18%)
  • Availability (17%)
  • Non-judgment (14%)
  • Ease of conversation compared to real people (9%)

Built for Autonomy, Searching for Connection: Adolescents and AI

None of this should surprise us. Adolescence is marked by a growing desire for autonomy, privacy, and identity exploration. It is a time of big insecurities, big questions, and big possibilities. Given that developmental context, it’s no surprise that adolescents might turn to AI companions to sort through their experiences in what feels like a private, affirming, and non-judgmental space. A teen quoted in a recent Hopelab study put it bluntly, “We use [generative] AI because we are lonely and also because real people are mean and judging and AI isn’t.” 

If the most popular AI companions were built with youth wellbeing at the center, these platforms could potentially offer helpful scaffolding for navigating personal and social challenges. Unfortunately, many are rolling out using the same playbook as social media platforms: they prioritize engagement, attention, and time online. For some teens, especially those already struggling, this can make it even harder to unhook and navigate the friction that comes with human relationships. Moreover, a recent Common Sense Media Risk Assessment found that AI companions fail to meet basic safety standards and can deliver sexually explicit, violent, and otherwise harmful content.

How should we respond? (Hint: Don’t panic)

There’s always a risk when we encounter new data about kids and tech. We might panic and bombard young people with restrictions and warnings. We might zero in on the most unsettling statistics and overlook the more reassuring ones. But let’s be clear: findings like these deserve our close attention. This report confirms that AI companions aren’t part of some far-off sci-fi future—they’re here, and teens are already using them.

What the numbers don’t tell us is how these tools are shaping young people’s mental health, relationships, and sense of self. These are critical questions, and we need more research to answer them. In the meantime, though, young people need our support, curiosity, and guidance. 

The good news is we already know a lot about what helps reduce risk and build resilience. Panic, lectures, and limits alone tend to backfire with adolescents. Instead, let’s remember what works.

  • Set clear, purposeful boundaries.
  • Pay attention to your teen.
  • Ask open, curious questions.
  • Create opportunities for reflection and skill-building.
  • Push for collective change that makes these technologies safer by design.

For more on these tips, and for ideas on how to respond in ways that build resilience and reduce risk, read our last blog post.

We’ve come a long way since Dr. Sbaitso. Given that today’s AI tools are more responsive, affirming, and always available, it’s no surprise that young people are experimenting with them and turning to them for advice. That is why it is so important not to repeat the mistakes we made with social media. Let’s make sure that adolescent wellbeing, not profit alone, drives innovation and design in this space. Most importantly, let’s keep the lines of communication open – not to deliver endless lectures and warnings, but to show teens that we are here to listen, too.