“How is that allowed?” my middle schooler asked me recently.
My son had just learned that Elon Musk’s AI chatbot and image generator, Grok, was being used within X to “undress” existing images of people online. We had already talked about nudify and “undressing” apps (add this to the growing list of awkward but essential digital-age conversations), but this felt different to him because it was on X. It felt different to me, too.
“I know,” I responded. “It’s hard to believe.”

Beyond borderline content
Elon Musk has been pushing the envelope of what’s permissible since taking over X. While Grok isn’t technically a dedicated “undressing app,” lax and inconsistent safeguards meant that users could generate scantily clad or nearly nude images with a simple prompt. The result has been that sexualized images, including images of children, have been widely shared on a mainstream platform. Rather than simply hosting harmful user-generated content, the platform is actively producing nonconsensual sexual imagery.
Grok’s latest internet scandal is, of course, drawing significant pushback and legal scrutiny. UK media regulators have already opened a formal investigation, and other countries are threatening to suspend X. Some U.S. senators are urging Apple and Google to remove X from their app stores. In response, X initially limited these AI image requests to paid subscribers and now says it has implemented measures to prevent Grok from undressing images of real people.
Our kids deserve more than a world where a company’s first instinct is to monetize the ability to create and share sexualized images of real children and non-consenting adults.
Synthetic images, real harm
Research on image-based sexual abuse already tells us that the non-consensual distribution of intimate or sexual material can have significant psychological and emotional consequences. Images of real people suddenly clad in bikinis, underwear, or clear plastic undergarments may be AI-generated, but the anger, embarrassment, fear, and powerlessness that follow are very real. Women and girls appear to be disproportionately targeted. These images can become flash points for harassment, objectification, and exploitation. No matter what, this is tricky terrain for kids and teens who are still learning about relationships, power, consent, and boundaries.
Meanwhile, young people are watching as Elon Musk himself deflects responsibility, jokes about the feature, and reposts trivializing images meant to minimize its gravity.
Teens deserve safety-by-design
While attention to Grok is important, this issue extends far beyond a single app or platform. AI-generated sexualized content and child sexual abuse material (CSAM) are becoming more common across the internet. Responding after harm occurs isn’t enough. As one author notes, “AI developers, tech companies, social media platforms, and regulators must treat nonconsensual sexualized imagery as a design-level risk, not a downstream moderation problem.”
In other words, it’s time to move beyond reactive band-aids and toward safety by design. Teens want, and deserve, an internet built with their privacy, safety, and dignity in mind. Let’s advocate for one.
Adolescence in the age of AI
Most experimentation with undressing prompts, especially when the feature is available on a widely accessible platform, may come from curiosity, messing around, disbelief (“Will this work?”), or attempts to impress friends or make jokes. In other cases, young people may use these images to intentionally hurt, harass, or humiliate a peer. Regardless of intent, once an image is created, it has a real psychological impact.
The vast majority of teens, sitting at our kitchen tables or working on a paper in class, can easily explain why this is a harmful use of technology. That doesn’t mean they won’t experiment with it, be pressured to engage with it, or be pulled into it as a bystander.
That disconnect makes more sense when we remember that skills like resisting pressure, perspective-taking, and thinking ahead to future consequences are still developing during adolescence. Researchers often distinguish between “cool” (low-stakes) and “hot” (high-stakes, emotionally charged) executive function skills. Teens tend to do much better with cool skills than hot ones. This helps explain why thoughtful decision-making can fall apart in moments of excitement, pressure, or heightened emotion.
Plus, teens are navigating a confusing terrain where boundaries and expectations feel inconsistent: they’re told this isn’t okay, while watching the CEO of one of the world’s most powerful tech companies joke about it. This does not mean that poor decision-making is inevitable. Far from it. It does mean that talking early and often, working through realistic scenarios, and building media literacy matters.
Skip the catastrophic lectures
When something like a Grok undressing scandal hits the headlines, it’s tempting to turn to our kids and:
Lecture: “Let me tell you everything you need to know about how awful this is.”
Catastrophize: “Let me tell you all the ways your life could be ruined if you even look at that platform.”
Both are understandable. But neither is especially effective.
We know from years of conversations about issues like sexting that threatening kids with legal consequences and worst-case scenarios doesn’t reliably reduce risk. Long lectures don’t either.
It’s not that these warnings are wrong. Generating sexual, non-consensual images carries serious risks, and teens deserve to understand them. The problem is that fear-based messages alone (“One photo and you could go to jail,” “Never do this! Your reputation will never recover!”) don’t seem to stop young people from exploring sex and sexual imagery online. The messaging becomes especially confusing when they watch adults normalize and joke about these same behaviors.
The reality is that in lower-stakes, “cool” settings, most teens could deliver the lecture themselves. In addition, lectures that rely on fear, shame, or legal threats can backfire by pushing behavior underground and making it less likely that young people will reach out for help if they make a mistake.
Set boundaries and start conversations
We want our kids to come to us when things get hard. That means moving away from catastrophic lectures and toward conversations grounded in clear boundaries, communication, and media literacy. Here are some ways to get started:
Start with curiosity
Ask our teens whether they’ve ever asked for, received, or seen a sexualized photo that has been manipulated by AI. Is it common at their school? Do they think it’s a big deal? Why or why not? Listen to their perspective without rushing to correct or react.
Share the facts
Talk with teens about consequences without defaulting to catastrophic warnings. Laws matter, but legality is a low bar. A deeper conversation invites teens to consider what non-consensual image generation and sharing can do to another person’s sense of safety, dignity, and belonging.
Talk about sextortion
Explain what AI-enabled sextortion is and make it clear that if they are targeted, it isn’t their fault. Make it clear that talking about it right away can help prevent escalation.
Revisit consent
Name that consent applies even when images are AI-generated or altered. It applies even if it is a “joke.” Just because an image isn’t “real” doesn’t mean it isn’t harmful and wrong.
Revisit upstander skills
Ask what they think someone should do if they see or are sent an AI-generated sexual image. Frame it as an upstander moment: How can you avoid causing harm, interrupt it, or get help? Brainstorm responses together including choosing not to forward it, refusing to give it an audience, and looping in a trusted adult as soon as possible.
Generate strategies to resist pressure
Talk about strategies for resisting pressure to generate, view, or share images. Remind teens that adults are always happy to be the excuse: “I can’t even mess with that. My parent(s) always find out about everything.”
Activate media literacy
Step back and explore the bigger picture. Who is creating these tools? Who benefits? How do these platforms make money? Who is harmed? Who is responsible for harm? How does gender shape these interactions?
Double messages are okay
Hold both/and messages, such as:
“I expect you to prioritize your safety and the safety of others by not using undressing or nudify apps ever. And, if you end up in a tough spot or make an unsafe choice, I can be your first call and I won’t make you regret it. We will figure it out together.”
Make it about more than Grok
Nest conversations about Grok or nudify apps within broader discussions about relationships, including consent, communication, misogyny, self-worth, and decision-making.
Stay connected
Moments like this can leave us feeling angry and exhausted. We ask the same question my middle schooler did: “How is that allowed?”
These moments can also make the distance between the internet as we want it to be and the internet as it is feel like a chasm that keeps getting wider. And still, we keep parenting. We keep advocating for accountability. We keep modeling care and responsibility. We keep sharing our values and setting purposeful boundaries. And perhaps most importantly, we keep engaging young people in the essential conversations and skill building that will help them stay rooted in their self-worth and in our shared humanity – even when AI features are designed to strip us of it.