“I am just so tired. It feels like a full-time job,” a parent shared with me recently, describing the constant work of managing parental controls and online risks.
“Yeah,” I replied. “You aren’t alone. Most of these platforms aren’t designed with our kids’ wellbeing in mind, and we’ve largely outsourced responsibility to parents and kids. You have every right to feel exhausted.”
I have some version of this conversation in nearly every community I visit. Caregivers acknowledge that online spaces can offer real benefits for their kids, but are worn down by working against business models that prioritize time and engagement over wellbeing and safety.

From Parental Controls to Policy Solutions
Parents aren’t the only ones feeling this strain. Educators and policymakers are also confronting growing concerns about how tech is shaping children’s learning and mental health. Against this backdrop, Australia is embarking on a major experiment: restricting social media accounts for young people under the age of sixteen.
There are as many strong opinions about this policy as there are people who care about kids. What everyone shares is a genuine concern about harmful content, exploitation, and problematic design. Where opinions diverge is in how best to respond.
This isn’t an abstract debate. Schools are making day-to-day decisions about devices in classrooms, families are trying to navigate risks and opportunities with limited support and uneven tools, and scientists are continuing to uncover the complexity of the relationship between technology use and youth outcomes. The evidence base is far from completely settled.
On the ground, a majority of Australian parents support age restrictions, though many express limited confidence in the government’s ability to enforce them. The young people most directly affected are, not surprisingly, much more ambivalent. Here in the United States, many applaud Australia’s decisive action, while others worry that the policy is too blunt an instrument and could produce unintended consequences for child rights and safety.
The Case for Age Restrictions
Let’s consider the arguments for age restrictions. Raising the minimum age for creating accounts on popular social media platforms may help create and reinforce a broader social norm to delay access beyond the sensitive window of early adolescence. While some youth will undoubtedly figure out how to get around the restrictions, this population-level approach assumes that age limits could still meaningfully reduce many young people’s exposure to intensive data collection and recommendation algorithms.
The ban may also ease social pressure among younger adolescents to join platforms like TikTok or Snapchat. This would give both caregivers and young people the cover of a clear policy and potentially alleviate the collective action challenge of delaying use. Despite the inevitable challenges of implementation, supporters emphasize the importance of bold action rather than waiting for more perfect solutions.
The Limits of Simple Solutions
When we are worried about kids, it’s understandable to hope for simple solutions. Attempting to keep children out of these spaces is, in many ways, a more straightforward pitch than changing the spaces themselves.
Yet many child-safety advocates in Australia and beyond urge caution, noting that simple solutions are rarely simple in practice. Under this policy, young users can still access online content without logging in. Restrictions on major platforms may also push young people (and bad actors) toward spaces excluded from the ban, such as Roblox or Discord, or onto smaller, less regulated platforms.
Others worry that years of pressure on tech companies to invest in child safety, content moderation, and age-appropriate design could lose momentum. If platforms can argue that children are no longer their responsibility, the incentive to design for their safety may diminish. In a recent statement, UNICEF warned that
“age restrictions must be part of a broader approach that protects children from harm, respects their rights to privacy and participation, and avoids pushing them into unregulated, less safe spaces… Laws introducing age restrictions are not an alternative to companies improving platform design and content moderation.”
Young people themselves are quick to point out the likelihood of workarounds. Many express frustration that the government would rather push them out, or limit them to passively viewing content, than force companies to design for healthier engagement. Adults should take seriously the ways in which young people have relied on social media for social connections and support.
What Comes Next
I haven’t met a parent who isn’t eager for stronger structural support around technology and digital wellbeing. While the U.S. hasn’t passed meaningful federal legislation, Australia isn’t the only country taking action. The UK’s Age Appropriate Design Code has already had real impacts on youth data privacy and safety.
So let’s watch the effects of Australia’s policy closely. Not as a final answer to online safety, but as the real-world experiment it is. Let’s remain open to both expected and unexpected benefits, and willing to acknowledge and respond to unintended consequences. Fortunately, Australia’s eSafety Commissioner has already appointed a group of researchers to assess the policy’s impacts.
Invest in Belonging, Not Just Barriers
No matter what happens in Australia, one thing is clear: If we want young people to have healthy online experiences at 13, 16, or 25, we can’t simply restrict access until they reach a certain birthday and hope for the best. We have to couple any legislation with a willingness to walk alongside them, helping them build the digital agency and media literacy skills they’ll need in a world that is rapidly expanding and changing. Let’s not accept the online world as it is. Let’s keep imagining the way that it could be, invest in tech accountability, and co-create platforms with youth that allow them to more safely connect and contribute.
Any technology policy must also grapple with the world young people are actually growing up in. We’ve created communities, schools, and social lives that are deeply device-dependent. Assuming that young people will automatically “go outside and climb trees” if access to a subset of platforms is restricted is unrealistic. Let’s also invest in accessible green space, libraries, connected learning, engaging after-school programs, and other positive youth development opportunities.
Connected and Courageous
The teenage brain isn’t just sensitive to harm; it’s also built for agency, purpose, and discovery. Let’s not lose sight of that as we craft policy and continue to turn toward our own kids – yes, with purposeful boundaries, but also with curiosity and the opportunities for participation and skill building that help grow connected and courageous humans online and offline.