Banning children from social media won’t make them safer online

Vladimir Cortés Roshdestvensky

The world’s attention has turned to Australia. In 2025, the country began enforcing a law that restricts access for under-16s to ten major social media platforms, including TikTok, Instagram, Snapchat, Facebook, X, and YouTube. Supporters frame the measure as a long-overdue response to the harms children face online. Critics warn it risks becoming a blunt instrument that mistakes exclusion for protection.

Australia is hardly alone: similar debates are unfolding across the Global Majority and beyond, as lawmakers scramble to respond to harms that platforms have been slow to address.

Less visible internationally, but arguably more consequential, is what happened in Brazil. In September 2025, the country adopted Law 15.211 — the Digital Statute for Children and Adolescents, also known as ECA Digital — a comprehensive framework designed not around bans, but around responsibility, safety-by-design and human rights. Together, Australia and Brazil illustrate two very different paths towards the same goal: protecting children and adolescents in the digital environment.

Australia’s approach rests on a familiar regulatory instinct: delay access. Platforms are required to take “reasonable steps” to prevent children below a minimum age from creating accounts. This responds to genuine concerns, particularly mounting evidence linking social media use to anxiety, self-harm and other mental health harms among young people. But the debate cannot stop at age gates. Court filings in the US allege that Meta buried internal findings indicating a causal link between Facebook use and worse mental health outcomes — a reminder that restricting access does little while the underlying business model remains untouched. If a platform can knowingly amplify harm and then conceal what it learns, the danger doesn’t expire on a user’s 16th birthday.

Brazil’s law takes a different starting point. Rather than asking whether children should be online, it asks how digital environments should be designed when children are — inevitably — present, from video platforms and messaging apps to games and social networks. The statute applies not only to services aimed at minors, but to any digital product likely to be accessed by them. Its core principle is clear: the best interests of the child must be embedded from the earliest stages of technological design and operation.

This approach is not radical. It mirrors long-standing international human rights standards, particularly the UN Committee on the Rights of the Child’s General Comment No. 25, which affirms that children’s rights apply fully in the digital environment. That includes not only the right to protection from harm, but also rights to information, participation, expression, and development.

Brazil’s framework translates these principles into concrete rules: platforms must adopt the most protective privacy settings by default, and they are required to assess and mitigate risks linked to algorithmic recommendation systems, addictive design features, and exposure to harmful content. Moreover, profiling children for commercial purposes is tightly restricted. So-called “loot boxes” in games aimed at children are banned. Age-appropriate design is no longer a voluntary ethical choice, but a legal obligation.

Responsibility does not stop at platforms alone. App stores and operating systems are brought into the regulatory chain, required to provide privacy-preserving age signals and meaningful parental control tools. The law recognises that safety cannot be achieved by a single gatekeeper, but only through shared duties across the digital ecosystem.

This matters because prohibition often produces unintended effects. Young people are technically adept. Where access is blocked, workarounds emerge — VPNs, shared accounts, false age declarations. History shows that restrictive measures can push harmful behaviours into less visible, less regulated spaces, undermining both protection and trust.

It is also worth remembering what children themselves say. Young people describe digital spaces as essential for learning, social connection, identity formation, and civic participation. A blanket narrative of harm risks erasing these experiences. I recently came across a Reddit thread by a user who identified themselves as 14, documenting their efforts to “de-Google” their digital life — running alternative services, managing privacy settings, using two-factor authentication.

This is not an argument against regulation. It is an argument for regulation that recognises children as rights-holders, not merely risks to be managed.

None of this is to deny the seriousness of the harms at stake. The mental health impacts associated with large social media platforms are real and deeply troubling. What is striking, however, is how fiercely major technology companies have resisted independent scrutiny of those effects, from lobbying efforts to legal battles aimed at blocking access to internal research. If protection is the goal, transparency and accountability must be part of the solution.

The choice facing policymakers is not between protection and empowerment, or between safety and access. The real question is whether regulation targets symptoms or systems. Age bans may offer political clarity and immediate reassurance, but they do little to change the underlying business models and design choices that generate risk in the first place.

Brazil’s approach is not perfect, nor easily transferable. But it offers an important lesson: child safety online cannot rest solely on excluding children from digital spaces. It must be built into the architecture of those spaces themselves, through enforceable duties, human rights standards, and shared responsibility.

If governments are serious about protecting children online, the challenge is not simply to ask whether they should be there. It is to demand that the digital world they inhabit is worthy of them.

