- Australia, Indonesia, France, Denmark, Spain, and Germany have banned or are moving to ban social media for users under 15 or 16.
- Age verification requires collecting sensitive personal data from every user, including adults, raising massive privacy concerns.
- No evidence suggests blanket bans reduce online harm — children migrate to unregulated platforms, encrypted chats, and VPNs.
- The bans shift responsibility from platforms to governments and parents, while letting Big Tech off the hook for toxic design choices.
- Amnesty Tech and digital rights groups warn these laws set a precedent for broader internet censorship.
A Global Wave of Panic Legislation
Australia kicked it off in December 2025, banning under-16s from Facebook, Instagram, TikTok, X, YouTube, Reddit, and Snapchat. Within weeks, the dominoes started falling. Denmark announced a ban for under-15s. France passed a bill targeting the same age group. Germany, Greece, Spain, Slovenia, Malaysia, and Indonesia all followed with proposals of their own. The UK launched a public consultation on March 2.
Indonesia went furthest, announcing it would deactivate minors’ accounts on “high risk” platforms starting March 28 — including Roblox, a gaming platform used by tens of millions of children worldwide.
The speed is remarkable. The evidence base is not.
The Privacy Trap No One Talks About
Every single one of these bans requires age verification. And age verification requires proof of identity — not just for children, but for every user on the platform.
Australia’s law explicitly states that platforms “can’t rely on users simply entering their own age.” That means biometric scans, government ID uploads, or third-party verification services processing the personal data of hundreds of millions of people. Indonesia’s communications minister Meutya Hafid framed it as protecting children. What she did not mention is the surveillance infrastructure required to enforce it.
Nurul Izmi at Indonesia’s Institute for Policy Research and Advocacy put it plainly: “Implementing age verification means collecting children’s sensitive personal data.” In countries with weak data protection laws — a group that includes most of the countries on this list — that data becomes a target.
France already has a dismal track record on digital privacy enforcement. Germany’s proposal came from a conservative coalition with a history of expanding state surveillance powers. The idea that these governments will handle biometric data responsibly requires a level of trust they have not earned.
Bans Don’t Work — They Displace
The core assumption behind every one of these laws is simple: block children from social media, and they will be safer. The assumption is wrong.
A 2024 study by the Oxford Internet Institute found no causal link between social media use and declining mental health in adolescents. Ofcom’s own data shows that 37% of children aged 3 to 5 already use social media, with 60% having their own profiles. These are not teenagers making independent choices — these are small children handed devices by their parents, a group no age-verification gate will meaningfully reach.
Banning a 14-year-old from Instagram does not make them stop going online. It pushes them to platforms with zero moderation: Telegram groups, Discord servers, encrypted messaging apps, or whatever new app emerges next week. Australia’s ban notably excludes WhatsApp — an app where child exploitation material circulates freely in private groups.
As Matt Joseph, a 17-year-old Indonesian student, told the BBC: “If the government wants them to use it less, they need an incentive.” Prohibition without alternatives is not a strategy. It is a press release.
Letting Big Tech Off the Hook
The most cynical aspect of these bans is what they do not address: platform design.
Instagram’s algorithmic recommendation engine pushes eating disorder content to teenage girls within 24 hours of account creation. TikTok’s infinite scroll is engineered to maximize dopamine-driven engagement. YouTube’s autoplay sends children from Peppa Pig to conspiracy content in three clicks. None of that changes with an age ban.
The UK consultation at least acknowledges this, floating the idea of requiring platforms to “limit or remove features that drive compulsive use, such as endless scrolling.” That is closer to the real solution: regulating the product, not the user.
Spain’s proposal to hold social media executives personally accountable for harmful content is another step in the right direction. But most of these laws do the opposite — they shift the burden of enforcement from billion-dollar companies to parents and governments, while the algorithmic engines that cause the harm keep running.
The Censorship Precedent
Amnesty Tech has warned that social media bans for children create a framework that governments can expand. Today it is under-16s. Tomorrow it is “national security.” Indonesia already blocks OnlyFans, Pornhub, and the AI chatbot Grok. The country’s new social media ban fits a pattern of increasing government control over digital access — wrapped in the language of child protection.
The UK House of Lords passed an amendment enabling the government to “make regulations requiring internet service providers to prevent or restrict access by children of or under a specified age to specified features or functionalities of certain internet services.” That language is broad enough to cover virtually anything.
Digital rights are not a luxury for adults. The UN Convention on the Rights of the Child guarantees children’s right to access information and freedom of expression. Every one of these bans restricts both.
The Hard Answer No One Wants
Protecting children online is a real problem that deserves serious solutions. Blanket age bans are not serious. They are politically convenient, easy to announce, and impossible to enforce without building the kind of surveillance infrastructure that should alarm everyone.
The real answers are harder: mandatory safety-by-design standards for platforms, algorithmic transparency, a ban on profiling ads targeting minors, properly funded digital literacy programs, and holding executives — not children — accountable when products cause harm.
Governments choosing age bans over platform regulation are not protecting children. They are protecting the companies that put children at risk in the first place.