In 2026, Australia’s world-first age-restricted social media policy unfolds alongside an evolving U.S. regulatory landscape that transcends traditional parental-consent models for children’s data protection
Australia’s pioneering approach to children’s online privacy took effect in December 2025, imposing a statutory minimum age that bars individuals under sixteen from holding accounts on major social media platforms and marking a decisive turn in global digital policy.
Under the Online Safety Amendment (Social Media Minimum Age) Act, providers of age-restricted social media services must take “reasonable steps” to prevent under-sixteen users from creating or maintaining accounts, with obligations applying to services such as
Facebook, Instagram, TikTok, Snapchat, X, Reddit, Threads, Twitch and others.
The law, now operational, empowers regulators to impose civil penalties of up to AU$49.5 million on companies that fail to comply, positioning Australia at the vanguard of governments seeking to curb potential harms associated with early social media use among minors.
While the practical impact on under-sixteen Australians is still being assessed and enforcement challenges persist, the move signals a regulatory willingness to shift away from traditional notice-and-consent regimes toward access-based restrictions and age-assurance obligations for digital platforms.
Implementation of the Australian policy has generated both acclaim and scrutiny.
Supporters argue that shielding children from addictive feeds and harmful content via enforceable age limits reinforces parental authority and protects young people’s mental health and well-being.
Questions have emerged, however, about enforcement efficacy, given evidence that some minors are circumventing age checks and accessing social media through alternative channels or verification workarounds.
The broader legislative regime also includes ongoing examination of age-assurance technologies, and regulators are exploring how age verification might be robustly implemented without infringing on privacy or fundamental freedoms.
This moment represents an inflection point in children’s privacy governance, as policymakers balance protection with digital inclusion.
Across the United States, the regulatory environment for children’s online privacy is likewise undergoing a transformation that moves beyond classic notice-and-consent models such as the Children’s Online Privacy Protection Act (COPPA).
Recent regulatory trends point to a wider set of policy tools that emphasize access limits, rules on digital advertising involving minors, design mandates that embed privacy protections by default, and ecosystem-level age verification measures.
These developments are emerging through a patchwork of state privacy laws, amendments to federal rules governing data collection and disclosures, and proposed legislation aimed at modernizing children’s online protections.
Even as constitutional and free speech challenges temper some efforts, the practical landscape indicates growing momentum among lawmakers and regulators to shape product design, data governance and risk-based compliance strategies that directly address the realities of minors’ digital experiences.
Taken together, the policy evolution in Australia and the United States underscores a broader international trend in 2026 toward more prescriptive, operationally demanding frameworks for children’s privacy and online safety.
By moving beyond the traditional notice-and-consent paradigm, these approaches reflect heightened concern among governments about the intersection of technology, children’s rights and societal well-being, with implications not only for domestic digital ecosystems but also for global platform operators that must navigate divergent regulatory expectations.