More than one million Australian teens to receive account warnings as platforms prepare large-scale deactivations before December 10
Australia’s major social media companies are preparing to deactivate accounts held by users under sixteen as the country enforces a world-first online safety law taking effect on December 10. Over the coming days, platforms will notify more than one million accounts registered to teenagers, offering the choice to download data, freeze profiles until they reach the legal age, or lose access entirely once the ban begins.
TikTok, Snapchat and Meta’s Facebook, Instagram and Threads are expected to disable accounts identified as belonging to users under sixteen, according to individuals with direct knowledge of internal preparations.
The companies, which collectively serve more than twenty million Australians, have indicated they will comply with the law while aiming to keep disruption minimal for the wider population.
The rollout marks a sharp contrast with warnings issued during the year-long debate over the legislation, when operators predicted widespread user loss and unworkable compliance burdens.
Firms had objected that mandatory age checks could force constant log-ins, rely on intrusive data collection or be easily bypassed.
Instead, platforms will lean on existing behavioural-analysis software already used to estimate user age for marketing, identifying likely minors through engagement patterns such as likes, follows and interaction timing.
Users who believe they have been mistakenly flagged will be directed to third-party age assurance apps, which rely on selfie-based age estimation and, if needed, identity documents.
Trials have shown these tools can incorrectly approve users aged fifteen or block those aged sixteen and seventeen, a margin of error that could expose companies to enforcement action.
Industry experts warn that the highest risk of wrongful deactivation sits among Australia’s roughly six hundred thousand sixteen- and seventeen-year-olds, as age-estimation accuracy narrows in that range and many do not yet possess formal identification.
If errors occur, service disruptions may last days or weeks while platforms correct mismatches.
The legislation requires social media operators to prevent minors from accessing their services regardless of parental consent, reflecting growing international concern about digital harms following internal disclosures from major platforms and renewed political advocacy in 2024. Australia’s approach has drawn global attention, with observers noting that Britain, France and several U.S. states have struggled to implement their own age-limit measures due to practicality and free-speech concerns.
Platforms will direct most users to in-app prompts and automated systems, turning to external verification only when disputes arise.
Authorities have advised operators to watch for virtual private networks used to mask location, and to treat emerging platforms not yet covered by the ban as potential avenues of circumvention.
Regulators view a smooth rollout as central to Australia’s leadership in global youth-safety policy, with the effectiveness of early enforcement expected to shape broader international efforts aimed at reducing online risks ranging from bullying to harmful content exposure.