Major platforms commit to comply with Australia’s new law from December but warn of serious enforcement challenges
Major social-media companies have affirmed they will comply with Australia’s pioneering law preventing users under the age of sixteen from maintaining accounts on specified platforms, while cautioning that implementing the rule will be technically complex and costly.
The legislation, effective from 10 December 2025, mandates that social-media services take “reasonable steps” to prevent Australians under sixteen from creating or retaining accounts, with fines of up to A$49.5 million for non-compliance.
Meta Platforms, which owns Facebook and Instagram, acknowledged the engineering and age-assurance challenges ahead.
Its Australian director, Mia Garlick, noted that identifying under-16 users and removing their accounts will require “significant” operational work.
The company plans to contact accounts flagged as underage, giving users the option to delete their data or defer access until they turn sixteen.
TikTok, operated by ByteDance, also confirmed its intention to comply.
Its Australian policy lead, Ella Woods-Joyce, expressed concern that the ban might inadvertently push young users toward less-regulated online spaces lacking robust safety measures.
The platform estimates it currently holds around 200,000 accounts belonging to Australian users under sixteen, while Meta identified about 450,000 across Facebook and Instagram; Snap Inc. (owner of Snapchat) cited roughly 440,000.
Australia’s regulator, the eSafety Commissioner, published guidance emphasising that platforms are not required to verify the age of every user via document checks but must demonstrate they have taken “reasonable steps” to prevent under-16s from using their services.
The regulator noted that no single verification technology is foolproof and encouraged a layered approach combining behaviour-based signals and existing account data rather than widespread re-verification of all users.
Key unresolved issues include how platforms will reliably detect under-16 users who falsify their age, how to handle false positives where adult users are mis-flagged, and how to deal with users who attempt to evade the rules via virtual private networks or alternative accounts.
Some industry analysis warns that enforcement may remain patchy and that the regulation could have unintended consequences, such as driving minors toward smaller, unmoderated services.
While the platforms’ public acceptance marks a turning point, the broader debate continues over whether the ban will deliver its intended benefits for child safety, and what impact it may have on young people’s online participation and digital access.
Australia’s legislation is being closely watched by regulators and policy-makers around the world as a test case in youth-online-safety regulation.
With just over a month before the law takes effect, major platforms are in the final stages of readiness, while government agencies and industry groups are ramping up awareness campaigns aimed at children, parents and educators.
The regulatory clock is now ticking for tech firms to prove they can effectively comply with what is described as one of the strictest social-media age-restriction regimes globally.