Australia’s online watchdog has accused the world’s biggest social platforms of failing to properly enforce the country’s ban on under-16s using their platforms, despite legislation that came into force in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting inadequate practices including allowing banned users to repeatedly attempt age verification and insufficient measures to stop new account creation. In its first compliance assessment since the ban took effect, the regulator identified multiple shortcomings and has now moved from monitoring to active enforcement, cautioning that platforms must show they have put in place “appropriate systems and processes” to stop under-16s from using their services.
Regulatory Breaches Uncovered in First Major Review
Australia’s eSafety Commissioner has outlined a worrying pattern of non-compliance among the world’s most prominent social media platforms in her first review since the ban came into effect on 10 December. The report finds that Meta, Snap, TikTok and YouTube have collectively failed to establish sufficient safeguards to stop minors from accessing their services. Julie Inman Grant expressed particular concern about systemic weaknesses in age verification processes, noting that some platforms have allowed children who initially declared themselves under 16 to later assert they were older, effectively circumventing the law’s intent.
The findings mark a significant escalation in regulatory action, with the eSafety Commissioner transitioning from monitoring to direct enforcement. The regulator has made clear that simply showing some children still hold accounts is insufficient; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures to stop under-16s from creating accounts in the first place. This shift signals the government’s commitment to holding tech giants accountable, with possible sanctions looming for companies that fail to meet their statutory obligations.
- Allowing previously banned users to re-verify their age and restore account access
- Permitting multiple attempts at the same age assurance method without penalty
- Inadequate safeguards to stop under-16s from opening new accounts
- Inadequate notification systems for parents and the general public
- Absence of publicly available information about compliance actions and account deletions
The Magnitude of the Challenge
The sheer scale of social media usage amongst Australian young people highlights the compliance challenge facing both the government and the platforms in question. With millions of accounts already removed or restricted since the ban’s implementation, the figures paint a picture of extensive early non-compliance. The eSafety Commissioner’s findings suggest that the operational and technical barriers to enforcing age restrictions have proven far more complex than expected, with platforms struggling to distinguish genuine age declarations from false claims. This complexity has left enforcement authorities grappling with the core question of whether current age verification technologies are fit for purpose.
Beyond the operational challenges lies a wider question about the willingness of companies to place compliance ahead of user growth. Social media companies have consistently opposed stringent age verification measures, citing privacy concerns and the genuine difficulty of confirming age online. However, the regulatory report suggests that some platforms may not be demonstrating adequate commitment to deploying the infrastructure required by law. The move to active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance systems, or they risk facing substantial fines that could reshape their business models in Australia and possibly influence regulatory approaches internationally.
What the Statistics Demonstrate
In the first month after the ban took effect, Australian authorities reported that 4.7 million accounts had been restricted or removed. Whilst this number initially seemed to demonstrate enforcement effectiveness, closer investigation reveals a more complex picture. The sheer volume of account deletions suggests that many under-16s had managed to establish accounts in the first place, demonstrating that preventative measures were lacking. Moreover, the data raises questions about whether removed accounts constitute genuine compliance or merely reflect users voluntarily deleting their accounts in response to the new restrictions.
The minimal transparency surrounding these figures has frustrated independent observers seeking to assess the ban’s true effectiveness. Platforms have revealed little data about their enforcement methodologies, effectiveness metrics, or the profile of removed accounts. This absence of transparency makes it hard for regulators and the wider public to judge whether the ban is operating as planned or whether younger users are merely discovering alternative ways to reach social media. The Commissioner’s insistence on thorough documentation of systematic compliance measures reflects growing concern about platforms’ unwillingness to share complete details.
Industry Response and Pushback
The major tech platforms have responded to the regulator’s enforcement action with a combination of assurances of compliance and scepticism about the practical feasibility of the ban. Meta, which runs Facebook and Instagram, emphasised its dedication to adhering to Australian law whilst simultaneously arguing that accurate age determination remains a significant industry-wide challenge. The company has advocated for an alternative approach, suggesting that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than platform-level enforcement. This stance reflects broader industry concerns that the existing regulatory system places an unrealistic burden on individual platforms.
Snap, the developer of Snapchat, has taken a more proactive public stance, announcing that it had locked 450,000 accounts following the ban’s implementation and asserting it continues to suspend additional accounts each day. However, sector analysts dispute whether such figures reflect genuine adherence or simply represent reactive account management. The fundamental tension between platforms’ business models, which traditionally depended on maximising user engagement and expansion, and the statutory obligation to systematically remove an entire age group remains unresolved. Companies have long resisted rigorous age verification methods, pointing to privacy issues and technical constraints, creating a standoff between authorities and platforms over who carries responsibility for enforcement.
- Meta maintains age verification should occur at app store level rather than on individual platforms
- Snap claims to have locked 450,000 accounts following the ban’s implementation in December
- Industry groups highlight privacy issues and technical challenges as impediments to effective age verification
- Platforms maintain they are doing their best whilst questioning the ban’s overall effectiveness
More Extensive Considerations Regarding the Ban’s Effectiveness
As Australia’s under-16 social media ban moves into its implementation stage, key questions persist about whether the legislation will achieve its intended goals or merely push young users towards less regulated platforms. The regulator’s initial compliance assessment reveals that despite months of implementation, significant loopholes remain: children keep discovering ways to bypass age verification mechanisms, and platforms have struggled to stop new underage accounts from being created. Critics contend that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will genuinely abandon major social networks or simply migrate to other platforms, encrypted messaging apps, or VPNs used to conceal their location.
The ban’s worldwide implications add further complexity to assessments of its impact. Countries such as the United Kingdom, Canada, and various European states are observing Australia’s initiative closely, evaluating similar regulatory measures for their own citizens. If the ban fails to reduce children’s social media usage or to protect them from dangerous online content, it could undermine the case for similar measures elsewhere. Conversely, if enforcement proves robust enough to effectively limit underage access, it may embolden other nations to pursue similar approaches. The outcome could shape global regulatory trends for the foreseeable future, meaning Australia’s implementation efforts will be scrutinised far beyond its borders.
Those Who Profit and Who Is Disadvantaged
Mental health advocates and organisations focused on child safety have championed the ban as an essential measure against algorithmic manipulation and contact with harmful content. Parents and educators argue that taking young Australians off platforms built to maximise engagement could lower anxiety, improve sleep patterns, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, lending credibility to these concerns. However, the ban also removes legitimate uses of social media for young people: keeping friendships alive, accessing educational content, and participating in online communities around shared interests. The regulatory framework assumes harm exceeds benefit, a calculation that some young people and their families dispute.
The ban’s real-world effects extend beyond individual users to impact content creators, small businesses, and community organisations dependent on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now confront legal barriers to participation. Small Australian businesses that depend on social media marketing can no longer reach younger audiences. Community groups, charities, and educational organisations find it difficult to engage young people through channels they previously used effectively. Meanwhile, the ban inadvertently benefits large technology companies with the resources to build age verification infrastructure, possibly reinforcing their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend well beyond the simple goal of child protection.
What Follows for Compliance Monitoring
Australia’s eSafety Commissioner has signalled a notable transition from passive monitoring to proactive enforcement, marking a critical turning point in the rollout of the age restriction. The authority will now gather evidence to establish whether services have failed to take “reasonable steps” to keep minors off their platforms, a legal standard that goes beyond simply recording that minors continue using these systems. This approach requires tangible verification that companies have implemented appropriate systems and processes designed to keep out under-16s. The regulator has indicated it will pursue investigations methodically, building evidence that could result in considerable sanctions for non-compliance. This shift from observation to intervention reflects growing frustration with the platforms’ existing measures and signals that voluntary cooperation alone is insufficient.
The implementation phase raises critical questions about the adequacy of fines and the concrete procedures for holding tech giants accountable. Australia’s legal framework provides the enforcement tools, but their success hinges on the eSafety Commissioner’s willingness to use them and the platforms’ capacity to respond substantively. International regulators, notably those in the UK and EU, will closely watch Australia’s approach and results. A robust enforcement effort could provide a template for other jurisdictions contemplating comparable restrictions, whilst failure might weaken the case for such regulation more broadly. The next phase will be decisive in determining whether Australia’s groundbreaking legislation delivers real safeguards for children or becomes largely performative in its impact.
