Australia’s online safety watchdog has accused the world’s largest social media companies of failing to properly enforce the country’s ban on under-16s accessing their platforms, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance by Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including permitting banned users to make repeated attempts at age verification and insufficient measures to prevent new accounts from being created. In its first compliance assessment since the ban took effect, the regulator found numerous deficiencies and has now moved from monitoring to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to stop under-16s from using their services.
Compliance Failures Uncovered in First Major Review
Australia’s eSafety Commissioner has outlined a worrying pattern of non-compliance among the world’s most prominent social media platforms in her first review since the ban came into effect on 10 December. The report demonstrates that Meta’s Facebook and Instagram, Snapchat, TikTok and YouTube have collectively failed to implement appropriate safeguards to stop minors from accessing their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification, noting that some platforms have allowed children who originally declared themselves to be under 16 to later assert they were older, thereby undermining the law’s intent.
The findings represent a notable intensification in the regulatory response, with the eSafety Commissioner transitioning from monitoring to active enforcement. The regulator has made clear that the mere presence of some under-16 accounts will not by itself decide the question of compliance; platforms must instead provide concrete evidence that they have established robust systems and processes to stop under-16s from opening accounts in the first place. This shift signals the government’s determination to hold tech giants accountable, with potential penalties looming for companies that fail to meet the legal requirements. Among the specific failures identified were:
- Enabling previously banned users to re-verify their age and restore account access
- Allowing repeated attempts at the same age assurance method with no repercussions
- Inadequate safeguards to prevent under-16s from creating new accounts
- Insufficient complaint mechanisms for parents and members of the public
- A lack of publicly available information about compliance actions and account terminations
The Extent of the Problem
The considerable scale of social media activity amongst young Australians underscores the compliance challenge facing both the government and the platforms themselves. With millions of accounts already removed or restricted since the ban’s implementation, the figures paint a picture of extensive early non-compliance. The eSafety Commissioner’s findings suggest that the operational and technical barriers to enforcing age restrictions have turned out to be considerably more complex than expected, with platforms struggling to distinguish genuine age declarations from false claims. This complexity has left enforcement authorities grappling with the core issue of whether current age verification technologies are sufficient for the purpose.
Beyond the operational challenges lies a broader concern about the willingness of platforms to place compliance ahead of user growth. Social media companies have consistently opposed strict identity verification requirements, citing data protection concerns and the genuine difficulty of confirming age online. However, the regulatory report suggests that some platforms may not be adequately committed to implementing the systems the law requires. The shift to active enforcement represents a critical juncture: either platforms significantly strengthen their compliance systems, or they risk penalties that could reshape their business models in Australia and influence regulatory approaches internationally.
What the Data Shows
In the first month after the ban took effect, Australian officials reported that 4.7 million accounts had been restricted or deleted. Whilst this figure initially appeared to demonstrate regulatory success, closer investigation reveals a more layered picture. The sheer volume of deletions suggests that many under-16s had successfully created accounts in the first place, revealing that preventative measures were inadequate. Moreover, the data raises questions about whether suspended accounts represent genuine enforcement or simply users closing their profiles of their own accord in light of the new restrictions.
The limited transparency around these figures has frustrated independent observers seeking to assess the ban’s genuine effectiveness. Platforms have provided little data about their implementation approaches, success rates, or the characteristics of removed accounts. This lack of clarity makes it difficult for regulators and the wider public to evaluate whether the ban is working as intended or whether younger users are simply finding alternative ways to access social media. The Commissioner’s push for detailed evidence of structured compliance processes reflects growing frustration with platforms’ reluctance to disclose comprehensive data.
Industry Response and Pushback
The social media giants have responded to the enforcement measures with a mixture of compliance assurances and doubts about the ban’s practicality. Meta, which runs Facebook and Instagram, emphasised its commitment to complying with Australian law whilst arguing that accurate age verification remains a significant industry-wide challenge. The company has advocated an alternative approach, proposing that robust age verification and parental consent requirements implemented at the app store level would be more effective than enforcement on individual platforms. This position reflects wider industry concern that the current regulatory framework places an unrealistic burden on individual platforms.
Snap, the developer of Snapchat, has taken a more proactive public stance, stating that it had locked 450,000 accounts since the ban took effect and that it continues to suspend additional accounts each day. However, industry observers question whether such figures demonstrate genuine compliance or merely reactive account management. The core conflict persists between platforms’ commercial models, which historically relied on maximising user engagement and growth, and the statutory obligation to actively exclude an entire age group. Companies have consistently opposed stringent age verification, citing privacy concerns and technical limitations, creating an impasse between regulators and platforms over who bears responsibility for implementation.
- Meta maintains age verification ought to take place at app store level rather than on individual platforms
- Snap asserts to have locked 450,000 user accounts following the ban’s implementation in December
- Industry groups point to privacy issues and technical challenges as barriers to effective age verification
- Platforms contend they are doing their best whilst questioning the ban’s general effectiveness
Larger Questions Concerning the Ban’s Impact
As Australia’s under-16 social media ban moves into its enforcement phase, fundamental questions remain about whether the legislation will achieve its intended goals or merely drive young users towards less regulated platforms. The regulator’s initial compliance assessment shows that significant loopholes remain: children continue finding ways to bypass age verification, and platforms have struggled to stop new underage accounts from being created. Critics argue that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will genuinely leave major social networks or simply migrate to alternative services, encrypted messaging apps, or VPNs that mask their location and help them evade age checks.
The ban’s international implications add another layer of complexity to assessments of its impact. Countries including the United Kingdom, Canada, and several European nations are watching Australia’s approach closely as they consider similar measures for their own populations. If the ban proves ineffective at reducing children’s social media use or fails to protect them from harmful material, it could undermine the case for comparable regulations elsewhere. Conversely, if enforcement proves robust enough to meaningfully limit underage use, it may encourage other nations to follow suit. The outcome will likely shape international regulatory direction for the foreseeable future, ensuring Australia’s implementation efforts are scrutinised far beyond its borders.
Who Benefits and Who Loses Out
Mental health campaigners and child safety organisations have backed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators contend that removing young Australians from platforms designed to maximise engagement could reduce anxiety, improve sleep quality, and decrease exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also removes legitimate uses of social media for young people: maintaining friendships, accessing educational content, and participating in online communities built around shared interests. The regulatory approach assumes the harm outweighs the benefit, a calculation that some young people and their families dispute.
The ban’s practical impact extends beyond individual users to content creators, small businesses, and community organisations that depend on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that rely on social media marketing lose access to younger audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban inadvertently advantages large technology companies with the resources to build age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend far beyond the simple goal of child protection.
What Follows for Enforcement
Australia’s eSafety Commissioner has signalled a significant shift from hands-off observation to active enforcement, marking a key milestone in the implementation of the under-16 ban. The regulator will now gather evidence to establish whether companies have failed to take “reasonable steps” to prevent minors from using their services, a standard that goes further than simply documenting that young people remain on these platforms. It demands tangible proof that platforms have established suitable systems and processes designed to keep minors out. The enforcement team has indicated it will conduct its enquiries carefully, building evidence that could result in substantial penalties for non-compliance. This shift from monitoring to enforcement reflects mounting dissatisfaction with the platforms’ existing measures and signals that voluntary cooperation alone is insufficient.
The enforcement phase raises important questions about the adequacy of penalties and the practical mechanisms for holding platforms accountable. Australia’s regulatory framework provides enforcement tools, but their success depends on the eSafety Commissioner’s willingness to take formal action and the platforms’ capacity to adapt meaningfully. International observers, particularly regulators in Britain and Europe, will closely monitor Australia’s implementation tactics and results. An effective regulatory push could provide a model for other jurisdictions contemplating similar bans, whilst failure might weaken the case for such regulation altogether. The next phase will prove crucial in determining whether Australia’s pioneering approach delivers substantive protection for young people or remains primarily symbolic in its effect.
