Australia’s online watchdog has accused the world’s largest social media companies of not adequately implementing the country’s prohibition preventing under-16s from accessing their platforms, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about adherence by Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including permitting prohibited users to make repeated attempts at age verification and inadequate safeguards to stop new account creation. In its first compliance report since the prohibition came into force, the regulator found numerous deficiencies and has now moved from monitoring to active enforcement, cautioning that platforms must demonstrate they have implemented “appropriate systems and processes” to stop under-16s from using their services.
Compliance Failures Revealed in First Large-scale Review
Australia’s eSafety Commissioner has documented a troubling pattern of non-compliance amongst the world’s largest social media platforms in her first formal review since the ban took effect on 10 December. The report shows that Meta (which operates Facebook and Instagram), Snap, TikTok and YouTube have collectively failed to implement appropriate safeguards to prevent minors from accessing their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification processes, noting that some platforms have allowed children who initially declared themselves under 16 to later assert they were older, undermining the law’s intent.
The findings mark a notable intensification of regulatory action, with the eSafety Commissioner moving beyond monitoring to direct enforcement. The regulator has made clear that merely showing some children still maintain accounts is insufficient; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures designed to prevent under-16s from creating accounts in the first place. This shift reflects the government’s determination to hold tech giants accountable, with possible sanctions looming for companies that do not meet their statutory obligations. Among the poor practices identified were:
- Permitting previously banned users to re-verify their age and restore account access
- Allowing repeated attempts at the same verification process without consequence
- Weak systems to stop new under-16 accounts from being created
- Inadequate complaint mechanisms for parents and the general public
- Lack of publicly available information about enforcement efforts and account terminations
The Scope of the Problem
The considerable scale of social media usage amongst young Australians underscores the compliance challenge facing both the authorities and the platforms in question. With millions of accounts already removed or restricted since the ban’s implementation, the figures point to extensive early non-compliance. The eSafety Commissioner’s findings suggest that the operational and technical barriers to enforcing age restrictions have proved considerably more complex than anticipated, with platforms struggling to distinguish genuine age declarations from false claims. This complexity has left enforcement authorities wrestling with the core issue of whether current age verification technologies are fit for purpose.
Beyond the technical obstacles lies a broader concern about the willingness of platforms to prioritise compliance over user growth. Social media companies have long resisted stringent age verification measures, citing privacy concerns and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms have not demonstrated adequate commitment to implementing the systems required by law. The shift towards active enforcement represents a critical juncture: either platforms will substantially upgrade their compliance infrastructure, or they risk fines large enough to transform their operations in Australia and potentially influence compliance frameworks internationally.
What the Statistics Show
In the first month following the ban’s launch, Australian authorities reported that 4.7 million accounts had been suspended or deleted. Whilst this figure initially appeared to demonstrate enforcement effectiveness, subsequent analysis reveals a more complex picture. The sheer volume of account removals suggests that many under-16s had managed to establish accounts before the ban, indicating that preventive controls were inadequate. Furthermore, the data raises doubts about whether deleted profiles reflect genuine enforcement or simply users closing their accounts voluntarily in response to the new restrictions.
The minimal transparency concerning these figures has frustrated independent observers trying to determine the ban’s true effectiveness. Platforms have provided little data about their implementation approaches, effectiveness metrics, or the nature of suspended accounts. This absence of transparency makes it difficult for regulators and the public to judge whether the ban is working as intended or whether younger users are merely finding alternative ways to use social media. The Commissioner’s demand for thorough documentation of compliance measures reflects mounting dissatisfaction with platforms’ unwillingness to share comprehensive data.
Industry Response and Pushback
The social media giants have responded to the regulator’s enforcement action with a combination of compliance assurances and doubts about the ban’s practicality. Meta, which runs Facebook and Instagram, stressed its commitment to adhering to Australian law whilst contending that accurate age determination remains a significant industry-wide challenge. The company has advocated an alternative strategy, suggesting that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than enforcement at the platform level. This stance reflects wider industry concern that the current regulatory framework places an impractical burden on individual platforms.
Snap, the maker of Snapchat, has adopted a more assertive public position, stating that it had locked 450,000 accounts since the ban took effect and that it continues to suspend more each day. However, industry observers question whether such figures reflect genuine compliance or merely reactive account management. The core conflict between platforms’ business models, which have traditionally depended on maximising user engagement and growth, and the statutory obligation to actively exclude an entire age group remains unresolved. Companies have long resisted stringent age verification, pointing to privacy concerns and technical limitations, creating a standoff between regulators and platforms over who bears responsibility for implementation.
- Meta maintains age verification ought to take place at app store level instead of on individual platforms
- Snap says it has locked 450,000 user accounts since the ban took effect in December
- Industry groups highlight privacy issues and technical challenges as impediments to effective age verification
- Platforms contend they are making their best effort whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 social media ban enters its implementation stage, fundamental questions persist about whether the legislation will achieve its intended goals or merely drive young users towards less regulated platforms. The regulator’s initial compliance assessment shows that substantial gaps remain: children keep finding ways to bypass age verification systems, and platforms have struggled to prevent new underage accounts from being created. Critics contend that the ban’s effectiveness depends not merely on regulatory vigilance but on whether young people will genuinely leave major social networks or simply shift towards other platforms, encrypted messaging apps, or virtual private networks that conceal their location.
The ban’s worldwide effects add another layer of complexity to assessments of its effectiveness. The United Kingdom, Canada and several European countries are observing Australia’s experiment closely, considering similar laws for their own populations. If the ban proves ineffective at reducing children’s digital engagement or cannot protect them from harmful content, it could weaken the case for equivalent legislation elsewhere. Conversely, if regulation becomes sufficiently robust to genuinely restrict underage access, it may inspire other nations to adopt similar strategies. The outcome will likely influence global regulatory trends for years to come, ensuring Australia’s regulatory efforts are scrutinised far beyond its borders.
Who Gains and Who Loses
Mental health advocates and child safety organisations have championed the ban as an essential safeguard against algorithmic manipulation and exposure to harmful content. Parents and educators argue that taking young Australians off platforms designed to maximise engagement could lower anxiety levels, improve sleep patterns, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also removes legitimate uses of social media for young people: maintaining friendships, accessing educational material, and participating in online communities built around shared interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families dispute.
The ban’s practical implications reach beyond individual users to content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing can no longer reach younger audiences. Community groups, charities, and educational organisations struggle to connect with young people through channels they previously used effectively. Meanwhile, the ban unintentionally advantages large technology companies with the resources to build age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend well beyond the simple goal of child protection.
What Follows for Regulatory Action
Australia’s eSafety Commissioner has signalled a decisive transition from passive oversight to active enforcement, marking a key milestone in the implementation of the age restriction. The regulator will now gather evidence to determine whether services have failed to take “reasonable steps” to block minors from using their platforms, a requirement that goes beyond simply recording that young people remain on these services. This approach demands concrete evidence that companies have established appropriate systems and processes designed to keep minors out. The enforcement team has indicated it will launch investigations methodically, building cases that could result in considerable penalties for breaches of the law. This transition from observation to enforcement reflects mounting concern about the platforms’ existing measures and suggests that cooperative engagement alone will no longer suffice.
The rollout phase raises significant questions about the adequacy of fines and the practical procedures for holding tech giants accountable. Australia’s legislation provides enforcement instruments, but their success hinges on the eSafety Commissioner’s willingness to pursue regulatory action and the platforms’ capacity to adapt substantively. Overseas authorities, particularly regulators in Britain and Europe, will closely monitor Australia’s enforcement strategy and outcomes. An effective regulatory push could establish a template for other countries evaluating similar bans, whilst failure might undermine the entire regulatory framework. The coming period will determine whether Australia’s pioneering statutory framework delivers real safeguards for adolescents or remains largely ceremonial in its effect.
