Australia Accuses Meta, TikTok, Snapchat, and YouTube of Violating Child Social Media Ban; Legal Action Looms.

Three months after Australia pioneered a ban preventing children under 16 from holding social media accounts, its online safety regulator has criticized platforms for insufficient enforcement. A compliance report released Tuesday by eSafety Commissioner Julie Inman Grant accuses Facebook, Instagram, Snapchat, TikTok, and YouTube of failing to take the steps required by law to keep young Australians off their services.

Although the Online Safety Amendment (Social Media Minimum Age) Act, which took effect December 10, has led to the deactivation of roughly five million Australian accounts belonging to under-16s, the report notes that around 70% of children still use platforms such as Facebook, Instagram, Snapchat, and TikTok, either keeping existing accounts or creating new ones that bypass age-verification systems.

Inman Grant expressed “significant concerns” about compliance at half of the ten platforms covered by the law: Facebook, Instagram, Snapchat, TikTok, and YouTube. These five may face legal action by mid-year, with courts able to impose fines of up to AUD 49.5 million (about US$33 million) for non-compliance. The remaining five, Reddit, X, Kick, Threads, and Twitch, are not currently under investigation.

The report highlights “poor practices” in which platforms allow unlimited attempts at passing age checks and encourage retries even when users identify themselves as underage, practices that suggest systems designed to prioritize user retention over excluding children.

Communications Minister Anika Wells bluntly criticized the non-compliant platforms, accusing them of undermining the law by doing the “absolute bare minimum.” She noted that Australia’s unprecedented ban has drawn interest from more than a dozen other countries since December, including legislative moves in France, Denmark, and Malaysia, and plans in Indonesia. If the Australian law fails, that global policy momentum could stall.

Platform responses vary: Meta says it remains committed to the law but acknowledges the challenges of age verification; Snap reported locking 450,000 accounts and said it is committed to taking reasonable measures; TikTok declined to comment; Alphabet did not respond.

Compliance is not straightforward. The law does not specify age-verification methods; it requires platforms only to take “reasonable steps,” leaving courts to decide what counts as adequate. Some platforms use behavioral inference or AI tools to estimate users’ ages, though these methods are imperfect, and the eSafety Commissioner concedes that age verification may take time to get right.

Lisa Given, an expert at RMIT University, raises the legal question of whether a platform can be held accountable if it takes multiple steps to exclude children but the available technologies remain flawed. That debate is headed to court, and it may coincide with a constitutional challenge brought by Reddit and the Digital Freedom Project, who argue the ban infringes the implied freedom of political communication; their joint High Court case has a preliminary hearing scheduled for May 21.

The law, passed with bipartisan support on November 29, 2024, reflects a consensus on social media’s mental health risks for young people, and it places enforcement duties solely on platforms without penalizing children or parents. Its impact and the legal precedent it sets will be closely watched worldwide. Whether platforms treat the Australian law as a problem to solve or a precedent to undermine will be critical, and the eSafety Commissioner’s action expected by mid-year will be the first real test of the law’s enforcement power.
