Australia Contemplates Requiring Apple to Limit AI Applications in Response to New Age Verification Policies

**Australia Strengthens Age-Verification Rules for AI Applications**

After banning social media for teenagers last year, Australia is now tightening age-verification standards for AI applications. The initiative is designed to shield young users from harmful content and from excessive engagement with AI tools.

**AI Applications Under Examination for Mental Health Concerns**

Australia was the first country to ban social media for teenagers, a substantial effort to protect the mental well-being of young people. The move was driven by growing global concern about the mental health effects of social media, a concern amplified by works such as Jonathan Haidt's *The Anxious Generation*.

From March 9, AI platforms, including those from companies such as OpenAI, must implement measures restricting users under 18 from accessing inappropriate content such as pornography, extreme violence, self-harm, or material related to eating disorders. The rules also address concerns about excessive chatbot use among teens, particularly the potential for emotionally manipulative designs to create dependency.

The eSafety Commissioner noted that although Australia has not documented instances of violence or self-harm linked to chatbots, there are concerns about children as young as 10 using AI tools for up to six hours per day. The eSafety representative also warned that AI companies might exploit emotional manipulation and anthropomorphism to keep young users engaged.

Under the new rules, app stores and search engines may be required to block access to AI services that fail to meet the age-verification standards. Reports suggest that many popular text-based AI tools have made little progress toward compliance ahead of the looming deadline.

Apple did not respond to a request for comment, but it has been rolling out age-related protections across its platforms to comply with age-restriction laws worldwide. Nonetheless, the responsibility for implementing these measures ultimately falls on individual developers.

Australia's proactive approach to regulating AI applications reflects a growing recognition of the need to protect young people from the mental health risks associated with technology. Compliance with the new regulations will be closely watched, with significant consequences for AI service providers that fall short of the guidelines.
