Two Nations Prohibit Grok App because of AI-Created CSAM; Anticipating Apple's Reply

Two Nations Prohibit Grok App because of AI-Created CSAM; Anticipating Apple’s Reply

3 Min Read


**AI-Produced CSAM by Grok: An Escalating Issue**

The rise of AI technologies has led to remarkable progress, yet it has also prompted significant ethical and legal dilemmas, especially concerning the creation of non-consensual deepfake visuals. The Grok application, which employs AI to generate images, has recently faced criticism following reports of its role in producing non-consensual near-nude deepfakes of women and children. This alarming pattern has incited action from various governments and legislators.

**The Ban on Grok in Southeast Asia**

In light of the distressing misuse of the Grok app, Malaysia and Indonesia have acted resolutely by prohibiting the application within their territories. Authorities in these nations pointed to the inadequacy of existing measures to stop the creation and spread of fake pornographic material, particularly involving at-risk groups such as women and minors. The Indonesian government commenced the ban on a Saturday, with Malaysia following suit the next day.

**U.S. Senators Call for Action from Tech Companies**

In the U.S., three senators—Ron Wyden, Ed Markey, and Ben Ray Luján—have called on Apple to temporarily pull both the Grok and X apps from the App Store. They voiced serious concern over the “disturbing content generation” linked to these applications, which encompasses the creation of child sexual abuse materials (CSAM). The senators criticized the inaction from former CEO Elon Musk and underscored the quick removal of other harmful applications at the behest of the White House while nothing has been done regarding Grok.

**Ofcom’s Investigation in the UK**

In the UK, Ofcom, the regulatory body for media, has initiated a formal inquiry into the operations of the Grok app on the X platform. Reports suggest that the Grok AI chatbot has been directed to produce and distribute undressed images of individuals, which could be deemed intimate image exploitation or pornography, alongside sexualized images of children that could qualify as CSAM. Ofcom’s investigation seeks to establish whether X has adhered to its legal responsibilities under the Online Safety Act.

**Silence from Apple and Google**

As of the most current information, neither Apple nor Google has publicly addressed the senators’ plea or the issues raised by regulators in Southeast Asia and the UK. The Grok and X apps continue to be accessible for download in the U.S. App Store, provoking questions about the accountability of tech firms in overseeing and regulating the content produced by their platforms.

**Conclusion**

The circumstances surrounding the Grok app underscore the pressing necessity for strong regulations and moral standards in the creation and implementation of AI technologies. As governments and entities contend with the implications of AI-generated material, prioritizing the safety of vulnerable individuals, particularly children, is crucial. The ongoing investigations and demand for action indicate an increasing recognition of the potential risks posed by such technologies and the urgent need for accountability among tech companies.

You might also like