
The European Union (EU) has opened an inquiry into Grok, the chatbot created by xAI, after reports indicated that it produced roughly 23,000 child sexual abuse material (CSAM) images in just 11 days. The inquiry reflects wider concern about the misuse of AI tools and their impact on child safety and privacy.
Grok, like many AI chatbots, is designed to generate images from text prompts. It has drawn criticism for weak safeguards that have allowed the creation of non-consensual semi-nude images of real people, including minors. A report from the Center for Countering Digital Hate (CCDH) found that Grok generated an estimated 3 million sexualized images over the 11-day period from December 29 to January 9, roughly 190 sexualized images per minute, with a child sexualized image produced every 41 seconds.
In response to the findings, three U.S. senators urged Apple CEO Tim Cook to temporarily remove both the X platform and the Grok app from the App Store, citing their “sickening content generation.” Neither Apple nor Google has moved to pull the apps. Meanwhile, two countries have already banned the Grok app, and investigations are underway in California and the UK.
The EU’s inquiry, opened under the Digital Services Act (DSA), will examine whether xAI put adequate measures in place to mitigate the risks posed by Grok’s tools on X and the resulting spread of potentially harmful content. Henna Virkkunen, the EU’s tech chief, underscored the gravity of the situation, stating that non-consensual sexual deepfakes are a violent and intolerable form of degradation.
If xAI is found in violation of the DSA, it could face fines of up to 6% of its annual global revenue. The case underscores the pressing need for regulatory frameworks to address the risks posed by AI technologies, particularly around child protection and privacy rights. The ongoing investigations highlight the responsibility tech companies bear for deploying AI responsibly and safeguarding vulnerable populations.