
The ‘PhD-level’ Grok-4 debuted in July 2025, and from the start it drew criticism for its “eager” compliance with user commands. It placed minimal restrictions on user requests, even producing racist and hateful imagery on demand. Most people use Grok to generate original content or provoke reactions, but as with other AI platforms, users can also upload an image or interact with existing content on the platform. For instance, you might reply to a friend’s selfie and ask Grok to produce a new image that adds a top hat or a whimsical filter.
However, as reported by several news organizations, including a Reuters investigation, Grok is now being abused to create sexualized images of women and minors. AI-generated images of real people have surfaced that “dehumanize” users by digitally stripping away their clothing. Countless women have been victimized, which is disturbing enough on its own, but the way offending users interact with the tool is especially abhorrent. Some ask Grok to recreate images of women in skimpy bikinis or “very sheer” attire. Others outright instruct Grok to remove clothing entirely or to pose women in more sexually suggestive positions.
The UK regulator Ofcom has reportedly made “urgent contact” with X and xAI, the company behind Grok, and will investigate whether there are “potential compliance issues that warrant investigation.” Consequently, regulators across the