Grok, the AI chatbot developed by Elon Musk’s startup xAI, announced on Friday that it is working quickly to close vulnerabilities after reports emerged that the tool was generating inappropriate images, including pictures of women and children digitally altered into sexualized content.
In a statement shared on X (formerly Twitter), Grok emphasized, “We’ve identified gaps in our safeguards and are actively working to fix them.” The statement also reiterated that Child Sexual Abuse Material (CSAM) is illegal and strictly prohibited.
The concern surfaced after the late-December launch of an “edit image” feature, which allows users to modify existing images. Some users exploited the tool to strip clothing from photos of women and minors, prompting widespread complaints.
xAI, the Musk company behind Grok, responded to AFP with a brief automated message claiming, “The mainstream media lies.” The Grok chatbot itself, however, did engage with a user who questioned the company’s potential criminal liability for facilitating the creation and sharing of child pornography, underscoring the unresolved nature of the problem.
Internationally, authorities are taking notice. Officials in India are demanding details from X about efforts to remove explicit, nude, and sexually suggestive content generated by Grok without consent. Meanwhile, Paris prosecutors have expanded their investigation into X to include allegations that Grok is being used to produce and distribute child pornography, following an initial probe in July related to algorithm manipulation for foreign interference.
Grok has recently faced backlash over a series of controversial outputs, including remarks about the conflict in Gaza and India-Pakistan tensions, antisemitic statements, and misinformation surrounding a deadly incident in Australia.