A heartbreaking photo showing a starving girl in Gaza has ignited a fierce online debate, not only because of its emotional impact but also due to a misclassification by Elon Musk’s AI chatbot, Grok.
Captured in Gaza, the image was incorrectly identified by Grok as originating from Yemen, sparking widespread confusion and accusations of misinformation. Many question whether AI tools can reliably distinguish facts from fiction, especially when human lives and authentic stories are involved.
The powerful image, taken by AFP photographer Omar al-Qattaa, depicts a thin, malnourished girl in Gaza, where Israel’s blockade has intensified fears of a severe famine. When social media users inquired about its origin, Grok confidently claimed the photo was from Yemen, taken nearly seven years earlier.
This misinformation quickly circulated online, leading to accusations that French pro-Palestinian lawmaker Aymeric Caron had spread falsehoods by sharing the image. The controversy underscores the dangers of blindly trusting AI tools for fact verification, as the technology remains imperfect.
Grok attributed the photo to Amal Hussain, a seven-year-old Yemeni girl, and dated it to October 2018. In reality, the image shows nine-year-old Mariam Dawwas, photographed in Gaza City on August 2, 2025. Her mother, Modallala, said that before the conflict Mariam weighed 25 kilograms; she now weighs only nine. The girl's only nourishment is milk, which is not always available, underscoring the dire humanitarian crisis.
When confronted about the error, Grok said it aims to rely on verified sources and does not intentionally spread false information. Despite this, the chatbot repeated the incorrect Yemen attribution in follow-up responses. Grok has produced controversial output before, including praising Nazi leader Adolf Hitler and suggesting that people with Jewish surnames were more likely to spread online hate.
Louis de Diesbach, an expert in technology ethics, explains that AI systems like Grok function as “black boxes,” making their internal reasoning opaque. “We don’t truly understand why they give certain responses or how they choose their sources,” he notes. Each AI carries biases tied to its training data and instructions, which often reflect the ideologies of its creators.
De Diesbach criticizes Grok, developed by Musk's xAI, for displaying a strong bias aligned with Musk's political leanings, which are associated with the radical right. Asking an AI to determine a photo's origin also misuses the tool: these models aren't designed for accurate fact-finding but to generate content that sounds plausible, regardless of whether it is true.
In an earlier incident, Grok wrongly claimed that another photo of a starving child in Gaza had been taken in Yemen in 2016, fueling false accusations against the French outlet Libération, which had published it.
Such biases stem from the training data and from the fine-tuning phase known as "alignment," which shapes how the model responds. Correcting the training data after the fact doesn't necessarily remove these biases, so the model may still answer inconsistently or inaccurately.
Other AI models, such as Mistral AI's Le Chat, have misidentified the same image, a reminder that AI should never be the sole basis for fact-checking. De Diesbach warns: "These models aren't built to tell the truth; they're made to generate content, true or false. Think of them as 'friendly pathological liars' that always have the potential to deceive."