
Elon Musk’s Grok AI faces backlash over Gaza child image mistake

By Maisah Bustami
August 7, 2025
in News

A heartbreaking photo showing a starving girl in Gaza has ignited a fierce online debate, not only because of its emotional impact but also due to a misclassification by Elon Musk’s AI chatbot, Grok.

Although the image was captured in Gaza, Grok incorrectly identified it as originating from Yemen, sparking widespread confusion and accusations of misinformation. The episode has many questioning whether AI tools can reliably distinguish fact from fiction, especially when human lives and authentic stories are involved.

The powerful image, taken by AFP photographer Omar al-Qattaa, depicts a thin, malnourished girl in Gaza, where Israel’s blockade has intensified fears of a severe famine. When social media users inquired about its origin, Grok confidently claimed the photo was from Yemen, taken nearly seven years earlier.

This misinformation quickly circulated online, leading to French pro-Palestinian lawmaker Aymeric Caron being accused of spreading falsehoods by sharing the image. The controversy underscores the dangers of blindly trusting AI tools for fact verification, as technology remains imperfect.

Grok’s database attributed the photo to Amal Hussain, a seven-year-old Yemeni girl from October 2018. In reality, the image shows nine-year-old Mariam Dawwas, in Gaza City on August 2, 2025. Her mother, Modallala, revealed that before the conflict, Mariam weighed 25 kilograms, but she now weighs only nine. The girl’s diet consists solely of milk—sometimes unavailable—highlighting the dire humanitarian crisis.

When confronted about the error, Grok claimed it aims to rely on verified sources and does not intentionally spread fake news. Despite this, the AI later echoed the incorrect Yemen origin in follow-up responses. Historically, Grok has also produced controversial outputs, even praising Nazi leader Adolf Hitler and suggesting that individuals with Jewish surnames are more prone to online hate.

Louis de Diesbach, an expert in technology ethics, explains that AI systems like Grok function as “black boxes,” making their internal reasoning opaque. “We don’t truly understand why they give certain responses or how they choose their sources,” he notes. Each AI carries biases tied to its training data and instructions, which often reflect the ideologies of its creators.

De Diesbach criticizes Grok, developed by Musk's xAI, for displaying a strong bias aligned with Musk's political leanings, which are associated with the radical right. Asking an AI to determine a photo's origin also misunderstands what these tools are for: such models are not designed for accurate fact-finding but to generate content that merely seems plausible, regardless of its truthfulness.

In a previous incident, Grok misdated and mislocated another photo of a starving Gaza child, claiming it had been taken in Yemen in 2016. That error fueled false accusations against the French newspaper Libération, which had published the image.

AI biases stem from the training data and from the fine-tuning process known as "alignment," which shapes how the model responds. Acknowledging an error in one conversation does not change the underlying training or alignment, so the model may go on repeating the same mistake.

Other AI models, such as Mistral AI's Le Chat, have misidentified the same image, underscoring that AI should never be relied upon as a sole source for fact-checking. De Diesbach warns: "These models aren't built to tell the truth—they're made to generate content, true or false. Think of them as 'friendly pathological liars' that always have the potential to deceive."

Maisah Bustami

Maisah is a writer at Digital Phablet, covering the latest developments in the tech industry. With a bachelor's degree in Journalism from Indonesia, Maisah aims to keep readers informed and engaged through her writing.

© 2025 Digital Phablet
