The ongoing conflict in the Middle East has prompted a surge in AI-generated misinformation. Besides completely fake images, there’s a rise in authentic-looking pictures that have been subtly altered, leading to distorted perceptions of real events on the ground.
One notable example shows a high-resolution photo of a kneeling U.S. pilot being confronted by a local in Kuwait, shortly after parachuting from his aircraft. This image was widely circulated online and picked up by media outlets. However, a close inspection revealed that the pilot appears to have only four fingers on each hand.
AFP fact-checkers analyzed the image using AI detection tools and identified a SynthID watermark—an invisible marker indicating the use of Google’s AI to create or modify the image. Despite this, the incident depicted seems genuine. A video circulating on social media from March 2 confirms the scene, and satellite data corroborates the location. Reports from that day also indicated that Kuwait mistakenly downed three U.S. aircraft.
AFP was able to find an earlier, blurry version of the same photo on Telegram that matched the high-resolution one in content but lacked detail. Detection tools found no traces of AI manipulation in the lower-quality image, suggesting it was the authentic source used to generate the clearer, AI-processed version.
“AI enhancements can subtly change textures, facial features, lighting, or background elements, making an image appear more realistic than the original,” explained Evangelos Kanoulas, an AI professor at the University of Amsterdam. This can be exploited to reinforce specific narratives—making protests seem more violent, crowds appear larger, or expressions seem more intense.
Another example involves a dramatic photo of a large fire near Erbil International Airport in Iraq, following Iranian missile strikes on March 1. While a SynthID watermark indicated that Google's AI had been used on the image, it was not entirely fabricated: the original showed a smaller fire with less vivid colors, indicating the picture had been enhanced rather than invented.
Experts warn that the line between subtle enhancement and full content creation is razor-thin. “Even minor modifications can lead to a completely different story,” said James O’Brien, a computer science professor at UC Berkeley. “This could significantly alter how people perceive events.” Additionally, AI tools are prone to hallucinating, adding elements not present in the original and potentially misleading viewers. For instance, an AI-enhanced image of a police shooting in Minneapolis in January falsely depicted the victim holding a weapon, although the original was a low-resolution frame showing him holding a phone.
As the U.S.-Israeli conflict with Iran intensifies, experts emphasize that without clear labeling, AI-boosted images threaten to further erode public trust. Such content already impacts people’s confidence in authentic visuals, with some questioning the truth even in genuinely real images.