AI-generated videos circulating on Elon Musk’s X show realistic depictions of U.S. soldiers captured by Iran, a devastated Israeli city, and U.S. embassies in flames: a surge of convincing deepfakes despite the platform’s efforts to limit wartime misinformation. The Middle East conflict has spurred an unprecedented flow of AI-created images and videos that blur the line between real and fabricated content, leaving social media users struggling to tell the difference, experts say.
Last week, X announced it would suspend content creators from its revenue sharing program for 90 days if they post AI-generated war-related videos without clearly labeling them as artificially produced. Repeat violations could lead to permanent bans, according to X’s head of product, Nikita Bier. The policy marks a significant shift for the platform, which has faced criticism for becoming a hotbed of misinformation since Musk’s $44 billion purchase in October 2022.
The new policy drew some praise from State Department official Sarah Rogers, who called it a valuable complement to X’s Community Notes, the platform’s crowdsourced fact-checking system, in reducing the reach and monetization of false content. Disinformation researchers, however, remain skeptical. Joe Bodnar of the Institute for Strategic Dialogue noted that his monitoring feeds are still flooded with AI-generated war content, including a post from a verified blue-check account sharing an AI clip of Iran supposedly launching a nuclear strike on Israel. That video continued to rack up views despite the announced crackdown on AI misinformation.
When asked how many accounts have been demonetized since Bier’s announcement, X did not provide numbers. Meanwhile, AFP’s global fact-checkers have identified a series of fake AI videos about the Middle East conflict, many posted by premium accounts whose verified checkmarks can be purchased. The fakes include a distressed American soldier inside a bombed embassy, captured U.S. troops beside Iranian flags, and a destroyed U.S. Navy fleet. AI-generated visuals continue to appear faster than fact-checkers can debunk them, and X’s own AI chatbot, Grok, has at times made the problem worse by incorrectly vouching for the authenticity of some AI-created visuals.
Researchers warn that X’s earning model for premium accounts, where high engagement yields payouts, may incentivize the spread of misleading or sensationalist AI content. One such account ignored Bier’s request to label a fake AI video of Dubai’s Burj Khalifa in flames; the clip nonetheless amassed more than two million views.
A recent report by the Tech Transparency Project found that X appeared to profit from more than two dozen premium accounts linked to Iranian government officials and state-funded news outlets promoting propaganda, potentially breaching U.S. sanctions. In response, X removed verification checkmarks from some of these accounts.
Despite the new policies, many users sharing AI-generated content sit outside the revenue sharing program and continue spreading misinformation unaffected by demonetization. Their posts can still be fact-checked via Community Notes, though the system’s effectiveness has been questioned: over 90% of proposed Notes are never published, according to research last year by the Digital Democracy Institute of the Americas. Experts warn that stripped metadata on AI-generated content, combined with the limited reach of Community Notes, may undermine efforts to combat misinformation.