Deepfake videos of John Mearsheimer surged across YouTube, prompting the American scholar to seek their removal and exposing the difficulties in battling AI-based impersonation. Mearsheimer dedicated months to urging Google, which owns YouTube, to delete hundreds of these synthetic videos, a challenging task that serves as a warning about the vulnerability of professionals to misinformation and identity theft in the AI era.
Recently, his office identified 43 YouTube channels producing AI-generated content with his likeness, some portraying him making controversial statements about global rivalries. One deepfake circulated on TikTok, claiming he was commenting on Japan’s tense relationship with China following Prime Minister Sanae Takaichi’s support for Taiwan in November. Another realistic AI clip, voiced in Mandarin targeting Chinese viewers, falsely suggested Mearsheimer was asserting U.S. influence was waning in Asia while China’s power grew.
“This is deeply troubling because these videos are fake and crafted to appear authentic,” Mearsheimer told AFP. “They undermine honest dialogue, which we rely on and which YouTube should support.” The main obstacle was a slow, inefficient process for reporting impersonation, which allowed action only if the victim’s name or photo appeared in the video’s title, description, or profile picture. Consequently, each deepfake required a separate takedown request, demanding significant time and effort.
Despite efforts, the spread of AI-fabricated videos continued. Some channels changed their names slightly, such as “Jhon Mearsheimer,” to evade removal. After months of persistent effort, YouTube managed to shut down 41 of the 43 known channels. However, these videos had already gained considerable traction, and the risk of new ones emerging remains.
“AI accelerates the creation of false content,” said Vered Horesh from AI startup Bria. “When anyone can produce convincing images of you in seconds, the real damage is losing the ability to credibly deny what is attributed to you. The burden of proof shifts to the victim.” She emphasized that safety measures must be built into product design rather than left to after-the-fact takedowns.
A YouTube spokesperson reaffirmed their commitment to responsible AI use, stating that they enforce policies evenly across creators regardless of AI involvement. In CEO Neal Mohan’s 2026 priorities letter, the company announced plans to improve systems for reducing low-quality AI content and to expand creative AI tools.
Mearsheimer’s ordeal highlights a new internet landscape flooded with deception, where rapid AI advances distort reality and enable malicious actors to impersonate professionals. Fake AI-generated content can often evade detection, misleading unsuspecting viewers. Recently, doctors, CEOs, and academics have all fallen victim to such impersonations used to distribute fraudulent products, financial advice, or propaganda.
Recognizing the threat, Mearsheimer plans to launch his own YouTube channel to help educate users about deepfake risks. Jeffrey Sachs, a Columbia University economics professor, is doing the same in response to the proliferation of fake videos of himself, describing the process as a “whack-a-mole” challenge. He noted that tracking down fakes is difficult and that they often come to light only long after they have spread, causing ongoing headaches for his team.



