As misinformation surged during a four-day conflict between India and Pakistan, social media users turned to an AI chatbot for verification, only to encounter further inaccuracies, underscoring its unreliability as a fact-checking resource, AFP reports.
As technology companies reduce their reliance on human fact-checkers, many users are turning to AI chatbots—like xAI’s Grok, OpenAI’s ChatGPT, and Google’s Gemini—to seek trustworthy information.
The phrase “Hey @Grok, is this true?” has become a frequent request on Elon Musk’s platform X, highlighting the trend of instant fact-checking through social media.
However, the answers provided by these chatbots are frequently filled with inaccuracies.
Grok, in particular, has come under scrutiny for inserting “white genocide,” a far-right conspiracy theory, into unrelated discussions. It also misidentified old video footage from Sudan’s Khartoum airport as a missile strike on Pakistan’s Nur Khan airbase during the recent conflict with India.
It likewise misrepresented unrelated footage of a burning building in Nepal as “likely” showing Pakistan’s military response to Indian strikes.
“The rising dependency on Grok for fact-checking coincides with X and other major tech firms scaling back their investment in human fact-checkers,” McKenzie Sadeghi, a researcher at the disinformation watchdog NewsGuard, told AFP.
“Our research consistently shows that AI chatbots are not dependable sources for news and information, especially concerning breaking news,” she added.
‘Fabricated’
NewsGuard’s analysis indicates that ten leading chatbots often repeat falsehoods, including disinformation narratives from Russia and misleading claims related to the recent election in Australia.
A recent study by the Tow Center for Digital Journalism at Columbia University found that chatbots were generally poor at declining questions they could not answer accurately, offering incorrect or speculative responses instead.
In one instance, when AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, the bot not only confirmed the image was genuine but also invented details about her identity and location.
Grok recently labeled a purported video of a giant anaconda swimming in the Amazon as “real,” even citing credible-sounding scientific expeditions to support its false claim.
In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, yet many users cited Grok’s assessment as proof of the clip’s authenticity.
These findings raise alarms as surveys reveal a notable shift among online users from traditional search engines to AI chatbots for gathering and verifying information.
This transition is happening just as Meta announced it would discontinue its third-party fact-checking program in the United States, shifting the responsibility of debunking false information to ordinary users through a model called “Community Notes,” popularized by X.
Researchers have raised concerns about the effectiveness of “Community Notes” in addressing misinformation.
‘Biased Answers’
The role of human fact-checking has long been contentious in a highly polarized political environment, particularly in the U.S., where conservative advocates claim it suppresses free speech and censors content from the right—a point contested by professional fact-checkers.
AFP currently operates in 26 languages as part of Facebook’s fact-checking program, spanning Asia, Latin America, and the European Union.
The quality and accuracy of AI chatbots can vary based on their training and programming, raising concerns about potential political bias or manipulation in their outputs.
Musk’s xAI recently blamed an “unauthorized modification” for causing Grok to generate unsolicited references to “white genocide” in South Africa.
When AI expert David Caswell queried Grok about who might have altered its system prompt, the chatbot pointed to Musk as the “most likely” person.
Musk himself has previously propagated the unfounded assertion that South African leaders were “openly advocating for genocide” against white individuals, and his remarks on the subject remain controversial.
“We’ve observed how AI assistants can either invent results or provide biased responses due to specific instructions from human coders,” Angie Holan, director of the International Fact-Checking Network, told AFP.
“I am particularly worried about how Grok has mishandled inquiries regarding sensitive issues after being directed to deliver pre-approved answers.”