Research from the European Broadcasting Union (EBU) and BBC reveals that nearly half of the responses generated by popular AI assistants distort or misrepresent news content. The study assessed 3,000 answers from leading AI tools, including ChatGPT by OpenAI, Microsoft’s Copilot, Google’s Gemini, and Perplexity, focusing on accuracy, source attribution, and the ability to distinguish fact from opinion.
Covering 14 languages, the findings highlight widespread inconsistencies and pose significant risks for users relying on AI for news. As media regulators and outlets become more alert to misinformation propagated by generative AI, the EBU and BBC emphasize the importance of transparency in how these tools handle and present news content. They warn that the rising popularity of AI assistants might blur the lines between credible journalism and fabricated information.
The study revealed that 45% of the AI responses contained at least one major issue, and 81% exhibited some form of problem. The companies behind these assistants have been contacted for comment. Google’s Gemini has previously stated on its website that it welcomes feedback to enhance its platform’s usefulness. Both OpenAI and Microsoft have acknowledged hallucinations (instances where AI outputs are inaccurate or misleading) and are actively trying to address the problem.
Perplexity claims that one of its “Deep Research” modes achieves nearly 94% factual accuracy. About a third of responses were found to have serious sourcing errors, such as missing, misleading, or incorrect attributions. Specifically, 72% of Gemini’s responses had notable sourcing issues, a rate far higher than that of the other assistants, which stayed below 25%. Additionally, 20% of responses across all tested AIs contained inaccuracies, including outdated information. For example, Gemini incorrectly reported recent changes to laws concerning disposable vapes, and ChatGPT erroneously identified Pope Francis as the current pope months after his passing.
The study involved 22 public-service media organizations from 18 countries, including France, Germany, Spain, Ukraine, the UK, and the US. As AI tools increasingly replace traditional search engines for news, concerns about eroding public trust grow. According to the EBU, when audiences do not know what to trust, they may end up trusting nothing, which can hinder democratic engagement.
Data from the Reuters Institute’s Digital News Report 2025 indicates that 7% of online news consumers and 15% of those under 25 use AI assistants to access news. The report calls for AI companies to be held accountable and to improve how their systems respond to news-related queries.