AI Assistants Misrepresent News Content 45% of the Time
TL;DR: A European Broadcasting Union study coordinated by the BBC evaluated over 3,000 AI assistant responses across 14 languages, finding that 45% contained significant issues. Gemini performed worst with a 76% error rate, whilst sourcing problems affected 31% of all responses.
An intensive international study by the European Broadcasting Union (EBU) and BBC has revealed that AI assistants routinely misrepresent news content regardless of language, territory, or platform. The research involved 22 public service media organisations across 18 countries evaluating responses from ChatGPT, Copilot, Gemini, and Perplexity.
Systemic Failures Across Platforms
Professional journalists assessed AI responses against four criteria: accuracy, sourcing, distinguishing opinion from fact, and providing context. The findings demonstrate widespread problems: 45% of all AI answers had at least one significant issue, 31% showed serious sourcing problems including missing or incorrect attributions, and 20% contained major accuracy issues such as hallucinated details.
Google’s Gemini performed notably worse than its competitors, with significant issues in 76% of responses—more than double the rate of the other assistants—stemming largely from inadequate sourcing practices. Whilst comparison with earlier BBC research shows some improvement, error rates remain alarmingly high.
Democratic Implications
Jean Philip De Tender, EBU Media Director and Deputy Director General, emphasises the democratic stakes: “This research conclusively shows that these failings are not isolated incidents. They are systemic, cross-border, and multilingual, and we believe this endangers public trust.” The concern is particularly acute given that AI assistants are replacing search engines for many users—7% of online news consumers overall, rising to 15% amongst those under 25, according to the Reuters Institute.
Separate BBC research on audience perceptions reveals that many people assume AI summaries are accurate when they are not. Over a third of UK adults trust AI to produce accurate summaries, rising to almost half amongst those under 35. Critically, when users spot errors, they blame news providers as well as AI developers, potentially damaging trust in journalism itself.
Looking Forward
The EBU and participating members are pressing EU and national regulators to enforce existing laws on information integrity and media pluralism. They stress that ongoing independent monitoring is essential given AI’s rapid development. The research team has released a News Integrity in AI Assistants Toolkit to help address identified problems and improve media literacy amongst users.
Source Attribution:
- Source: BBC Media Centre
- Original: https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
- Published: 22 October 2025