<p>Geneva: Leading <a href="https://www.deccanherald.com/tags/artificial-intelligence">AI</a> assistants misrepresent news content in nearly half their responses, according to new research published on Wednesday by the European Broadcasting Union (EBU) and the <em>BBC</em>.</p>
<p>The international study examined 3,000 responses to news-related questions from leading artificial intelligence assistants - software applications that use AI to interpret natural-language commands and complete tasks for a user.</p>
<p>It assessed AI assistants, including ChatGPT, Copilot, Gemini and Perplexity, in 14 languages for accuracy, sourcing and the ability to distinguish opinion from fact.</p>
<p>Overall, 45 per cent of the AI responses studied contained at least one significant issue, and 81 per cent had some form of problem, the research showed.</p>
<p><em>Reuters</em> has contacted the companies for comment on the findings.</p>
<p>Gemini, Google's AI assistant, has previously stated on its website that it welcomes feedback so it can continue to improve the platform and make it more helpful to users.</p>
<p>OpenAI and Microsoft have previously said that hallucinations - instances in which an AI model generates incorrect or misleading information, often because of factors such as insufficient data - are an issue they are seeking to resolve.</p>
<p>Perplexity says on its website that one of its "Deep Research" modes achieves 93.9 per cent accuracy in terms of factuality.</p>
<p><strong>Sourcing errors</strong></p>
<p>A third of the AI assistants' responses showed serious sourcing errors, such as missing, misleading or incorrect attribution, according to the study.</p>
<p>Some 72 per cent of Gemini's responses had significant sourcing issues, compared with below 25 per cent for all other assistants, it said.</p>
<p>Accuracy issues, including outdated information, were found in 20 per cent of responses across all the AI assistants studied, it said.</p>
<p>Examples cited by the study included Gemini incorrectly stating changes to a law on disposable vapes and ChatGPT reporting Pope Francis as the current Pope several months after his death.</p>
<p>Twenty-two public-service media organisations from 18 countries, including France, Germany, Spain, Ukraine, Britain and the United States, took part in the study.</p>
<p>With AI assistants increasingly replacing traditional search engines for news, public trust could be undermined, the EBU said.</p>
<p>"When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation," EBU Media Director Jean Philip De Tender said in a statement.</p>
<p>Some 7 per cent of all online news consumers, and 15 per cent of those aged under 25, use AI assistants to get their news, according to the <em>Reuters</em> Institute’s Digital News Report 2025.</p>
<p>The report urged that AI companies be held accountable and that they improve how their AI assistants respond to news-related queries.</p>