<p>We all want to understand the world around us. Perhaps we want more clarity about the war in Gaza or what our government is doing about the healthcare our family relies on. It might be something as simple as changes to a bus route that will affect our daily commute. No matter how momentous or mundane the issue, we have a right to news we can trust.</p><p>We’ve all been there: scrolling our feed, seeing an astonishing clip or a shocking must-share story. But now we must constantly question what’s real and what’s the creation of artificial intelligence.</p><p>AI-generated output is so convincing today, and shapes so much of the information we consume, that we risk being unable to trust anything any more. And mistrust is the fuel that drives conspiracism, social polarisation and democratic disengagement.</p><p>In reality, the integrity of what we call ‘news’ is being eroded by the very tools that are meant to help us make sense of the world.</p><p>This World News Day, we want to underline that the public are entitled to the facts that professional journalists, and the organisations they work for worldwide, are committed to finding, corroborating and sharing.</p><p>However, the technology companies building the AI systems that millions of people use daily are falling far short of their responsibility to truth.</p><p>Original research carried out this year by the BBC found that half of AI-generated answers to news-related requests left out important details or made other key errors.</p><p>The AI assistants they tested consistently churned out garbled facts, fabricated or misattributed quotes, decontextualised information, or paraphrased reporting with no acknowledgement.</p><p>So what? It is useful and saves time; it will improve, and we can live with the errors. Except we are not talking about a cake recipe or holiday recommendations. Democracy is at stake, because a society with no common understanding of what’s true cannot make informed choices.
And individuals who rely on a deceptive distortion of originally independent, accurate journalism risk losing themselves in a toxic mire of half-truths and bad-faith manipulation.</p><p>This isn’t faraway, abstract paranoia. The internet is already inundated with synthetic fakery designed to deceive, drive clicks, and promote vested interests. AI-generated voices, faces, and headlines are degrading the information ecosystem, often with no clear provenance or accountability.</p><p>Meanwhile, the output of journalists serving the public interest, especially in local, regional, and independent media, is being scraped without permission, algorithmically repackaged, and redistributed with no credit or compensation.</p><p>This phenomenon is arguably more pernicious than the glaring, outrageous deepfakes we have all seen, because the inaccuracies are subtle, plausible and more likely to mislead. We are witnessing the sabotage of news we need to be able to rely on, and that is draining already depleted reserves of public trust.</p><p><strong>So, what can be done?</strong></p><p>The European Broadcasting Union and WAN-IFRA, together with a fast-growing collective of other organisations representing thousands of professional journalists and newsrooms around the world, are calling for urgent changes to how AI developers interact with news and the people who produce it.</p><p>Many of the broadcasters and news publishers we represent are using AI responsibly to enhance their journalism without compromising editorial integrity, such as by automating translation, helping detect misinformation, or personalising content. They are mindful that the deployment of these tools must be principled, transparent and carefully handled.</p><p>That’s why we are presenting five clear requirements to AI tech companies. These are not radical; they are realistic, common-sense standards that any ethical technology developer can and should embrace:</p><p><strong>No content without consent.</strong>
AI systems must not be trained on news content without permission. That content is intellectual property, created through rigorous work and public trust. Unsanctioned scraping is theft that undermines both.</p><p><strong>Respect value.</strong> High-quality journalism is expensive to produce but vital for society’s wellbeing. AI tools that benefit from that work must compensate its creators fairly and in good faith.</p><p><strong>Be transparent.</strong> When AI-generated content draws on news sources, those sources should always be clearly cited and linked, because accuracy and attribution matter. We are entitled to know where information came from and whether it differs from the original.</p><p><strong>Protect diversity.</strong> AI tools should amplify pluralistic, independent, public-interest journalism. A robust, healthy information environment requires a representative cross-section of voices.</p><p><strong>Work with us.</strong> We invite AI companies to enter a serious, solutions-driven dialogue with the news industry. Together, we can develop standards for accuracy, safety, and transparency, but only if tech companies see journalists as partners, not as suppliers of free data to be mined and monetised.</p><p>We consider this a civic challenge that affects every person who relies on credible information to make decisions about their life, to form informed opinions or to decide whom to vote for.</p><p>Tech companies talk a lot about trust, but trust is not built on talk. We’re calling on the leaders of the AI revolution to get a handle on this problem now.
They have the power to shape the future of information, but we don’t yet see them taking their tools’ dangerous shortcomings, and their potential consequences, seriously enough.</p><p>Without urgent corrective action, AI won’t just distort the news – it will destroy the public’s ability to trust anything and anyone, which will be disastrous news for us all.</p><p><em>(Liz is Director of News, EBU, and Vincent is the CEO, WAN-IFRA)</em></p><p><em>(DH is part of the global initiative to mark World News Day on September 28, and this article is the first in a series that will be published as part of this initiative)</em></p><p>(Disclaimer: The views expressed above are the authors' own. They do not necessarily reflect the views of DH)</p>