<p>The story of how a contributor fooled tech magazine Wired, claiming authorship of a story written by AI, shows how the technology is sneaking past editorial due diligence.</p>
<p>Many are surprised that Wired, a highly regarded American magazine with a sophisticated understanding of tech, was duped so easily.</p>
<p>If AI is so good at erasing the line between illusion and reality, how do we protect ourselves from its consequences? A coldly comforting thought, given the many cases coming to light in recent weeks, is that AI is sneaking past editorial barriers not because it is smarter than humans, but because humans are passing off its work as their own.</p>
<p><strong>Voice mimicry</strong></p>
<p>Wired is not alone. AI-generated content is catching all manner of publications unawares. On July 21, a leading Indian business news publication ran a story about Trump posting a deepfake video about Obama. The story was accurate up to that point, but it then went on to quote US disinformation expert Nina Jankowicz as saying, “This deepfake is political disinformation at its worst. It erodes public trust, damages reputations, and poses serious threats to democratic stability.”</p>
<p>A day later, she came out to say she had never spoken to the newspaper or said anything of the sort. What had happened was that an AI language model had imagined what she might have said, and convincingly mimicked her tone and tenor.</p>
<p>News organisations are no strangers to cooked-up stories, and sniffing them out is all in a day’s work. But AI is a new beast, and organisations have not understood it well enough to come up with AI use policies.</p>
<p>On an individual level, reporters, sub-editors and page designers are using generative AI tools such as ChatGPT and Canva to write, edit and create infographics.
Meta AI and Grok are other commonly used AI tools in the newsroom.</p>
<p>“AI is not a problem as long as it is doing the grunt work,” says an academic-turned-journalist who heads the online operations of a mass-circulation publication headquartered in Delhi. AI can do the heavy lifting when it comes to summarising documents running into hundreds of pages. Where a human would take a day or two to read and summarise a policy document, AI does it in minutes. Similarly, referencing is a breeze for AI. It can scour a publication’s archives, built up over decades, to understand the back story.</p>
<p>With the advent of AI tools, publications will have one of two options — downsize, or use their human resources in tandem with AI to exponentially increase their productivity.</p>
<p>In print, desk and design jobs are more vulnerable than reporting ones, and one expert expects a major wave of disruption to hit the industry in about two years.</p>
<p>AI-enabled content management systems are a godsend for reporters adept at newsgathering but limited in their writing skills. “All they need to do is type a few keywords, and the system writes the story,” the web operations head says.</p>
<p>A reporter in Rajasthan who used to file one story a day now files four. The desk then takes over, and conventional due diligence kicks in.</p>
<p>A question the expert is frequently asked is whether AI will take away journalism jobs. “No, you will not be replaced by AI,” he says.
“You will be replaced by someone who knows AI better than you do.”</p>
<p><strong>Editors’ concerns</strong></p>
<p>Features sections in publications such as <em>DH</em> are flooded with pitches and article submissions, and in recent months, editors have had to exercise extra caution, copy-pasting paragraphs into AI content-detection software to check whether they were indeed written by the contributors claiming authorship.</p>
<p>Tools such as QuillBot and ZeroGPT help detect AI-generated submissions, but, as with all things AI, editors must keep an eye out for dubious calls the tools may make. Wired says the discredited pitches were cleared by AI content-detection tools.</p>
<p>Problems abound when stories are aggregations of stories already published online. Where is AI sourcing the information from, and how authentic is it? What are the risks of plagiarism, and how do you protect your publication from reputational loss and legal action?</p>
<p>Vendors of content management systems are pushing AI tools with the sales pitch that they can speed up the news flow, but few are in a position to indemnify publications against legal liability. Also, the current models create text and images with a Western audience in mind, and need repeated prompts to Indianise them.</p>
<p>The Wired story was about an abandoned mining town in Colorado called Gravemont, secretly being used to bury human remains. It was a startling, scandalously good story that no magazine would refuse. The only problem was that everything, from the pitch to the execution, was the work of AI, not a human.</p>
<p>The editors reportedly became suspicious only when the journalist demanded payment through PayPal and refused other channels. Wired and other publications have since retracted other articles carrying the writer’s byline.</p>
<p>The term for AI-generated narratives with no basis in reality is ‘hallucination’.
AI hallucinations can be entertaining, and can perhaps inspire professionals in domains such as cinema, where fantasy narratives are legitimate. But newspapers risk serious legal liability when they accept AI-generated content without human supervision.</p>
<p>“Fabulists and plagiarists are as old as the media itself. But AI presents a new challenge. It lets anyone craft a perfect pitch with a simple prompt and play-act the role of journalist convincingly enough to fool, well, us. We acted quickly once we discovered the ruse, and we have taken steps to ensure this does not happen again. In this new era, every newsroom should be prepared to do the same,” Wired said.</p>
<p><strong>Closer home</strong></p>
<p><em>DH</em> gives a written test to entry-level journalism aspirants. In the past year, many candidates have submitted test papers perfect in grammar and punctuation. It did not take long for senior editors to realise the applicants were taking help from ChatGPT.</p>
<p>A giveaway is the word ‘furthermore’, a ChatGPT favourite. For transitions, Indians are more likely to use words like ‘moreover’ and ‘however’. And since AI models borrow generously from Wikipedia, questions about, say, Siddaramaiah are answered with lines like ‘He is the chief minister of the southern Indian state of Karnataka.’ As if our readers did not know Karnataka was in southern India!</p>
<p>Younger journalists are adapting to AI quickly; where they need help is in understanding the significance of a story for their particular readership. More seasoned journalists are able to gauge the urgency, shelf life, relevance, and risks of a breaking story.</p>
<p>The diversity in newsrooms is not going to take a hit even when AI makes a more formal entry into the newsroom, says an expert.
Encouragement for journalists sometimes comes from unexpected quarters: Google Search appears to rank human-generated content higher.</p>
<p>On professional forums, two extreme positions are taken. The first is that all you need to run a news organisation is a CEO and a staffer who knows their way around AI.</p>
<p>The second is that AI will never replace journalists. Either way, it will not be long before we wake up to the phantom in the newsroom, learn about its strengths and treacheries, and explore the possibilities of co-existence. And furthermore…</p>