<p class="bodytext">It was a pleasant evening in Bengaluru when a political campaign video suddenly surfaced on a social media feed. The post, shared by someone with clear ideological leanings, appeared at first glance to be heavily AI-generated. With its sharp graphics, deep colour tones and synthetic visuals, the production was designed to mimic reality. Every word was carefully scripted, every cadence unnaturally smooth. Yet, despite the polish, the video left the viewer oddly unmoved – because it lacked any grounding in lived reality. </p>.<p class="bodytext">In the months leading up to the next round of elections, Indian politics has been quick to embrace the newest tool in the arsenal of persuasion: artificial intelligence. The Election Commission’s recent advisory to political parties—directing them to clearly label AI-generated and synthetic content—signals both the scale of adoption and the risks it carries. </p>.<p class="bodytext">On the surface, the technology is impressive. A party worker no longer needs an entire creative team to produce a rousing campaign video; a few prompts to an AI tool can generate a lifelike speech, a catchy jingle, or a digitally enhanced rally scene. For campaign strategists, AI offers scale and speed unmatched by traditional methods. Messages can be tailored to specific voter segments, rebranded overnight, and pushed across platforms before the next news cycle ends. But therein lies the problem.</p>.<p class="bodytext">The very tools that allow rapid, low-cost production also make it easier to blur the line between authentic political communication and manufactured spectacle. AI can seamlessly alter faces, voices, and events, creating material that appears real but never actually happened. Without clear labelling, voters may consume—and be influenced by—synthetic content without realising it. 
The danger lies not only in the spread of falsehoods but also in the erosion of trust in all campaign material. When every image or clip could be artificial, citizens may begin to doubt even genuine messages. </p>.<p class="bodytext">The EC is right to insist on disclaimers such as “AI-generated” being prominently displayed. Yet enforcement will be difficult, particularly given the decentralised nature of digital campaigning. A recent Lokniti-CSDS study on the 2024 elections found that most campaign songs on YouTube—whether for the Congress or the BJP—came from independent supporter channels rather than official party handles. In such a fragmented environment, accountability becomes elusive. A deepfake speech or AI-crafted smear can go viral before any fact-check catches up, often amplified by enthusiastic supporters unconcerned with accuracy. </p>.<p class="bodytext">Kerala’s recent experiments—where political actors hired vloggers and digital influencers—show that the digital battleground is no longer limited to party offices. This ecosystem thrives on engagement, not verification. AI, with its capacity to churn out endless variations of tailored messages, fits seamlessly into this model. But in politics, where credibility is currency, that convenience carries a cost. </p>.<p class="bodytext">There is also a subtler, longer-term effect. When voters are bombarded with AI-crafted content, the tone of political communication shifts. Campaigns risk becoming more about style than substance, more about emotional hooks than policy clarity. Carefully constructed manifestos are replaced by algorithm-optimised slogans; reasoned debates give way to viral clips. Over time, this hollows out democratic conversation, leaving citizens with a diet of polished, persuasive but shallow political messaging. </p>.<p class="bodytext">Technology will not roll back. The responsibility now lies with political parties to use AI transparently and with voters to demand that transparency. 
Labelling AI-generated content is not a bureaucratic formality; it is a safeguard for informed choice. If political actors allow the rush for digital dominance to erode that clarity, they will find that the short-term gains of persuasion may come at the long-term cost of trust. </p>.<p class="bodytext">The danger is not simply that voters will believe false information. It is that, over time, they may stop believing any information at all. When every speech could be faked, when every video could be staged, trust in the political process itself is at stake. </p>.<p class="bodytext">A functioning democracy depends on more than just the act of voting—it depends on an informed electorate making choices based on clear, credible information. That clarity is not optional; it is the foundation of the democratic contract between leaders and citizens. </p>.<p class="bodytext">The temptation to use AI for instant political gains will be hard to resist. In a crowded media environment, the speed and volume AI offers can be decisive. But if parties prioritise digital dominance over communicative clarity, they may find the long-term costs outweigh the short-term benefits. </p>.<p class="bodytext">Trust, once lost, is not easily rebuilt. Political actors would do well to remember that in the age of AI, every piece of synthetic content they deploy carries two messages: the one they intend and the unspoken signal about their willingness to blur the line between truth and fabrication.</p>.<p class="bodytext">As India approaches another electoral season, the choice is clear. AI can be a powerful ally in reaching voters—if used with honesty and restraint. Without that restraint, it risks becoming a weapon that erodes the very foundation of the democratic conversation. Machine-generated content often deploys politically charged statements without any deep understanding of local realities, and the results feel hollow. Clarity, in politics as in life, is not a luxury. 
It is a responsibility. In the AI age, meeting that responsibility is the only way to ensure that technology serves democracy rather than undermines it.</p>.<p class="bodytext"><span class="italic">(Neelatphal is an assistant professor and co-project director (ICSSR-JJM), <br />Department of Media Studies, at Christ Deemed to be University, Bengaluru, and Ishayu is with the University of Leeds, UK)</span></p>