<p class="bodytext">Karnataka’s decision to invest Rs 67.26 crore in an Artificial Intelligence (AI)-driven social media analytics system marks a decisive shift in how the state seeks to police the digital public square. The system will monitor content on platforms such as Facebook, YouTube, X, and Instagram, flagging hate speech, terrorism-related content, and child kidnapping rumours at a time when traditional policing tools have failed to keep pace with the speed, scale, and virality of online misinformation. The software claims not merely to fact-check but also to trace the origin of such content, helping authorities target habitual spreaders of misinformation. In an age when a single post can ignite communal tensions within minutes, the logic behind adopting technology is difficult to dispute. AI can process volumes of data far beyond human capacity, enabling faster intervention before online provocations spill onto the streets. For a state that has repeatedly witnessed social media-fuelled flashpoints, such early-warning capabilities could save lives and prevent damage.</p><p class="bodytext">However, this technological leap comes with serious concerns. Continuous monitoring inevitably raises questions of privacy and proportionality. Besides, AI algorithms often struggle with sarcasm, local dialects, and cultural nuances, risking ‘false positives’ in which harmless posts are flagged as threats, leading to legal harassment. These anxieties are sharpened by the Governor’s recent decision to reserve the Hate Speech and Hate Crimes (Prevention) Bill for Presidential assent, citing its vague definitions and potential to suppress legitimate dissent. An AI system trained on ambiguous legal standards risks amplifying those flaws, converting poor legislative drafting into automated overreach.</p><p class="bodytext">Globally, Karnataka is not an outlier.
The United Kingdom uses AI tools to monitor online hate and predict offline unrest. Singapore deploys algorithmic systems under its Protection from Online Falsehoods and Manipulation Act to identify coordinated misinformation and compel platform corrections. The United States uses AI primarily for national security, while the European Union’s Artificial Intelligence Act classifies such tools as “high-risk”, permitting them only under strict safeguards. Even Ukraine, confronting hybrid warfare, relies on AI to counter hostile disinformation campaigns. However, it should be remembered that technology is not the danger; unchecked power is. While AI can assist policing, it cannot replace constitutional restraint, transparent oversight, and robust prosecution. Karnataka’s experiment will succeed not by how much it monitors, but by how carefully it governs those who wield the technology. Without clear legal guardrails and independent accountability, this measure risks becoming a double-edged sword that adversely impacts the citizens it intends to protect, fraying the threads of the democratic fabric.</p>