Labelling lies in the AI age
Harshita Gupta
Suhana

Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration

Credit: Reuters Photos

In a telling reminder that the boom in technology has set us racing against deception itself, the government's recent proposal to amend the Information Technology Rules to mandate the labelling of AI-generated content, and a subsequent public interest litigation seeking judicial regulation of deepfakes, arrived within fifty hours of each other. Together, they underscore a digital age that demands more than rhetoric.


The proposed amendment to Parts I and II of the IT Rules (Rules 2, 3, and 4) requires all synthetically generated audio-visual, textual, or image-based content that "reasonably appears to be authentic or true" to carry a prominent label or embedded metadata declaring its artificial origins. The burden of truth thus shifts onto host platforms and content creators, who must obtain user declarations, perform technical verification, and apply labels covering at least 10% of visual content or the first 10% of an audio clip.
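
To make the threshold concrete, consider how a platform might render a compliant label. The sketch below is a minimal illustration using the Pillow imaging library: it draws a full-width banner whose height is one-tenth of the image, one reading of the 10% coverage rule. The banner's placement, colour, and wording are our assumptions; the draft prescribes only the coverage threshold.

```python
# A minimal sketch of the 10% visual-coverage rule using Pillow (PIL).
# The banner placement and label text are illustrative assumptions; the
# draft rules specify only the minimum coverage, not the presentation.
from PIL import Image, ImageDraw

def add_ai_label(path_in: str, path_out: str, text: str = "AI-GENERATED") -> None:
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    # A full-width banner 10% of the height covers 10% of the image's area.
    banner_h = max(1, h // 10)
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - banner_h), (w, h)], fill="black")
    draw.text((10, h - banner_h + banner_h // 4), text, fill="white")
    img.save(path_out)

# Usage: add_ai_label("synthetic.jpg", "synthetic_labelled.jpg")
```

As the sketch makes plain, such a label is pure pixels: it can be cropped or painted over, a weakness the discussion returns to below.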

The need for such a measure is pressing. India faces a deluge of AI-driven scams: voice cloning, image manipulation, fake travel offers, and fabricated endorsements have become alarmingly common. A recent study found that 69% of Indian adults struggle to distinguish an AI-generated voice from a human one, while 47% have either been targeted by an AI-voice scam or know someone who has. Meanwhile, the spectre of deepfakes being used to perpetrate political or financial fraud looms large. In this light, the amendment aims not only to protect the rights of individuals whose images and voices may be misused but also to safeguard the integrity of public discourse, electoral trust, and cognitive autonomy in our hyper-connected society.

While the proposed amendment is necessary, several issues warrant closer scrutiny. First, the public consultation window closes on November 6, barely two weeks after notification, leaving stakeholders little time for meaningful participation. Experts have flagged this timeline as inadequate, given the technical, legal, and societal complexities of regulating AI-generated content.

Second, the practical burden of real-time detection poses formidable challenges. Unlike the relatively monolingual markets of the US or the UK, India's digital ecosystem spans 22 official languages and hundreds of dialects. Accurately identifying synthetic content in regional languages such as Hindi, Tamil, or Bengali requires detection models far more sophisticated than those currently available. Beyond language, the rapid evolution of deepfake technology and the high computational cost of real-time monitoring pose further challenges. The amendment provides no clarity on which authority will oversee detection, and law enforcement agencies often lack the technical expertise and resources for scalable, real-time forensic work.

Moreover, the amendment's definition of "synthetically generated information" is vague. The provision captures any content "modified or altered" to "reasonably appear authentic", risking the inadvertent criminalisation of routine artistic filters, minor digital enhancements, and creative expression. The arbitrary 10% label-size requirement also lacks empirical justification, threatening to intrude on expression, chill free speech, and raise fundamental constitutional questions.

Building a robust regulatory framework for a digital economy requires integrating international best practice. On the language problem, South Korea's example is compelling. Its forthcoming AI Framework Act adopts a dual-layer approach, combining immediate visual alerts with an embedded, language-agnostic metadata identifier, a sophisticated and scalable design that can maintain content provenance across India's official languages and moves beyond simple content-takedown models. Similarly, the ambiguous terminology could be addressed through a risk-based taxonomy along the lines of the EU's AI Act, which confines the heaviest obligations to genuinely high-risk systems and imposes calibrated transparency requirements on lower-risk uses.
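
A minimal sketch of the metadata half of such a dual-layer scheme appears below, assuming a PNG carrier. The "ai-provenance" key and its JSON payload are illustrative inventions, not fields defined by the Korean Act, the EU AI Act, or the Indian draft; because the payload is machine-readable rather than displayed text, it works identically whether the surrounding content is in Hindi, Tamil, or Bengali.

```python
# A sketch of a language-agnostic provenance tag carried in a PNG text
# chunk. The "ai-provenance" key and JSON payload are illustrative
# assumptions, not fields defined by any statute discussed here.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_provenance(path_in: str, path_out: str, generator: str) -> None:
    img = Image.open(path_in)
    meta = PngInfo()
    meta.add_text("ai-provenance", json.dumps({
        "synthetic": True,        # machine-readable, independent of display language
        "generator": generator,   # which tool produced the content
    }))
    img.save(path_out, pnginfo=meta)

def read_provenance(path: str):
    raw = Image.open(path).text.get("ai-provenance")  # PNG text chunks live in .text
    return json.loads(raw) if raw else None

# Usage: tag_provenance("synthetic.png", "synthetic_tagged.png", "example-model-v1")
```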

Finally, the draft rule's quantifiable yet technologically blunt 10% coverage threshold is best refined by mandating that a permanent, machine-readable digital signature (metadata) be embedded in all AI-generated media, a concept already used in jurisdictions such as China. Such signed metadata ensures traceability irrespective of the content's visible formatting and supersedes the need for a large, intrusive, and easily cropped visual label.
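
To make the traceability idea concrete, the sketch below signs media bytes with an Ed25519 key via the cryptography package and verifies them later. The detached-signature workflow and the key handling are our assumptions, not anything specified in the Chinese provisions or the Indian draft; the point is that verification does not depend on how, or whether, a label is visibly displayed, while any alteration of the bytes is immediately detectable.

```python
# A sketch of a machine-readable digital signature over media bytes.
# Key generation and the detached-signature workflow are illustrative
# assumptions; real deployments would use registered, audited keys.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_media(media: bytes, key: Ed25519PrivateKey) -> bytes:
    return key.sign(media)  # 64-byte signature, shipped alongside the file as metadata

def verify_media(media: bytes, signature: bytes, key: Ed25519PrivateKey) -> bool:
    try:
        key.public_key().verify(signature, media)
        return True
    except InvalidSignature:
        return False  # the bytes were altered, or signed by a different key

key = Ed25519PrivateKey.generate()   # in practice, a generator's registered key
media = b"example synthetic media bytes"
sig = sign_media(media, key)
assert verify_media(media, sig, key)
assert not verify_media(media + b"tampered", sig, key)
```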

Ultimately, the promise of the proposed amendment lies not in its punitive reach but in its capacity to architect a sound ecosystem that balances innovation with accountability. The challenge before the government, therefore, is to refine ambition with clarity, and to move from a posture of reaction to one of anticipation. By extending the consultation process, adopting risk-differentiated standards, and putting traceability ahead of aesthetic compliance, India can build a system that not only polices deception but also cultivates digital trust.

(The writers are students of National Law University, Jodhpur)

(Published 31 October 2025, 07:07 IST)