Deepfake challenge | India needs provenance, not just platform moderation

India can turn deepfake risk into a trust premium by embedding authenticity and accountability
Lloyd Mathias
Harsh Lailer
Representative image for deepfake. Credit: iStock Photo

In October, India introduced draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to cover ‘synthetically generated information’, commonly referred to as deepfakes. This marked India’s first serious attempt to regulate AI-generated content.


The Ministry of Electronics and Information Technology (MeitY) proposes that social media platforms and online intermediaries identify, label, and trace synthetic media, and take reasonable technical steps to detect and curb its dissemination. While efficient on paper, this approach places the burden largely on platforms. Verification obligations risk turning intermediaries into arbiters of permissible speech rather than neutral conduits. Faced with liability pressures, platforms may default to pre-emptive takedowns, raising concerns of over-censorship and suppression of legitimate expression.

Indian courts have already confronted deepfakes in the absence of clear legislative guidance. Actor Anil Kapoor secured an interim order from the Delhi High Court restraining the unauthorised use of his name, likeness, voice, and persona through AI-generated content. Aishwarya Rai Bachchan and Abhishek Bachchan similarly moved against Google and YouTube over AI-generated deepfakes misusing their images and voices, alleging violations of privacy, personality, and publicity rights. These cases illustrate how deepfakes strain existing legal doctrines not designed for synthetic media at scale.

The draft rules are an important step forward. They recognise that synthetic media is no longer fringe, but central to digital governance. Yet structural gaps remain. Deepfakes are treated primarily as a platform compliance issue, rather than as a systemic integrity challenge requiring shared accountability, technical traceability and institutional oversight.

Globally, more robust governance models rest on three pillars: clear statutory outcome obligations; a lead digital or AI regulator empowered to approve technical standards and enforce compliance; and a standardised content-provenance framework. India’s proposals lean toward the first pillar and partially toward the second. The third — provenance — remains underdeveloped. Without an embedded provenance layer, labelling risks becoming a fragile, easily stripped measure.

What the world is trying

International experience offers useful signals. The European Union's AI Act mandates disclosure of AI-generated deepfakes and promotes machine-readable markers that travel with the content. The United States' Take It Down Act criminalises non-consensual intimate imagery, including AI-generated deepfakes, while imposing takedown obligations on platforms once such content is flagged.

The United Kingdom has criminalised sexually explicit deepfakes under its Online Safety Act, while Denmark has proposed copyright reforms recognising an individual's body, face, and voice as protectable interests.

The common lesson is clear: deepfakes cannot be governed through platform moderation alone. They require enforceable rights, durable traceability and clear accountability.

A playbook for Indian scale

First, India should designate a single lead regulator for synthetic content. While MeitY anchors the current amendments, governance remains fragmented across content regulation, data protection, and platform oversight. An autonomous digital or AI regulator empowered to approve codes of practice, mandate audits, and impose proportionate penalties would reduce fragmentation and provide regulatory certainty.

Second, India should implement a simple provenance system (call it, say, 'CrediMark') in which every AI-generated or AI-edited file carries a persistent digital tag indicating its origin, method, and time of creation. Platforms should be required to preserve and surface this tag to users. Unlike visible labels or watermarks, which can be removed, a travelling provenance credential provides durable authenticity. Aligning this with global standards such as the C2PA ecosystem would also enable Indian startups to innovate in verification and authenticity tools.
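To make the provenance idea concrete, here is a minimal illustrative sketch in Python of how such a travelling credential could work. This is not the C2PA specification or any standard proposed in the draft rules; the function names and the choice of Ed25519 signing are assumptions for illustration only. The core idea is that the tag binds origin, method, and creation time to a hash of the file's exact bytes and signs the bundle, so stripping a visible label does not erase provenance, while any alteration of the content breaks verification.

```python
# Illustrative sketch only: a hypothetical travelling provenance credential.
# Not C2PA and not a proposed MeitY standard; names and crypto choices are
# assumptions made for this example.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_credential(file_bytes: bytes, origin: str, method: str,
                    key: Ed25519PrivateKey) -> dict:
    """Build and sign a provenance manifest bound to the file's content."""
    manifest = {
        # Binds the tag to these exact bytes; any edit invalidates it.
        "content_sha256": hashlib.sha256(file_bytes).hexdigest(),
        "origin": origin,        # e.g. the generating tool or publisher
        "method": method,        # e.g. "ai-generated" or "ai-edited"
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}


def verify_credential(file_bytes: bytes, credential: dict,
                      public_key: Ed25519PublicKey) -> bool:
    """Check that the content is unaltered and the manifest is authentic."""
    manifest = credential["manifest"]
    if manifest["content_sha256"] != hashlib.sha256(file_bytes).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False


# Usage: a generator signs at creation time; a platform verifies on upload.
key = Ed25519PrivateKey.generate()
media = b"...synthetic image bytes..."
cred = make_credential(media, origin="example-genai-tool",
                       method="ai-generated", key=key)
assert verify_credential(media, cred, key.public_key())
assert not verify_credential(media + b"tampered", cred, key.public_key())
```

In a real deployment the credential would travel inside the file's metadata container and be anchored to the kind of credential registry the article envisions, rather than being carried as a separate JSON object.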

Third, obligations should be risk-tiered. All synthetic media should carry a basic disclosure. High-impact contexts, such as elections, government communications, mass media, and financial-market content, should trigger stricter duties, including pre-release authenticity certification, rapid takedown protocols, audit requirements, and transparency reporting. Calibrating regulation to risk protects critical integrity zones without stifling benign innovation.

Fourth, India should create a regulatory sandbox for generative and synthetic media technologies. Platforms and creators could test advanced watermarking, provenance flows, and detection tools under supervision, in exchange for enhanced logging and audit obligations. This ‘safe-innovation loop’ balances safeguards with India’s ambition to be a global generative AI hub.

Finally, enforcement must be meaningful, but proportionate. A graduated framework (warnings for first breaches, escalating fines for non-compliance with provenance or disclosure duties, and service restrictions for repeat offenders) would mirror global best practice. Legal rules must also be backed by infrastructure, including media-forensics labs, credential registries, and public awareness initiatives.

A layered vision

Trust emerges from layered authenticity, detection, and accountability. Detection alone will not suffice: as generative systems evolve, the recognisable artefacts that detectors rely on will disappear. For India, pairing detection with verifiable content credentials offers economic upside. It builds a trust premium for Indian media, reduces fraud, attracts capital into safety tooling, and creates exportable expertise in verification and forensic services.

With a single regulator, clear provenance obligations, and innovation-friendly safeguards, India can turn deepfake risk into a strategic advantage and help shape global standards for digital trust.

Lloyd Mathias is an angel investor and independent director. X: @LloydMathias

Harsh Lailer is a policy enthusiast and an independent researcher.

Disclaimer: The views expressed above are the authors' own. They do not necessarily reflect the views of DH.

(Published 19 January 2026, 10:58 IST)