<p>The tussle between law and technology is hardly new. Generative artificial intelligence has brought this conflict into everyday life. When an AI tool writes a news-style summary in seconds, mimics a well-known author’s voice, or generates artwork strikingly similar to an illustrator’s style, the question arises: whose creativity is being used, and who should be paid for it? Copyright law, long tasked with balancing innovation and incentive, now finds itself at the centre of a debate that affects not just lawyers and technologists, but journalists, artists, musicians, and ordinary users.</p><p>AI systems are trained on vast quantities of text, images, music, and audiovisual works, much of which is protected by copyright. At the same time, these systems promise immense public benefits, from productivity gains to breakthroughs in healthcare, education, and scientific research. The interaction of AI and copyright is thus not a niche legal dispute, but a structural policy question with implications for India’s creative economy, its technology sector, and its broader developmental goals.</p><p>News content created by journalists allegedly finds its way into AI training pipelines without permission or compensation. It is against this backdrop that the Working Paper on Generative AI and Copyright, released on December 8, 2025, by the Department for Promotion of Industry and Internal Trade (DPIIT), deserves close attention. Rather than waiting for judicial outcomes to crystallise the law through piecemeal decisions, it represents a bold and thoughtful attempt to seize the policy moment.</p><p>The working paper is aimed at facilitating a framework that safeguards the rights of content creators while enabling responsible generative AI innovation and equitable access to technology. It identifies two core problem areas. The first concerns the use of copyrighted material as input – the legal and policy questions surrounding the training of AI systems on protected works. 
The second relates to copyright claims over AI-generated output, including questions of authorship, originality, moral rights, and liability. Part I of the paper focuses on the input side, recognising that unless training practices are addressed, downstream questions about output will remain unresolved.</p><p>The paper comprehensively surveys developments across jurisdictions, examines emerging judicial thinking, and evaluates multiple regulatory models before arriving at its recommendations. It explicitly rejects voluntary licensing as the primary solution. This model, the committee notes, suffers from serious flaws: the reluctance of rights-holders to transact with AI developers, prohibitively high transaction costs, fragmented negotiations across sectors, and the likelihood that only well-resourced incumbents would benefit. More worryingly, insufficient access to diverse training data could result in biased or low-quality AI systems, while erecting entry barriers for startups and MSMEs.</p><p>Equally significant is the rejection of a text and data mining (TDM) exception with an opt-out regime. This approach would allow AI developers to freely use copyrighted works for training unless a rights-holder explicitly opts out. The committee’s objections are grounded in practical reality. If large numbers of creators opt out, data availability shrinks, degrading AI quality. If they do not – owing to a lack of awareness, resources, or technical capacity – they receive no compensation. Implementing a meaningful opt-out system would require standardised, machine-readable notices across platforms, raising serious compliance and enforcement challenges.</p><p>The committee recommends a statutory licensing framework, anchored in the principle of “one nation, one licence, one payment”. A new body, the Copyright Royalties Collective for AI Training (CRCAT), would function as a central entity, with copyright societies among its members. 
It would collect royalties from AI developers, based on rates fixed by a government-appointed committee, and distribute them to rights-holders.</p><p>Crucially, the paper proposes a revenue-share model rather than upfront licensing fees. Payments – a percentage of the gross global revenue earned from the commercialisation of AI systems trained on copyrighted content – would arise only once revenue is generated. This, the committee argues, lowers entry barriers for innovation and aligns remuneration with economic benefit.</p><p>The DPIIT working paper is nuanced, comprehensive, and balanced. While recognising that neither creators nor innovators can be sacrificed at the altar of the other, it acknowledges that waiting indefinitely for courts to resolve these questions risks creating an uncertain regulatory environment. There are open questions about the calibration of royalty rates and the governance of the CRCAT, but India needs a starting point. This paper provides one that is principled, pragmatic, and rooted in Indian realities.</p><p>In a world where machines increasingly learn from us, the challenge is not to stop them from learning, but to ensure that human creativity continues to matter and to be valued in the process.</p><p><em>The writer is a practising lawyer with expertise in disability rights and IP law, and is co-founder of Mission Accessibility. He wears more hats than he can sometimes count.</em></p><p><em>Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH.</em></p>