How to navigate the web of deepfakes

We need new regulations to protect privacy and block misinformation
Ayush Prakash

Representative image showing deepfake.

Credit: iStock

The recent emergence of a deepfake video featuring the Prime Minister of India participating in the rhythmic joy of garba serves as a stark reminder of the expanding web of deception spun by advanced artificial intelligence (AI). This incident, however, is not just a momentary spectacle but a chilling indicator of the broader, more insidious issue that this technology portends.


Deepfake is a form of synthetic media in which AI is used to create a digital copy of a person’s likeness or voice. It has a wide range of applications: in education, to generate interactive content; in the film industry, to replace a lead actor with their stunt double or to synchronise dubbing for foreign-language films; and in the retail sector, to improve the overall experience for potential buyers.

While the technology demonstrates remarkable utility across diverse domains, its multifaceted nature also engenders substantial concerns and ethical challenges. Foremost among these is the potential for misinformation and the erosion of trust. The ease with which deepfake algorithms can convincingly manipulate videos raises the alarming possibility of fabricated content being used to disseminate false narratives, spread fake news, and manipulate public opinion, compromising the integrity of information in the digital age.

It also poses a serious threat to individual privacy, as malicious actors can exploit it for identity theft or create deceptive content that can harm personal and professional reputations. Actor Rashmika Mandanna’s deepfake video, which sparked the row over deepfakes, is but one of hundreds of such videos portraying public figures in compromising circumstances.

The law currently being invoked to tackle the issue is the Information Technology Act, 2000, and the rules made thereunder. Section 66D of the Act makes it a punishable offence to use computer resources to cheat by personation. The Information Technology Rules, 2021, mandate that digital intermediaries make all reasonable efforts not to display content that impersonates another person, and that such content, once complained about, be removed within 24 hours of receipt of the complaint.

The current legal framework falls short in addressing the intricacies associated with deepfakes. The IT Act makes personation an offence only when it is related to cheating; personation intended simply to malign someone’s reputation is not covered unless it results in cheating. And although the IT Rules stipulate that offensive content be removed within 24 hours of being reported, purging it completely from the internet is easier said than done, especially once it has been copied, reproduced, or republished by multiple users. The global nature of the internet facilitates the sharing of information across borders, which means that tackling the issue comprehensively requires coordinating regulations and enforcement across jurisdictions. The extant laws are simply not adequate to resolve the multitude of regulatory impediments presented by deepfake technology. Specific legislation is therefore the need of the hour to take on this newfound menace.

Laws and regulatory measures adopted in other jurisdictions can be instructive in preparing a draft for domestic law. China is one of the first countries to have prepared draft legislation for deepfake regulation, which places greater responsibility on platforms by mandating disclosure whenever the technology is used in any media, and prohibits the distribution of deepfakes without a clear disclaimer that the content has been artificially generated. The European Union’s draft AI Act outlines transparency requirements: intermediaries would have to disclose that their content was AI-generated, distinguish deepfake images from real ones, and provide safeguards against the generation of illegal content. In the US, several states, including California, Texas, and Illinois, have enacted AI legislation targeting pornographic deepfakes.

India, like its contemporaries, must act promptly to create a framework to deal with synthetic media. In doing so, lawmakers must strike a balance between regulating harmful deepfakes and preserving freedom of expression. The term “deepfake” itself needs to be defined unambiguously, yet the definition must be broad enough to cover a wide range of deceptive content generated using AI. Taking cues from jurisdictions like China and the EU, which have already developed comprehensive drafts to mitigate these threats, may provide a starting point for drafting legislation. Addressing these challenges will be key to creating a safer and more secure digital landscape, ensuring that the benefits of the technology are harnessed while minimising the risks associated with the proliferation of deepfakes.

(The writer is a student at National Law University, Jodhpur)

(Published 13 December 2023, 04:16 IST)