ADVERTISEMENT
Deepfake Dilemma | Why India needs harsher laws to protect democracy
Abhishek Patni

Representative image showing a person reading a report on 'Deepfake'

Credit: iStock Photo

In the dead of night, hours before the crucial Maharashtra polls, Jency Jacob of Boomlive, a Mumbai-based fact-checking company, got a call from his colleague asking him to immediately check his X (formerly Twitter) account. Four audio clips had been released from the official X handle of the Bharatiya Janata Party (BJP).


The BJP’s claims were sensational. In one of the clips, Supriya Sule, a prominent NCP leader and the BJP’s principal opponent in Maharashtra, was allegedly heard seeking cash in exchange for Bitcoin. The charges levelled by the BJP were serious, but was it Sule’s voice?

It wasn’t until the next morning, after hours of analysis, that Boomlive’s team discovered the truth: the NCP leader’s voice was synthetic, generated by AI. The audio was a deepfake. But by then, the damage was already done. Released by the official handle of the ruling political party, the fabricated clip had been amplified by 24/7 news channels and digital media platforms, causing immense harm to the NCP.

Deepfakes like this are now a growing concern. Almost all political parties have been accused of using AI-generated content to manipulate elections to their advantage. If the BJP was under fire for spreading deepfake audio on the eve of the Maharashtra polls, during the April-May general elections it was the Congress that was accused of circulating a deepfake video of Home Minister Amit Shah. The video was allegedly designed to mislead voters into believing that the ruling BJP would end reservations for Scheduled Castes and Scheduled Tribes (SC/STs), with the potential to damage the saffron party's vote bank.

Such incidents are a grim reminder of how, in the age of AI and social media, synthetic media can profoundly damage the democratic process. During the 2024 general elections, Boomlive reportedly debunked over 250 claims of misinformation, with 12 cases identified as AI-generated content designed to polarise voters.

In an era when candidates increasingly rely on technology to reach voters, the prevalence of synthetically generated media is alarming. It is estimated that, in the two months leading up to the general elections, over 50 million calls were made using AI technology in India alone.

The threat extends far beyond India. A survey during the 2024 United States polls suggested that deepfakes and synthetic media have the potential to sway 81.5% of voters, with 36% reporting that such content changed their vote entirely. Beyond influencing elections, deepfakes also risk eroding voters’ trust in media and politicians — the foundations of any democratic society.

What is being done

Globally, nations are framing stringent laws to address this menace, targeting not only the perpetrators but also holding large social media platforms accountable for their role in spreading disinformation.

In September, California passed the ‘Defending Democracy from Deepfake Deception Act’. The law requires online platforms to remove harmful deepfake content within 72 hours of it being reported. It also allows candidates to seek relief measures against the platforms hosting such content.

However, the social media giants, for obvious reasons, are vehemently opposing this act. Just last week, Elon Musk’s X filed a lawsuit in federal court against the law, with Musk calling it an “insult to free speech”. Musk, himself a vocal supporter of US President-elect Donald Trump, has been widely criticised for allegedly using X to promote Trump’s campaign.

Taiwan, another country at the forefront of combating disinformation, has employed innovative methods to counter deepfakes. During the 2020 presidential elections, Taiwan's Ministry of Justice introduced a fact-checking bot modelled after a middle-aged Taiwanese woman, nicknamed ‘Auntie Meiyu’.

This bot appears across online campaigns, videos, social media posts, and even private social media groups like WhatsApp, offering tips and tools to help citizens identify fake news, deepfakes, and other forms of disinformation. Additionally, Taiwanese law holds platforms like Facebook and X accountable for combating disinformation, especially during elections. Noncompliance can result in fines of up to $62,000 per violation.

In the United States, several states, including California and Texas, have enacted laws to combat deepfakes during elections. California’s anti-deepfake Act prohibits creating and publishing false materials related to elections 120 days before election day and 60 days after the elections. The law also empowers the courts to stop the distribution of deepfake content and mandates the disclosure of political advertisements made using synthetic media. Meanwhile, Texas has criminalised the creation and distribution of malicious deepfake videos intended to harm candidates or influence election outcomes.

The European Union (EU) has taken a particularly tough stance on large social media platforms. Under the EU Digital Services Act, platforms are required to counter disinformation, including deepfakes, or face penalties of up to 6% of their global revenue for non-compliance. Similarly, South Korea has criminalised the creation and distribution of harmful deepfakes, with violators facing up to five years in prison or fines of approximately $43,000.

Laws against deepfakes in India

In stark contrast, India, the world’s largest democracy, is yet to enact specific laws to address the growing menace of AI-generated deepfakes during elections. No stringent regulations hold large social media platforms like X, Facebook, or WhatsApp accountable alongside the original perpetrators of such crimes.

In the absence of such laws, the Election Commission of India (ECI) is left with limited tools to counter deepfakes. During the 2024 general elections, the ECI issued basic guidelines requiring political parties to take down deepfake content within three hours of detection. Beyond these guidelines, India relies heavily on existing provisions in the Information Technology (IT) Act, 2000, such as Section 66D (punishing impersonation using electronic means) and Section 67 (prohibiting the transmission of obscene material).

In August, the Delhi High Court asked the Centre to frame laws to regulate AI and deepfakes, with the judges expressing fear that many AI tools could become a “menace for the society” if left unregulated. The court was hearing a plea filed by advocate Chaitanya Rohilla, who contended that the existing laws were inadequate for addressing deepfakes, and that concerns persist over the adequacy of the Digital Personal Data Protection Act, 2023.

What can India do?

India needs to enact laws specifically targeting deepfakes and synthetic media to safeguard its democracy. The ECI, for its part, must move swiftly to prevent the misuse of AI-generated deepfakes for misinformation during election campaigns, ensuring stringent penalties for violators.

Social media platforms must be mandated to detect and remove harmful deepfakes within a specified timeframe, with severe penalties for noncompliance. On the technology front, the government must counter technology with technology: it should collaborate with AI experts to develop tools that can swiftly detect and flag synthetic and manipulated media before it spreads.

The ECI must regularly launch campaigns to educate citizens about the dangers of deepfakes. Fact-checking bots, similar to ‘Auntie Meiyu’, could be deployed to debunk disinformation in real time. The ECI must also encourage political parties to sign a code of conduct committing to the ethical use of AI technologies during elections.

Lastly, it’s high time that all political parties recognise that AI is a double-edged sword. When used responsibly, it can be a powerful tool to communicate with voters and strengthen democratic participation. But its misuse threatens the very fabric of democracy.

(Abhishek Patni is a New Delhi-based senior journalist. X: @Abhishek_Patni)

Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH.

(Published 02 December 2024, 12:23 IST)