Battle of ballots in the age of AI

Who can protect India’s general elections from deepfakes and disinformation?

Political parties have the most power to effect change; only a lack of intent stands in the way.
Last Updated : 08 March 2024, 21:04 IST


With an estimated 60-80 elections (depending on the source) covering nearly 4 billion people, 2024 is the year of elections. In India alone, nearly a billion people are estimated to be eligible to vote. Given the ongoing global conversation about the impact of generative AI in general, and deepfakes in particular, it is natural to ask how the two will interact, and what that may mean for democracies. The simple but unsatisfying answer to those questions is that it depends on many factors.

In India, it is important to look at the role generative AI may play in elections, alongside existing conditions, such as opaque electoral funding, asymmetry in resources, blending of state and party machinery (whether at the Union or state level), institutional capture, extent of ‘ethical flexibility’ in politics, as well as factors such as independence of the press, information literacy, existing social divisions, and so on.

Many of these factors will be exploited with or without the capabilities that generative AI offers for producing deceptive information more easily. Within this context, it will always be difficult to attribute, conclusively, the role that generative AI may have played in determining electoral outcomes. Nevertheless, the question warrants attention from the perspective of how we can limit the potential impact on India's General Elections and beyond.

A reasonable place to start is to look at the three most significant actors, their ability to respond, and where they may fall short. They are the technology companies (mainly social media platforms), the Election Commission of India (ECI), and political parties themselves.

Tech platforms and companies: Capability gap

Social media and communication platforms will serve as the main distribution channels for deceptive generative AI-based information, whether they like it or not. This is to be expected in spaces with low entry costs and a heavy reliance on user-generated content. With the Union government's posturing on safe harbour protections, and its wielding of the (deeply flawed) Information Technology Rules like a sword of Damocles, the platforms will, at the very least, want to create the impression of responding.

Even if their efforts are genuine, there are still significant challenges to overcome. Limited previews of OpenAI's Sora indicate that detecting synthetically generated video is going to become harder, because researchers and analysts rely on visual 'glitches' as indicators, and newer models produce fewer of them. Audio 'deepfakes' are already extremely difficult for practitioners to detect, with investigations often proving inconclusive. Much has been made of a recent pledge by technology companies, but the multi-faceted problems of scale and complex social issues mean they face a capability gap. Besides, many of them have already signed and implemented 'voluntary codes' on elections in India and on disinformation in the EU and Australia. It is hard to say whether any of these were effective, and the same companies have meanwhile pared down their trust and safety operations.

The non-deterministic nature of outputs means that moderation or restriction of prompts by generative AI tools themselves can only produce limited results. Similarly, watermarking, whether visible or invisible, can often be bypassed by simple techniques, or avoided entirely by using tools that do not enforce such practices. Many of these efforts are brittle and crumble against motivated bad actors, as the sketch below illustrates. With elections fast approaching, expect limited relief, if any, from this avenue.
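To see why watermark-based defences are brittle, consider a toy illustration. The sketch below, which assumes Python with NumPy and Pillow installed, embeds a naive invisible watermark in the least significant bit (LSB) of each pixel, a deliberately simplified stand-in for real provenance schemes rather than any platform's actual implementation, and shows that a single ordinary JPEG re-encode (the kind of transformation a screenshot or re-upload applies routinely) destroys it.

```python
import io

import numpy as np
from PIL import Image


def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one watermark bit in the least significant bit of each pixel."""
    return (pixels & 0xFE) | bits


def extract_lsb(pixels: np.ndarray) -> np.ndarray:
    """Read back the least significant bit of each pixel."""
    return pixels & 0x01


rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)    # stand-in image
watermark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)  # hidden bits

marked = embed_lsb(image, watermark)
assert np.array_equal(extract_lsb(marked), watermark)  # intact while lossless

# One lossy JPEG round-trip -- the everyday equivalent of a re-upload.
buffer = io.BytesIO()
Image.fromarray(marked).save(buffer, format="JPEG", quality=90)
buffer.seek(0)
recompressed = np.asarray(Image.open(buffer))

recovered = extract_lsb(recompressed)
survival = (recovered == watermark).mean()
print(f"Watermark bits surviving a JPEG re-encode: {survival:.1%}")
# Roughly 50%, i.e. no better than coin-flipping: the watermark is gone.
```

Robust watermarking schemes do exist, but the broader point stands: any signal a tool embeds can be attacked, and tools that never embed one cannot be policed this way at all.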

ECI: Capacity constraints

Like the technology companies, even a neutral, independent election commission will struggle to make meaningful interventions. While the Model Code of Conduct will be in effect, the rapidly evolving nature of generative AI, and the detection challenges it brings, mean that the ECI's ability to act specifically against deceptive uses will remain downstream of the detection capabilities of technology companies, independent fact-checking entities, researchers, and others. Nor can it, conceivably, develop the technical capacity to act independently in the time that remains.

Political actors: Lack of intent

The actors with the most agency in this scenario are the political parties themselves, along with their affiliates. A strong signal from party leadership, such as a public commitment against the use of deepfakes, combined with active, ongoing condemnation of deceptive uses of technology-based tools for negative campaigning by their own support base, would send a clear message that such methods are not welcome.

Neither technology nor regulation can fix what political actors and sections of society insist on breaking. In this context, so close to elections, the political class has the most power to effect change and limit the impact of the deceptive uses of technological tools.

And yet, every day, they make an active choice not to act. What they lack is intent.

Prateek Waghre

(The writer is the Executive Director of the Internet Freedom Foundation)

Published 08 March 2024, 21:04 IST


