DPDPA gaps delay privacy promise
In the age of automation, safeguards against AI bias are critical to ensuring true privacy and equality
Utkarsh Yadav
Harshita Gupta

Digital privacy. Credit: iStock Photo

Eight years after the landmark K S Puttaswamy judgement affirmed privacy as a fundamental right, its promise remains unfulfilled. The judgement, invoking the Preamble, recognised privacy as an enabling right essential for the fulfilment of all other fundamental rights, including equality. Yet, as automation becomes pervasive across sectors such as healthcare and social security, India's legal framework proves inadequate in addressing the biases and discrimination that arise.


While the Digital Personal Data Protection Act (DPDPA), 2023, is a step in the right direction, it suffers from significant shortcomings. Section 7 of the Act provides for “certain legitimate uses”, creating loopholes for extensive profiling and automated decisions. It allows personal data to be processed without explicit consent for a range of purposes, including state functions and employment, and not only when the data has been voluntarily provided.

A more significant weakness is the Act’s failure to define profiling or to regulate automated decision-making. In simple terms, automated decision-making is the use of algorithms to make a decision based on a given set of facts. Profiling, on the other hand, means analysing various aspects of an individual in order to make judgements about them.

Although “automated processing” is defined, the definition is not used to grant any substantive rights to affected individuals. This legislative lacuna is glaring, especially given the B N Srikrishna Committee’s recommendation of safeguards for automated decision-making. While Section 8(3) requires data fiduciaries to ensure data accuracy when making decisions that affect the data principal, it does not mandate a right to challenge the decision-making process itself. This opacity creates an accountability vacuum, making it virtually impossible to challenge unfair or discriminatory outcomes.

The consequences of this regulatory vacuum are profound, manifesting in tangible bias and discrimination across vital sectors. In public services, algorithmic systems like Telangana’s Samagra Vedika, designed to assess welfare eligibility, have reportedly excluded approximately 15,000 marginalised individuals due to technical glitches or flawed data.

The financial sector faces a significant challenge from digital lending algorithms that can inadvertently perpetuate historical biases, leading to unequal access to credit. This was highlighted by a recent incident involving an Indian NBFC, where an Artificial Intelligence (AI) tool miscategorised over 17,000 low-income applicants as high-risk. The system’s bias, which favoured applicants with a strong digital footprint and extensive data trails, was corrected only after human intervention, underscoring the vital role of the “human-in-the-loop” approach. The incident is a powerful reminder that while the RBI’s FREE-AI framework is a proactive step, human oversight remains indispensable in AI-driven credit decisions.

People engaged in platform work are also at the mercy of algorithms and automated decision-making. Studies show that unregulated use of AI in the gig economy can be detrimental to platform workers. While states such as Rajasthan and Karnataka have passed bills to regulate platform work, these bills do not address the use of AI by companies to ‘manage’ their workforce.

A case for amendment

This human cost is compounded by the absence of a “right to explanation” in the DPDPA. The Act defines “gain” and “loss”, but uses these terms only for monetary penalties under Section 33, not to grant relief for the tangible harm caused by automated data processing. Furthermore, since the right not to be subject to solely automated decision-making has not been incorporated in the DPDPA, people are left without a remedy in cases of discrimination or error in the automation process. This lack of legal remedy offends the maxim ubi jus ibi remedium (where there is a right, there is a remedy), particularly given that privacy is a fundamental right.

This legislative vacuum in India stands in stark contrast to global frameworks. The European Union’s General Data Protection Regulation (EU GDPR) and the UK GDPR provide crucial safeguards, such as mandatory Data Protection Impact Assessments (DPIAs) and the right not to be subject to solely automated decisions. These protections, along with the EU AI Act’s classification of activities like credit assessment as high-risk, impose strict requirements on providers, including human oversight and data quality checks. All of these safeguards are missing from the DPDPA.

As we reflect on the anniversary of the Puttaswamy judgement, it is clear that its promise of digital rights remains unfulfilled when automated systems can discriminate without our knowledge or consent. India possesses a unique opportunity to lead in ethical AI governance by amending the DPDPA. By including a right to explanation, a clear definition of profiling, and specific regulation of automated decisions, we can fulfil the true promise of privacy and equality in the digital age.

(Utkarsh is a final-year law student at RMLNLU, Lucknow; Harshita is a student at National Law University, Jodhpur)

(Published 21 August 2025, 01:13 IST)