<p>In late 2025, the Nagpur Municipal Corporation rolled out facial recognition attendance for thousands of its sanitation workers. Attendance is now recorded through a mobile app requiring facial scans. Movement is tracked by GPS through the working day. The system does not ask whether the worker consents. It asks whether she wants the job.</p><p>This is not an efficiency upgrade. It is a surveillance architecture built on top of the most vulnerable segment of India’s workforce, dressed in the language of governance reform.</p><p>The problems are technical before they are philosophical. Years of physical labour wear fingerprints smooth. Facial recognition software fails regularly in low light and dusty conditions. GPS systems place workers in the wrong location. When the machine records an absence that did not happen, there is no human supervisor to hear a correction. The algorithm is the final judge. The worker loses wages for hours she worked. That outcome is not a glitch. It is a policy choice about who bears the cost of error.</p><p>This matters because India already knows what algorithmic exclusion costs at scale. Between 2015 and 2018, the Right to Food Campaign documented starvation deaths in Jharkhand directly tied to Aadhaar-linked authentication failures in the Public Distribution System. Biometric systems failed to recognise the fingerprints of the elderly and of manual labourers. Ration cards were cancelled for non-linkage. Families lost access to food.</p><p>The Parliamentary Public Accounts Committee flagged authentication failure rates of 8 to 17 per cent among NREGA workers in Andhra Pradesh and Telangana as a serious governance concern. The people most exposed to these failures were the people nobody asked before the system was designed.</p><p>AI-powered attendance systems in public employment repeat this structural error.
The Digital Personal Data Protection Act of 2023, with rules notified in November 2025, classifies biometric data as sensitive personal information requiring explicit consent before collection. For a daily-wage worker, that consent is a fiction. The choice is between scanning your face and losing your livelihood. A signature on a form does not become voluntary because it sits next to a pen.</p><p>India’s AI Governance Guidelines, released on November 5, 2025, do not address this. The framework orients itself toward developers and deployers. Its seven sutras include People First, promising human-centric design and human empowerment. Sanitation workers are people. The framework does not reach them. The proposed AI Safety Institute carries no current mandate to audit workplace surveillance systems or to assess whether continuous biometric tracking of low-wage workers is proportionate to the stated administrative goal.</p><p>The philosophical cost of this gap is not abstract. When an algorithm determines whether a worker was present, the citizen is no longer a deliberative agent. She becomes a data point, classified by a system she did not help design, corrected through a process she may not know exists, and penalised by a machine that records no context. India, given its plural social fabric and the concentration of low-wage work in public employment, faces an acute version of this risk. What gets encoded into these systems is a political question: whose threshold for acceptable error applies, whose definition of a valid biometric read counts, whose conception of adequate supervision governs. Treating it as a technical question does not resolve the politics. It removes them from scrutiny.</p><p>Three changes would make a concrete difference. Every AI-based attendance system in public employment must carry a human-operated backup. A biometric failure should trigger a human review, not a wage deduction.
The AI Safety Institute should be mandated to conduct proportionality audits before surveillance systems are deployed on low-wage public workers, requiring employers to demonstrate that continuous biometric tracking is necessary and not merely administratively convenient. Workers must have the right to access, correct, and delete their biometric records. The DPDP Act grants these rights in principle. They need enforcement architecture with teeth.</p><p>The 2026 India-AI Impact Summit spoke of sovereign compute and AI for All. Across India’s cities, sanitation workers now start their shifts by holding their faces up to a phone camera in the dark. Both things are true at the same time. The distance between them is the actual state of India’s AI governance. If we build a digital India where the most exposed workers are tracked like equipment and left without recourse when machines err, we have not built a People First system. We have built a more efficient version of the same exclusion.</p><p><em>(The author is an independent researcher and policy writer)</em></p><p>(Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH)</p>