The female gaze in AI regulation

The lack of diversity among coders exacerbates inaccuracies and gender blind spots in AI models.
Prathiksha Ullal
Palak Jain

Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration

Credit: Reuters Photos

Artificial Intelligence (AI) is emerging as a transformative force, and this has not gone unnoticed as governments worldwide clamour to get ahead in the race for AI innovation. India is not far behind, having launched the India-AI Mission. Through this initiative, the government will deploy AI for societal benefit by encouraging its application in sectors such as agriculture, healthcare, weather forecasting and disaster management.


While deploying AI for good is a noble pursuit, deploying AI applications that are unchecked for bias is not. Many global studies have revealed that AI applications often exhibit machine learning biases, mirroring human prejudices such as gender bias. In a country where women make up 48.8% of the population, it is important to ensure that AI applications deployed for large-scale societal benefit are free of such bias.

Algorithmic bias occurs when AI systems reinforce societal inequalities such as those based on race, gender or caste. Gender bias enters the system through gender-skewed datasets and coders' inherent prejudices. These biases shape how AI interprets data, making it less inclusive and more prone to discrimination in decision-making. While human decision-making also involves some degree of bias, bias in AI is more problematic because it can amplify prejudice across far larger populations. When such systems are deployed in decision-making, the social costs are correspondingly higher.

For instance, a recent UNESCO study revealed that commonly used large language models like OpenAI's ChatGPT and Meta's Llama exhibited gender bias: women were associated with traditional roles such as 'home', 'family' and 'children', whereas men were associated with 'salary', 'executive', and the like. Similarly, hiring algorithms trained on past data, where 'successful' candidates were mostly men, are likely to disadvantage female applicants. This was evident in Amazon's automated hiring tool in 2018, which displayed bias against women. When such models are extrapolated to the governance level, such as in identifying beneficiaries for welfare schemes, the repercussions can be far more damaging.

With India’s data challenges and the government’s push for AI-driven governance, ignoring bias can disproportionately affect women — depriving them of their fundamental rights and making gender discrimination more systemic. Many women already face Aadhaar failures due to data gaps, and unregulated biased AI could further hinder access to essential welfare schemes like maternity benefits, financial inclusion and healthcare.

Compounding this issue is the fact that AI is both "systemically and socially constructed". This means that it mirrors the biases of its creators. Currently, only 22% of AI professionals worldwide are women. The situation in India is worse, with women making up only 14% of those involved in STEM research. The lack of diversity among coders exacerbates inaccuracies and gender blind spots in AI models.

This is due to limited access to STEM education for girls, workforce drop-offs and inadequate recruitment efforts. A lack of early exposure further shrinks the talent pool of women entering AI careers. To curb bias in AI applications, governmental intervention must focus on eliminating gender bias and employing a female gaze in developing a regulatory framework for AI in India.

Employing a female gaze in AI regulation involves embedding feminist principles to avert the unintended impacts of gender-blind legislation. The aim is to break the male standard of law and, in this case, to ensure that AI systems relied upon in decision-making do not amplify gender discrimination. The regulatory landscape ought to take measures to remove any form of bias during the development, deployment and diffusion of AI technologies. This can be done by regulating the AI development life cycle, ensuring coder diversity and embedding anti-bias mechanisms into legislation.

For example, the European Union's Artificial Intelligence Act mandates stringent bias detection and mitigation provisions, as well as strict testing for high-risk AI systems before their deployment in the EU marketplace. Such measures, among others, could help embed a female gaze in AI regulations, thereby preventing gender bias and the unintended discriminatory consequences of technology for women.

Building an inclusive and fair AI ecosystem requires a dual entry point for governmental intervention to embed a female gaze in such systems. First, the government should invest in affirmative action through targeted policies that offer scholarships and mentorship programmes to encourage women to pursue careers in STEM and AI-related research fields.

Second, governments must regulate AI applications to eliminate gender bias by enforcing accountability laws such as the GDPR and incentivising bias-mitigation tools. Technological measures such as explainable AI, ethical testing, fairness monitoring and clear regulatory frameworks must be in place to prevent discrimination in deployment and decision-making.

The way forward to deploy AI-for-All and AI-for-good is to remove all forms of bias from AI systems, so that the technology can genuinely deliver on its promise for everyone.

(Prathiksha is a research fellow at Vidhi; Palak is a lawyer and independent policy consultant)

(Published 24 March 2025, 00:57 IST)