<p>The recent India AI Impact Summit, while showcasing the country’s advancing technological capabilities, also raised concerns over AI’s ethical inconsistencies. It is now widely accepted that biases entrenched in society are reflected and amplified by algorithms and AI-based products. This can be traced to inherently discriminatory training datasets and unequal modes of data collection.</p><p>Industries are increasingly employing AI systems to make important decisions at a faster pace. Algorithms are deployed by companies to shortlist candidates for hiring, by financial institutions to extend credit to customers, by healthcare workers to prioritise candidates for vaccines, and in many other fields. However, since these algorithms reflect the prevalent norms of society through the data fed into them, the resulting gender bias may reinforce stereotypes and discriminate against women.</p><p>A feminist lens is essential for identifying and eliminating gender bias in the data economy. The times call for a scrutiny of data science from the perspective of what is being termed “data feminism”.</p><p>In 2018, Amazon landed in controversy when its automated system for sorting job applicants’ resumes rejected women’s applications. The algorithm’s preference was based on the composition of the company’s existing workforce, which consisted mostly of men. The system learned to penalise candidatures on detecting terms like “female”, “women’s college” or “women’s team” in resumes. This demonstrates how technology can perpetuate existing biases.</p><p>Consumer credit industries have faced criticism for discriminating against women in the allocation of credit.
In a striking case of algorithmic bias, a 2019 incident revealed that a woman and her husband reported the same income, expenses, and debt, yet a credit card company set the woman’s credit limit at almost half the amount allotted to her husband.</p><p>Moreover, feedback loops between data inputs and outputs tend to reinforce existing stereotypes and prejudices about gender roles. For instance, translation software often renders gender-neutral English terms as stereotypical, gendered words in other languages, such as Spanish. A researcher noted that a translation application would convert “the doctor” and “the nurse” in English to “el doctor” and “la enfermera” in Spanish, reinforcing the notion that doctors are male and nurses are female.</p><p>Similarly, many image recognition technologies have been reported to exhibit bias in images corresponding to occupations. A study observed this during a keyword search on a platform: for “nurse”, 80% of the generated images were of women, while for “CEO”, only 10% were. In reality, about 30% of the CEOs of leading businesses are women. Such search results and translations feed into existing prejudices and mislead people about women in the workforce.</p><p>Inclusion through participatory design</p><p>Humans generate and collect the data that goes into training modules; humans determine which datasets algorithms can learn from to make predictions. These stages can introduce human biases into an algorithm, affirming a reality – data cannot be truly neutral. Such biases can be addressed through a more participatory design process or by developing better training datasets that represent diverse genders and communities. Another promising way to correct these gender biases would be to increase the representation of women in the tech industry.</p><p>Prioritising gender justice in data science can help in creating more functional AI systems.
A 2020 study by Bo Cowgill et al. has shown that demographically diverse teams are better at reducing algorithmic bias. Participatory design in the development of technologies would ensure gender sensitivity. Bias in data collection can also be reduced if the data analyst employs participatory modes of collection, such as crowdsourcing. A 2019 UNESCO report on embedding gender equality into AI principles is an important document in this regard. Moving past the gender binary, valuing multiple forms of knowledge, and using data to challenge unequal power structures are some healthy practices that could help fill gender gaps in data. The National Council of German Women’s Organisations has published a policy paper titled ‘Achieving a gender-equitable digital transformation’, in which scholars have come together to devise ethical principles for designing gender-sensitive datasets.</p><p>Improving the design and management of AI systems can ensure a more just and representative world. Ethical data governance should be a top priority for all stakeholders involved. Conversations on gender are vital in creating inclusive, socially beneficial data systems and AI. The challenges are manifold, but as long as critical enquiry is not muted, there is hope.</p><p>(The writer is a judge based in Rajasthan, with an avid interest in gender studies and human rights)</p> <p><em>Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH.</em></p>