
Artificial Intelligence: Align human rights, biz imperatives

There’s limited State intervention in the technical nuances of how private businesses use technology
Last Updated 19 September 2022, 17:56 IST

Abundant data, widespread digitisation, and attractive efficiency gains have driven the development and use of Artificial Intelligence (AI). However, this rapid growth is not isolated from human rights abuses, which often stem from the way AI technologies are deployed. In July 2022, Aapti Institute, a Bengaluru-based tech think-tank, in collaboration with the Business and Human Rights (Asia) programme at UNDP India, examined the impact of AI deployment on the human rights of consumers in finance and healthcare, and on the labour force in gig work and retail in India. This work builds on existing research, such as the Human Rights Guide by the Danish Institute for Human Rights, which has found that a human rights-respecting approach by businesses can enhance individual and community well-being and drive sustainable economic growth.

Our research identified numerous sector-specific risks and found commonalities across sectors. There was an overarching risk to privacy as sensitive data was being collected without adequate safeguards. Additionally, businesses were unable to explain how AI technology arrived at decisions, such as health predictions of patients in the healthcare sector or credit scores of borrowers in the financial sector.

Some risks were sector-specific. Our research found that in healthcare, inaccurate diagnoses stemmed from the use of biased datasets. Predictions around heart attacks, for example, were based on symptoms experienced by Indian men. Retail, on the other hand, faces the more widely known AI risk: the replacement of workers with automation.

Despite varying risks, we observed a common thread across sectors. Risks related to AI deployment can cause reputational damage and a loss of goodwill, leading to fewer customers and decreased turnover for businesses. What's more, the research made it clear that the risks stemmed not merely from the technology itself, but from a combination of the technology, the company's internal business policies, and the regulatory landscape outlined by the State.

There's limited State intervention in the technical nuances of how private businesses use technology. It is businesses that play the critical role in developing AI and determining how it works. In gig work, bonuses and monetary incentives for workers are tied to a minimum number of work hours as a matter of company policy, not as a function of the operation of the AI system. Similarly, in digital lending, the parameters considered by AI technology to determine an individual's credit score are set by the company developing the AI. AI thus mirrors business policy, preferences and choices.

Legal and regulatory frameworks can guide AI deployment by businesses and influence core company functioning and data protection practices. But a lack of regulation makes the State itself a contributor to human rights risks. For instance, the absence of data protection legislation in India deprives citizens of rights over their data and of any meaningful avenues of recourse for grievances.

The UN Guiding Principles on Business and Human Rights (UNGPs) highlight the responsibility of both States and businesses in providing remedies to those adversely impacted by business operations. Thus, businesses can take the right step forward by formulating internal policies that ensure their actions respect human rights.

Another critical initiative businesses can take is to improve the explainability of AI to both consumers and workers.

On its part, the State can create incentives for businesses to respect human rights through regulation. It can consider extending the applicability of current laws to AI and support businesses by establishing capacity-building measures. For instance, in the financial services sector, RBI norms backing the right to know the reason for credit denial apply only to formal lending institutions; these must be made applicable to AI-based lending as well.

Deployment of AI in accordance with the UNGP framework can address harms such as exclusion, misuse of data, and privacy intrusion that impinge on constitutionally guaranteed fundamental rights and underlying human rights. A human-rights-respecting approach also yields manifold gains for businesses, with research finding clear linkages between increased employee well-being and improved returns for companies.

(Rai works at Bengaluru-based Aapti Institute, a research institution at the intersection of tech and society. Nusrat Khan is the Business and Human Rights national specialist at UNDP India)

(Published 19 September 2022, 16:59 IST)
