Are you safe, if your data isn't?

A five-judge bench of the Supreme Court has begun its deliberations on the Aadhaar matter and is expected to rule conclusively on whether the government's landmark identity project is constitutionally valid. In the run-up to the case, public discourse around Aadhaar has intensified, particularly around the exclusion it is seen to cause. Concerns have been raised about how it will result in the creation of a surveillance state, and about the many flaws in implementation that have allowed the personal information of citizens to be exposed for paltry sums of money.

Regardless of what the Supreme Court decides, it seems that we have finally developed a heightened cultural awareness of the concept of privacy and the harms that could come from the misuse of personal information. Now that Aadhaar has made us dependent on a digital identity and as the decisions that are being made about us are increasingly based on the information that other people have of us, we are finally becoming aware of the urgent need for a privacy law.

Data informs almost everything we do today. Whenever we sign up for a service or download a new app, we are obliged to disclose personal information about ourselves in order to receive these services. But privacy means so much more in today's world of networked mobile apps. Thanks to the ubiquitous reach of the internet and the widespread use of cloud computing, service providers can offer us incredibly personalised services, leveraging their knowledge of our usage patterns to predict what we might like to do. It has got to the point where the trends that their algorithms predict are so eerily accurate that it almost seems as if they are eavesdropping on our conversations or snooping through our private messages.

But algorithms influence many other facets of our lives too, in ways that are far less obvious. When we apply for a job through a job portal, recruitment algorithms scan through our resumes, selecting one candidate over the others based on parameters that humans have no control over, which results in some of us being excluded from even the shortlist for reasons that the computers have themselves determined are appropriate. In the financial services sector, new credit-ranking algorithms use a number of extraneous criteria to assess our eligibility for a loan, allowing the banking sector to cast its net far wider than was possible before, but at the same time offering new and less obvious grounds for exclusion. Even our use of the internet is constrained by algorithms that predict what news should be included in our newsfeeds based on what they have determined we will like -- resulting in the phenomenon of fake news and a growing lack of faith in the news industry.

For the most part, these decisions are based on an analysis of information personal to the individual concerned, using available data to build profiles that are often deeply personal and reveal personality traits we sometimes do not even know we possess. While there is no doubt that many of these data-driven decisions benefit us, they are, at the same time, capable of causing us considerable harm. For every person who gets a job based on algorithmic shortlisting, a hundred are denied the chance to even present their case before a human interviewer. For every micro-entrepreneur who gets a loan to take her small business to the next level, there are as many who will be denied that same opportunity based on criteria that the machine has determined are relevant to creditworthiness.

While I have no generic objection to any of this, I am concerned that in India we have no legal framework within which to evaluate whether these algorithms perform as well as advertised. Since most of them arrive at their decisions through machine analysis of historical data, I am concerned that their decisions are coloured by the same biases that influenced the human decisions they are learning from. Without a proper legal and regulatory framework against which these algorithms can be tested and held accountable, the harms they could cause are considerable and would affect those least likely to be able to deal with the financial and reputational fallout.

This is why a data protection law and an overarching privacy law are so important. Along with our digital identity, the government has built a powerful digital infrastructure that has the potential to reach into every aspect of our modern life and make everyone in this country far more deeply reliant on data than we have ever been before. This is why we need the shelter of a privacy law to protect us from the excesses of such a data-driven world. We need guidelines that stipulate what corporations are allowed to do with the data they collect and who must be held accountable for its misuse. We need a regulator to determine how to break down the asymmetry that exists between us and the organisations that collect and use our data, so that we have more control over what they do with it and over the harm that such use might cause us.

The Justice Srikrishna Committee is currently working on just this. It has released an extensive White Paper with over 200 questions on various aspects of data protection, seeking comments from the public on the form and structure of the privacy law. When it concludes this exercise, it is expected to produce a model law that will address the requirements of our country as well as reflect the needs of an increasingly data-driven world.

I hope that they will use this opportunity to prepare a truly forward-thinking law that will protect us from the harms that can be visited upon us but which will, at the same time, allow data to empower us.

(The writer is Partner, Trilegal, a law firm)