The wrong way to regulate disinformation

Kerala govt actions show us what not to do; the focus should be on cleaning the information ecosystem through positive incentives and regulating for intent
Last Updated 26 November 2020, 07:56 IST

When the Kerala Governor signed a controversial Ordinance, since withdrawn, proposing amendments to the Kerala Police Act, there was understandably significant criticism and ire directed at the state government over a provision that prescribed a three-year jail term for intentionally and falsely defaming a person or a group of people. After the backlash, the state’s Chief Minister announced his intention not to implement the fresh amendment.

How not to regulate information disorder

For anyone tracking the information ecosystem and how different levels of state administration are responding to information disorder (misinformation, disinformation and malinformation), this attempted overreach is not surprising. In Kerala alone, over the last few months, we have witnessed accusations from the opposition that the state administration has engaged in ‘Trump-ian’ behaviour, decrying any unflattering information as ‘fake news’. As recently as September, the Chief Minister had to assure people that measures to curb information disorder would not affect media freedom, after pushback against decisions to expand fact-checking initiatives beyond Covid-19-related news. In October, it was reported that over 200 cases had been filed for ‘fake news’ in the preceding five months.

Of course, this is by no means limited to one state, or a particular part of the political spectrum. Across the country, there have been measures such as banning social media news platforms, notifications/warnings to WhatsApp admins, a PIL seeking Aadhaar linking to social media accounts, as well as recommendations to the Union Home Minister for ‘real-time social media monitoring’. Arrests/FIRs against journalists and private citizens for ‘fake news’ and ‘rumour-mongering’ have taken place in several states.

How to regulate information disorder?

Before proceeding to ‘the how’, it is important to consider two fundamental questions when it comes to regulating disinformation. First, should we? Four or five years ago, many people would have said no. Yet, today, many will probably say yes. What will we say four or five years from now? We don’t know.

Then, do we need new laws? It is not uncommon to witness the political theatre of rushed legislation to create the appearance of ‘taking action’ when, very often, the solution lies somewhere between building capacity and enforcing existing laws.

These are questions that we need to debate and answer in ways that will stand the test of time.

In The Disinformation Age, Heidi Tworek of the University of British Columbia and Ben Epstein of DePaul University approach the question of regulating information disorder. Similarly, Richard Allan (a visiting fellow at the Reuters Institute for the Study of Journalism) engaged with the role of a ‘misinformation regulator’ on Regulate.Tech. Keep in mind that these works are written from the US and UK perspectives respectively, but there are lessons we can draw for the broader topic of regulating disinformation online.

Heidi Tworek warns of ‘novelty hype’, the assumption that prevailing circumstances are unique, which carries the following risks: misdiagnosing issues as content problems rather than situating them in the broader context of international relations, economics and society; overlooking the path dependence of the internet (that is, the history of technologies preceding the internet matters); focusing on day-to-day occurrences instead of investigating underlying structures; and thinking short term instead of about the long-term and unintended consequences of regulation.

She then draws historical lessons from the German inter-war period that could be applicable today. Those most relevant to the Indian context are: business structures are often more important than individual pieces of content; solutions should address the societal divisions that the media is being used to exploit; and regulatory institutions must be robust and solutions “democracy-proofed” so they cannot be bent or captured by the powers of the day.

Ben Epstein poses three important questions that can offer some further insight on how to proceed. First, how should the problem be framed? Next, who should control the regulation? And finally, what would effective regulation look like? Let’s take these one by one.

When you consider the information ecosystem and problem framing, it can be tempting to disregard intentions, since the real concern is with consequences (and intentions are not easy to establish either). However, when considering punitive measures, limiting their scope by intent is a useful guardrail to start with. In other words, clean the ecosystem through positive incentives, and regulate and/or prosecute only where intent can be shown, that is, for disinformation.

Principles to regulate information disorder

The control of regulation is often posed as a binary between self-regulation and government regulation. And for many, it seems the self-regulation option is closed. This is partly down to the actions (or inaction) of the platforms themselves. But Covid-19 and the US elections have demonstrated that platforms can act (whether these actions are effective is another topic) when strong enough incentives, positive or negative, are created for them to do so – in this case, likely political and public pressure.

It is also important to keep in mind that systemic changes to incentives, without a state-backed push and in the timeframe we need, seem unlikely. At the same time, events around the world suggest that the information ecosystem is too fragile to be left to an unfettered state. Here, Epstein and Richard Allan offer a middle ground of sorts in the form of independent commissions.

And finally, on what this regulation should look like. Based on these efforts, we arrive at four guiding principles. First, regulation should minimise the harm posed by disinformation while causing minimal additional harm. Second, it should be commensurate with the scale of the harm caused by disinformation as well as the market size of the companies subject to it. Third, it should be forward-thinking, adaptable and responsive to changes in the information ecosystem. And finally, it should be informed by research in the field and ‘determined’ by independent agencies.

Now, contrast these with what we have seen so far.

(Prateek Waghre is a research analyst at The Takshashila Institution. He writes MisDisMal-Information, a newsletter on the information ecosystem in India)

Disclaimer: The views expressed above are the author’s own. They do not necessarily reflect the views of DH.
