
Learnings from FB, Twitter political advertising row

The developments of the past few weeks show why we cannot rely on self-regulation alone to make these platforms neutral venues for public discourse.

Over the course of the last few weeks, we have seen Facebook and Twitter take opposing positions on the issue of political ads. While the issue itself does not have an immediate implication for Indian politics, the two companies' decisions, their actions throughout the episode and the reactions to them are emblematic of the larger set of problems surrounding their policies. They serve as a reminder that we cannot expect self-regulation alone to make these platforms neutral venues for public discourse.

In late October, Facebook infamously announced that it would not fact-check political ads. Shortly after that, Twitter CEO Jack Dorsey announced via Twitter that the company would not allow any political ads after November 22. Though Twitter is not alone in this approach, its role in public discourse differs from that of companies like LinkedIn and TikTok, which already have similar policies. Google, meanwhile, announced its own political ads policy on November 20, which aims to limit micro-targeting across search, display and YouTube ads; crucially, it reiterated that no advertisers, political or otherwise, are allowed to make misleading claims. At face value, it may seem that one of these approaches is far better than the other, but a deeper look brings forth challenges that both will find hard to overcome.

Potential for misuse

To demonstrate the drawbacks of Facebook's policy, US lawmaker Elizabeth Warren's presidential campaign deliberately published an ad with a false claim about Facebook CEO Mark Zuckerberg. In another instance, Adriel Hampton, an activist, registered as a candidate for California's 2022 gubernatorial election so that he could publish ads with misleading claims (he was ultimately not allowed to do so).

While Twitter's policy disallows ads from candidates, parties and political groups/political action committees (PACs), Facebook claims it will still fact-check ads from PACs. For malicious actors determined to spread misinformation or disinformation through ads, these distinctions will not be much of an impediment; they will find workarounds.

While most of the conversation has been US-centric, both companies have a presence in over 100 countries. A significant amount of local context and human effort is required to consistently enforce policies across all of them. The ongoing trend of substituting human oversight with machine learning could limit the acquisition of local knowledge. For example, does Facebook's policy of not naming whistle-blowers work in every country where it has a presence?

Notably, both companies stressed how little an impact political ads have on their respective bottom lines. Considering the skewed revenue per user in North America and Europe compared with Asia Pacific and the rest of the world, the financial incentive to enforce such resource-intensive policies equitably is limited. Both companies also have a history of inconsistent responses to moral panics, resulting in uneven implementation of their policies.

In Washington State, a self-imposed ban on political ads by Facebook and Twitter, adopted to avoid dealing with complex campaign finance rules, has resulted in uneven enforcement and a complicated set of rules that have proven advantageous to incumbents. In response to criticism that such a ban would adversely impact civil society and advocacy groups, Twitter initially said 'cause-based ads' would not be banned and ultimately settled on limiting them by preventing micro-targeting. Ultimately, both approaches are likely to favour incumbents or those with deeper pockets.

Fixing Accountability

The real problems for social media networks go far beyond micro-targeted political advertising, and the shortcomings across capacity, misuse and consequences apply there as well. The flow of misinformation and disinformation is rampant. A study by the Poynter Institute highlighted that misinformation/disinformation outperformed fact-checks by several orders of magnitude. Research by the Oxford Internet Institute and Freedom House has revealed the use of online disinformation campaigns and the co-option of social media by various governments to power the shift towards illiberalism. Conflict and toxicity now seem to be features meant to drive engagement. Rules are implemented arbitrarily and suspension policies are not consistently enforced. The increased use of machine learning algorithms in content moderation, which can be gamed by mass reporting, is coinciding with a reduction in human oversight.

Social media networks are classified as intermediaries, which grants them safe harbour, meaning they cannot be held accountable for content posted on them by users. 'Intermediary' is a very broad term, covering everything from ISPs and cloud services to end-user-facing websites and applications across various sectors. Stratechery, a website that analyses technology strategy, proposes a framework for content moderation in which both discretion and responsibility increase the closer a company is to the end user. Under such a framework, platforms like Facebook, Twitter and YouTube would bear more responsibility, and exercise more discretion, than ISPs or cloud service providers. It does not, however, explicitly fix accountability, which cannot be taken for granted.

Unfortunately, self-regulation has not worked in this context, and the status of these companies as intermediaries may require additional consideration. Presently, India's proposed revised Intermediary Guidelines already tend towards over-regulation in addressing the challenges posed by social media companies, adversely impacting many other companies. The real challenge for policy-makers and society in countries like India is to strike a balance between holding large social media networks accountable and not creating rules so onerous that they can be weaponised to limit freedom of speech.

(Prateek Waghre is a technology policy researcher at the Takshashila Institution. He focuses on the governance of Big Tech in democracies.)

Disclaimer: The views expressed above are the author’s own. They do not necessarily reflect the views of DH.

(Published 21 November 2019, 09:39 IST)
