<p><em>Stuti Agrawal</em></p><p>Meta’s move to discontinue its fact-checking programme and replace it with Community Notes, a system already implemented by X (formerly Twitter) under Elon Musk’s leadership, has sparked widespread reactions. What is particularly striking is the timing of the announcement, with Donald Trump set to be sworn in as US President for his second term. Meta’s stated justification, alleged bias in the functioning of the International Fact-Checking Network (IFCN) and a desire to champion free speech, loses ground when one considers the underlying intent: to prevent the censorship of right-wing content that the earlier fact-checking programme had flagged.</p><p>Meta introduced its fact-checking programme in response to widespread criticism for allowing the proliferation of fake news during the 2016 US elections. The shift to and reliance on the Community Notes system is being questioned because it has failed to curb the spread of misinformation and disinformation on X. Community Notes relies on platform users as fact-checkers, who add notes under posts and rate existing ones. This approach contrasts with the now-suspended programme, which employed independent, certified third-party fact-checkers to review and rate content. While the new model claims to include safeguards against bias, the potential for polarising posts and fake news to slip through, or even be amplified by the current algorithm, cannot be ignored. Given the critical role social media platforms play in shaping public discourse, this development has significant implications for users of Facebook, Instagram, and WhatsApp.</p><p>When platforms shirk their responsibility to moderate content effectively, the consequences extend beyond individual users and undermine democratic processes. In response to Meta chief Mark Zuckerberg’s announcement, UN High Commissioner for Human Rights Volker Türk reiterated that regulating hate speech and harmful content online is not censorship. This view stems from the understanding that while free speech is fundamental, it is not absolute and must be balanced against other rights, such as protection from hate speech and discrimination, and access to adequate resources. Inaction under the guise of free speech disproportionately impacts marginalised groups, amplifies divisive narratives and creates an electorate swayed by falsehoods rather than informed debate. The situation is worsened when authorities fail to provide the public with reliable information, which promotes the spread of baseless, unscientific narratives and contributes to an atmosphere of fear.</p><p>The political leanings of key decision-makers undeniably shape the circulation of information on their platforms. Zuckerberg’s focus on ‘gender’ and ‘immigration’, key agendas of the Trump administration, during his five-minute address offers a glimpse of the anticipated priorities and dynamics once the new system is in place.</p><p>The ramifications of Meta’s decision will not be confined to the US. Brazil’s proactive stance in questioning Meta’s policy change is a hopeful sign of accountability. However, the prospect of this policy expanding to other countries, particularly democracies like India, is alarming.
A recent study conducted by the Indian School of Business (ISB) and Cyber Peace found that social media accounts for 77.4% of misinformation cases in India, with X (61%) and Facebook (34%) identified as the leading platforms for spreading fake news. Meta has previously come under scrutiny in India: the Wall Street Journal reported that Facebook failed to remove anti-Muslim content, while the government accused the platform of being biased against right-wing content. India, with its vast and diverse electorate, is especially vulnerable to misinformation during elections, with political fake news accounting for the largest share (46%) of misinformation, as per the same study. Misinformation has also triggered outbreaks of violence, as during the ethnic conflict in Manipur, where false information circulated online incited unrest. A weakened fact-checking framework will only exacerbate existing challenges, from voter suppression to the amplification of communal tensions.</p><p>The introduction of the Community Notes system is less concerning than Meta’s decision to discontinue its fact-checking programme. Retaining the fact-checking programme while the Community Notes model was rolled out would have been a more effective approach to combating misinformation and disinformation. One option for users is to migrate to alternative platforms, but this seems impractical given the vast reach and influence of the social media platforms owned by these tech giants. While Zuckerberg’s actions could be interpreted as yielding to the Trump administration, other governments must work to prevent the extension of this system beyond the US. It is essential to evaluate the effectiveness of Community Notes across jurisdictions and to hold Meta accountable for its shortcomings in addressing misinformation and fake news.</p><p>(The writer is a fourth-year law student at Jindal Global Law School)</p><p><em>Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH.</em></p>