New Delhi: Researchers who trained a large language model to respond to the online political posts of people in the US and UK found that the quality of discourse improved.
Powered by artificial intelligence (AI), a large language model (LLM) is trained on vast amounts of text data and can therefore respond to human requests in natural language.
Polite, evidence-based counterarguments by the AI system, trained prior to the experiments, were found to nearly double the chances of a high-quality online conversation and "substantially increase (one's) openness to alternative viewpoints", according to findings published in the journal Science Advances.
Being open to perspectives did not, however, translate into a change in one's political ideology, the researchers found.
Large language models could provide "light-touch suggestions", such as alerting a social media user to the disrespectful tone of their post, author Gregory Eady, an associate professor of political science and data science at the University of Copenhagen, Denmark, told PTI.
"To promote this concretely, it is easy to imagine large language models operating in the background to alert us to when we slip into bad practices in online discussions, or to use these AI systems as part of school curricula to teach young people best practices when discussing contentious topics," Eady said.
Hansika Kapoor, a researcher at the department of psychology at Monk Prayogshala in Mumbai, an independent not-for-profit academic research institute, told PTI, "(The study) provides a proof-of-concept for using LLMs in this manner, with well-specified prompts, that can generate mutually exclusive stimuli in an experiment that compares two or more groups."

Nearly 3,000 participants -- who identified as Republicans or Democrats in the US and Conservative or Labour supporters in the UK -- were asked to write a text describing and justifying their stance on a political issue important to them, as they would for a social media post.
This was countered by ChatGPT -- a "fictitious social media user" for the participants -- which tailored its argument "on the fly" according to the text's position and reasoning. The participants then responded as if replying to a social media comment.
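The on-the-fly tailoring described above can be sketched in a similar way. Again, this is only an illustrative sketch assuming the OpenAI Python client; the prompt wording is hypothetical and not the study's actual instructions.

```python
# Illustrative sketch only -- prompt wording is hypothetical, not the
# study's actual instructions. Assumes the OpenAI Python client.
from openai import OpenAI

client = OpenAI()

def evidence_based_counter(participant_post: str) -> str:
    """Generate a polite, evidence-based counterargument tailored to
    the position and reasoning in the participant's post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system",
             "content": ("You are replying as an ordinary social media user. "
                         "Read the post, identify its position and main "
                         "reasons, and write a respectful counterargument "
                         "grounded in evidence rather than emotion.")},
            {"role": "user", "content": participant_post},
        ],
    )
    return response.choices[0].message.content
```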
"An evidence-based counterargument (relative to an emotion-based response) increases the probability of eliciting a high-quality response by six percentage points, indicating willingness to compromise by five percentage points, and being respectful by nine percentage points," the authors wrote in the study.
Eady said, "Essentially, what you give in a political discussion is what you get: that if you show your willingness to compromise, others will do the same; that when you engage in reason-based arguments, others will do the same; etc."

AI-powered models have been critiqued and scrutinised for varied reasons, including an inherent bias -- political, and even racial at times -- and for being a 'black box', whereby the internal processes used to arrive at a result cannot be traced.
Kapoor, who was not involved with the study, said that while the approach appears promising, complete reliance on AI systems for regulating online discourse may not be advisable yet.
The study itself also relied on humans to rate responses, she said.
Additionally, context, culture, and timing would need to be considered for such regulation, she added.
Eady too is apprehensive about "using LLMs to regulate online political discussions in more heavy-handed ways."

Further, the study authors acknowledged that because the US and UK are effectively two-party systems, addressing the 'partisan' nature of texts and responses was straightforward.
Eady added, "The ability for LLMs to moderate discussion might also vary substantially across cultures and languages, such as in India."

"Personally, therefore, I am in favour of providing tools and information that enable people to engage in better conversations, but nevertheless, for all its (LLMs') flaws, allowing nearly as open a political forum as possible," he added.
Kapoor said, "In the Indian context, this strategy may require some trial-and-error, particularly because of the numerous political affiliations in the nation. Therefore, there may be multiple variables and different issues (including food politics) that will need to be contextualised for study here."

Another study, recently published in the journal 'Humanities and Social Sciences Communications', found that dark personality traits -- such as psychopathy and narcissism -- a fear of missing out (FoMO) and cognitive ability can shape online political engagement.
Findings of researchers from Singapore's Nanyang Technological University suggest that "those with both high psychopathy (manipulative, self-serving behaviour) and low cognitive ability are the most actively involved in online political engagement."

Data from the US and seven Asian countries, including China, Indonesia and Malaysia, were analysed.
Describing the study as "interesting", Kapoor pointed out that much more work needs to be done in India to understand the factors that drive online political participation, ranging from personality to attitudes, beliefs and aspects such as voting behaviour.
Her team, which has developed a scale to measure one's political ideology in India (published in a pre-print paper), found that dark personality traits were associated with a disregard for norms and hierarchies.