
A welcome commitment

There has been criticism that ‘Big Tech’ companies have not taken active steps to prevent the misuse of AI technology for commercial or political reasons. The new agreement may be an attempt on their part to counter the criticism.
Last Updated 28 February 2024, 01:45 IST

The agreement signed in Munich last week by large global technology companies and social media platforms, committing them to work together to prevent the harmful deployment of Artificial Intelligence (AI) in public life, is a welcome attempt to deal with an emerging technological threat. The accord is specifically meant to ensure that AI is not misused in elections.

This is especially relevant in a year when dozens of countries, including India, the US and the UK, are set to hold national elections. The signatories have said that “the accord is one important step to safeguard online communities against harmful AI content, and builds on the individual companies’ ongoing work.”

The signatories include Meta, X, Google, LinkedIn, IBM, Adobe, OpenAI, Amazon, TikTok and Microsoft.

False and mischievous content generated with the help of AI can influence voter behaviour and subvert the democratic process. The signatories have said that they will collaborate to detect and address online distribution of fake AI content, drive educational campaigns, and provide transparency about AI usage in generating political content.

They have also agreed on a broad set of principles, such as the importance of tracking the origin of deceptive political content and the need to raise public awareness about it. They have pledged to develop technologies to "mitigate risks" related to deceptive election content generated by AI.

A successful collaborative effort would help restrict the spread of such content on social media platforms and search engines. The companies will try to monitor the use of AI tools, or deny access to them for generating political content. They will also share best practices and provide "swift and proportionate responses" when such content starts to spread.

While the companies have made a welcome commitment, it will not be easy to prevent the misuse of AI in election campaigns and in politics altogether, because legitimate and illegitimate uses of the technology cannot always be distinguished. Considerable technological effort will be needed to detect the malicious use of AI and to prevent the dissemination of disinformation.

There may not even be agreement on the definition of such content. Governments or other powerful entities may use their influence, and even the law, to shield the spread of disinformation. It has also been noted that the agreement is not binding and that its commitments could be clearer. Still, an honest effort to implement them could have some positive impact.

(Published 28 February 2024, 01:45 IST)
