<p>If you wanted to see the way the artificial intelligence (AI) world works, you only had to experience the India AI Impact Summit in New Delhi. Ignore how it’s described. What matters is how it operated.</p><p>Throughout the week of February 16 to 20, when the summit took place, roads in India’s national capital were <a href="https://www.deccanherald.com/india/delhi/ai-summit-in-delhi-all-you-need-to-know-about-traffic-restrictions-security-arrangements-3900063">frequently closed</a> to enable dignitaries to have a smooth journey, while ordinary delegates struggled to reach or leave the venue through chaotic traffic. Students were shipped in to drink the AI Kool-Aid and boost engagement numbers. Women were drafted onto panels at the last minute, in a tokenistic nod to diversity. On the big day, most attendees were entirely excluded from a programme that lacked any time for dialogue or dissent from the visions espoused by the United States and domestic tech CEOs.</p><p>The banners on the sides of the road read ‘AI stands for All Inclusive’. The reassuring words were all there, in the speeches and the PR — ‘responsibility’, ‘openness’, ‘democratisation’ — but they rang hollow. Inclusivity needs to be practical, not tokenistic. Responsibility starts from care, not self-congratulation. Openness means dialogue, not exposition. Democracy entails friction, not suppression.</p><p>AI needs to serve our purposes — ours, humanity’s, the <a href="https://aigovernance.co.uk/we-need-to-ensure-artificial-intelligence-benefits-8-billion-people-not-just-8-billionaires/">eight billion, not eight billionaires</a>. We need to be able to direct it towards the problems we care about, and carefully avoid systems that could cause systemic or catastrophic harm. 
We must have the option to refuse — to not <a href="https://www.theguardian.com/accenture/2026/feb/19/accenture-links-staff-promotions-to-use-of-ai-tools">use AI in our workplaces</a>; to not have conversations with our doctors recorded; and to be supported by a real teaching assistant rather than an AI tutor. This matters particularly in the Global South, where the scale of harm will be significant.</p><p>This is a matter of agency, dignity, and rights; of preserving care and our connection with each other. But it is also practical, even within the narrow growth-and-productivity framings of governments. If the goal is to drive AI adoption, we need tools that we want to use because they enrich our lives. Nothing drives distrust, resistance, and avoidance like the feeling of being out of control.</p><p>Even at the time, the first of this series of AI summits, hosted in <a href="https://www.gov.uk/government/topical-events/ai-safety-summit-2023">Bletchley (England) in November 2023</a> with a focus on AI safety, felt disconnected from public concerns. Over 100 high-profile signatories, including leading experts, international human rights organisations, and unions representing millions of workers, wrote an <a href="https://ai-summit-open-letter.info/">open letter to then UK Prime Minister Rishi Sunak</a> that highlighted the risks and harms of AI being felt in the here and now: management by algorithm; inaccurate profiling; biometric surveillance; and reduced opportunities. 
The <a href="https://aifringe.org/2023">AI Fringe</a> offered an alternative space to widen the conversation, and a small <a href="https://connectedbydata.org/projects/2023-peoples-panel-on-ai">People’s Panel on AI</a> — 11 representative members of the public who attended, observed, and discussed key events there — gave their verdict on AI, along with recommendations for further action.</p><p>In retrospect, the Bletchley summit was the closest we have come to achieving concrete government commitments with tangible requirements on tech companies, and to bringing AI advocates face-to-face with their critics. It was undoubtedly an exclusive affair. Civil society organisations that took part in the AI Safety Summit roundtable discussions alongside ministers and the leaders of the largest AI companies <a href="https://ainowinstitute.org/news/ai-now-joins-civil-society-groups-in-statement-calling-for-regulation-to-protect-the-public">reflected</a>, “Framing a narrow section of the AI industry as the primary experts on AI risks further concentrating power in the tech industry, introducing regulatory mechanisms not fit for purpose, and excluding perspectives that will ensure AI systems work for all of us.” However, while small in number, their influence was visible in the <a href="https://www.gov.uk/government/publications/ai-safety-summit-2023-chairs-statement-2-november/chairs-summary-of-the-ai-safety-summit-2023-bletchley-park">chair’s summary of discussions</a>.</p><p>France’s AI Action Summit took place in Paris in 2025. While there was a genuine attempt to garner worldwide inputs, in the event, the urge to champion national innovation and entice inward investment drove the agenda. It was most striking for US Vice President J D Vance’s speech, and for the fact that the US and UK refused to sign the declaration. 
For their headline announcement, governments and philanthropic ventures could only manage to scrape together €400 million, spread over five years, to advance public interest AI through the <a href="https://www.currentai.org/">Current AI foundation</a> — less than 0.05% of the over $200 billion invested in AI in 2025.</p><p>Just as in the UK, the side events were where the real discussions happened. Alongside the French summit, a range of global partners co-hosted the first <a href="https://www.pairs.site/PAIRS-2025-26a260e24e1a804c9f79c8ae7cbf2615">Participatory AI Research and Practice Symposium (PAIRS 2025)</a>, an event we have repeated this year in India. There, diverse voices reflect on the kinds of action that expose those empty signifiers: campaigners resisting AI, academics critically evaluating participatory AI audits, and those few remaining industry actors working to embed deliberative public engagement in AI development. We have learned from practitioners around the world who are already working with diverse communities to shape AI. Hearing their voices isn’t hard. You just have to listen.</p><p>The French summit tiptoed around the early days of Donald Trump’s presidency. The <a href="https://www.weforum.org/stories/2026/01/davos-2026-special-address-by-mark-carney-prime-minister-of-canada/">rupture in the world order</a> is more apparent now, and countries are waking up to the need to retain sovereignty and security, and to make new alliances.</p><p>India chose to showcase its own billionaire elites and tech entrepreneurs, to prioritise quantity over quality, and to use the summit as an opportunity to demonstrate its prowess and <a href="https://fortune.com/2026/02/17/india-cobbles-together-200-billion-plus-for-data-center-investment/">investment worthiness</a>. While the government narrative will focus on ‘inclusion’, the objective has been to drive the adoption of AI and nothing more. 
</p><p>Switzerland, which will host next year’s AI summit, could bring an entirely different approach. Its policy of neutrality, and the self-imposed isolation of the US, provide opportunities for bolder multilateral agreements. The Swiss experience with direct democracy means it could be a champion for public voice. But if it follows the pattern of last-minute planning that has characterised the previous summits, it too will fail to provide the space and structures for multi-stakeholder negotiation that would act as a meaningful check on tech power.</p><p>The work for that, however, needs to start now. AI’s impact on our societies is happening at pace. International diplomacy is not keeping up; nor are the media accountability, philanthropic funding, and civil society action and mobilisation that constitute the necessary friction for democracy. We must hold our governments accountable and demand substance behind the empty rhetoric.</p><p><em><strong>Jeni Tennison is Executive Director, Connected by Data</strong></em></p><p>(Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH.)</p>