<p>A politically contested AI system is reported to underpin analysis in an ongoing war. The episode underscores a central governance dilemma that demands attention.</p><p>The United States-led Operation Epic Fury and Israel’s Operation Roaring Lion have signalled a turning point in contemporary conflict. More than a thousand sites in Iran were reportedly targeted in co-ordinated strikes, with AI tools such as Anthropic’s Claude used not as autonomous triggers, but as analytical engines aggregating vast volumes of human intelligence, satellite imagery, and targeting data. Whether or not one views this as the formal beginning of an AI warfare era, the use of AI in active conflicts exposes a widening gap between technological capability and legal control.</p><p>Existing multilateral efforts, particularly discussions within the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems under the Convention on Certain Conventional Weapons, have reaffirmed that international humanitarian law applies. However, reiteration of principles such as distinction and proportionality is not the same as designing systems that can operationalise them in code. The absence of a binding treaty reflects geopolitical fragmentation, not conceptual clarity.</p><h2>Beyond the rhetoric of war</h2><p>Over the past few years, States have endorsed non-binding guiding principles and tabled resolutions in the UN General Assembly calling for responsible military AI use. However, these remain diplomatic signals rather than enforceable obligations. As observed in international negotiations, the core disagreement is no longer about whether AI affects warfare, but about how much autonomy States are willing to renounce.
Major military powers continue to invest heavily in AI-enabled systems while resisting legally binding constraints. This tension is structural: AI is seen as a strategic multiplier, and no State wants to concede relative advantage.</p><p>The biggest concern here is not science-fiction scenarios of fully autonomous machines, but the steady embedding of AI into targeting, logistics, intelligence fusion, and simulation systems. Contemporary military doctrine increasingly integrates AI as a decision-support layer. Analytical models now process satellite imagery, signals intelligence, and battlefield telemetry at speeds no human team could match. This produces what strategists describe as decision compression, where the time available for assessment and authorisation shrinks dramatically. When AI tools compress decision cycles from days to hours, sometimes minutes, the pressure on human oversight becomes structural rather than procedural.</p><p>Human-in-the-loop cannot be a symbolic safeguard inserted at the final stage of a strike. It must mean meaningful veto power, traceable audit logs, and documented reasoning pathways that can be reviewed after action. In practical terms, this requires technical architectures that preserve explainability, and legal review mechanisms capable of interrogating complex systems. Without this, the invocation of human control risks becoming a rhetorical shield for automated escalation.</p><p>The reported integration of large language models (LLMs) and predictive systems into military planning environments further complicates accountability. Even when such systems are described as advisory rather than autonomous, their recommendations shape the information environment within which commanders decide. If an AI system influences whom to strike, where, and when, responsibility cannot evaporate into algorithmic opacity.
The doctrine of meaningful human control requires more than a signature at the bottom of a briefing document.</p><p>Article 36 of Additional Protocol I to the Geneva Conventions obliges States to review new weapons for legality. However, these reviews were conceived for discrete platforms such as missiles or aircraft, not adaptive reasoning systems trained on vast datasets that evolve over time. An AI model can be updated, fine-tuned, or retrained, altering its behaviour without changing its physical form. This challenges traditional weapons review processes, which are episodic rather than continuous. From a governance perspective, we need lifecycle accountability: scrutiny not only at procurement but also during training, deployment, updates, and post-strike analysis.</p><p>Academic work on future AI-enabled warfare, including operational studies emerging from Australia and elsewhere, underscores that militaries are experimenting with AI across the sea, land, and air domains. These experiments aim to integrate ethics and human oversight, yet they also emphasise speed, adaptability, and competitive advantage. The governance dilemma is therefore embedded in doctrine itself: States are preparing for high-tempo conflicts in which hesitation could be fatal. At the same time, international law depends on deliberation and proportional assessment.</p><p>For India, this is a strategic moment. As a State with growing defence technology ambitions and a strong voice in digital governance debates, India is uniquely placed to bridge divides between technologically advanced militaries and developing countries concerned about destabilisation.
India has consistently argued for responsible AI and for equitable global governance frameworks in civilian domains, and the same clarity is now required in the military sphere.</p><p>India should push for binding international norms: a treaty that mandates lifecycle accountability, technical auditability, and clear command responsibility in military AI systems. This could include internationally agreed standards for audit trails, requirements for operational human veto authority, and transparency measures around testing and validation. Rather than framing regulation as a constraint on sovereignty, it should be articulated as a stabilising mechanism that reduces miscalculation and unintended escalation.</p><p><em><strong>Vidhi Sharma is Head of Responsible AI, and Sagar Vishnoi is Director &amp; Co-Founder, Future Shift Labs.</strong></em></p><p><em>Disclaimer: The views expressed above are the authors' own. They do not necessarily reflect the views of DH.</em></p>