Driver? Manufacturer? Software developer… who is liable when an AI-powered car meets with an accident?

Without reform, courts will be forced to retrofit 19th-century negligence doctrines and 20th-century statutes onto 21st-century machines, producing uncertainty and injustice.
Yogesh Anand
Sumit Kumar Singh

Representative image. 

Credit: iStock Image. 

Imagine that a vehicle assisted, or fully controlled, by an Artificial Intelligence (AI) system is involved in an accident — who will be held liable: the driver, the vehicle manufacturer, the AI/software developer, or someone else? The existing Motor Vehicles Act presumes that a human driver is in control; that is not the case in a vehicle driven by an AI system. The gravity of this scenario should prompt us to examine the legal and policy challenges of integrating software-assisted vehicles (SAVs) into India’s statutory and legislative framework.


The NITI Aayog’s September report, ‘AI for Viksit Bharat: The Opportunity for Accelerated Economic Growth’, underscores AI’s transformative role across sectors. SAVs represent the next generation of automobiles, in which software, rather than hardware-intensive systems, forms the core of functionality and performance. These vehicles operate across six levels of driving automation (Level 0 to Level 5), as defined by SAE International.

India’s automotive industry is transitioning from Level 2 (partial automation) and is expected to achieve Level 3 autonomy by 2035. By then, 40–50% of India’s projected 40–45 million vehicle market, equivalent to 18–20 million units, will be software-enabled. Through AI-enabled SAV subscriptions alone, India stands to unlock $6–8 billion in cumulative domestic value by 2035, with annual recurring value estimated at $1.5–2 billion by that year.

This brings us to the question asked at the beginning: who should be held liable? Currently, liability for road accidents rests largely on the driver, owner, or insurer, and compensation is available under ‘no-fault’ provisions. Yet in the case of SAVs, responsibility may need to shift or be shared among the vehicle manufacturer, the AI/software developer, the component supplier, the insurer, or even infrastructure providers if road or signal failures contributed to the accident. For example, a software defect or an inadequate security patch may make the developer liable, whereas failure to maintain or update the system could render the owner responsible. Cyber-terrorism might trigger insurer liability, while raising questions about the State’s broader responsibility for cybersecurity.

Given this, India clearly needs to update its legal framework. The definitions of ‘driver’, ‘operator’, and ‘owner’ must be revisited to reflect AI-enabled mobility; mandatory safety and cybersecurity standards must be laid down; and robust accident-data logging should be required to establish fault. Additionally, insurance frameworks need to evolve, with product liability coverage for manufacturers and strict liability regimes for software failures. In short, while the economic promise of SAVs is immense, realising it will depend not only on technological adoption but also on building a clear, predictable, and fair liability regime that protects consumers, ensures accountability, and encourages innovation.

Against this backdrop, several policy questions arise: Is our legislative framework prepared to adopt an AI-integrated model? Under existing statutes such as the Motor Vehicles Act, the Consumer Protection Act, and the liability provisions of the Bharatiya Nyaya Sanhita (BNS), 2023, there is limited clarity on how AI systems, algorithms, and autonomous decision-making are to be addressed. The BNS, 2023, envisages punishment for acts committed by a ‘man’, ‘woman’, or ‘any person’. This framework does not extend to acts committed by AI systems or autonomous machines, leaving a significant gap in legal interpretation and liability determination.

Given the policy gaps and the loose ends in the current framework, certain reforms can be implemented. First, under the Motor Vehicles Act, the concepts of automated driving systems and automated vehicles must be formally introduced, corresponding to different levels of AI integration. Second, a mechanism akin to a ‘black box’ should be mandated for AI-driven vehicles. These data recorders would capture software decisions and vehicle status, which can then be examined in the aftermath of an incident to determine causation and liability. Third, ‘product liability’ must be strengthened with respect to AI under the Consumer Protection Act, 2019, with corresponding regulatory accountability. Finally, cybersecurity laws must be expanded to cover scenarios where AI-driven vehicles are hacked or compromised, incorporating anti-tampering standards and protection of critical infrastructure.

India’s choice is stark. Without reform, courts will be forced to retrofit 19th-century negligence doctrines and 20th-century statutes onto 21st-century machines, producing uncertainty and injustice. With reform, India can not only safeguard victims and citizens but also provide the clarity manufacturers and insurers need to invest in autonomous vehicle deployment.

Yogesh Anand and Sumit Kumar Singh are students at the National University of Study and Research in Law, Ranchi.

Disclaimer: The views expressed above are the authors' own. They do not necessarily reflect the views of DH.

(Published 18 September 2025, 12:18 IST)