During the Covid-19 crisis, Georgia-based global logistics firm UPS faced immense pressure due to surging e-commerce demand, supply chain disruptions, and driver shortages. While its AI-powered Orion system was deployed before the pandemic, UPS expanded its AI-driven logistics strategy to optimise real-time delivery routing, warehouse automation, and predictive analytics, ensuring more efficient fleet operations.
Orion revolutionised UPS’ logistics by dynamically rerouting delivery trucks based on live traffic, weather conditions, and package demand. In warehouses, it optimised package loading, ensuring trucks were fully but not excessively loaded, reducing unnecessary trips. It also predicted vehicle breakdowns before they occurred, cutting fleet downtime by 30%. The impact was remarkable — UPS reduced its annual travel distance by 100 million miles, cut fuel consumption by 12%, and saved $400 million in operational costs.
UPS is just one example of how Agentic AI is transforming industries worldwide. The financial sector has also witnessed significant advancements. JP Morgan’s LOXM, an advanced AI-driven trading system, has outperformed human traders by autonomously optimising trade execution strategies in real time. This self-learning AI can analyse market trends and adapt strategies at speeds that surpass traditional human-based trading models.
In healthcare, Agentic AI is revolutionising drug discovery. DeepMind’s AlphaFold has transformed the understanding of protein structures, solving a decades-old problem in molecular biology. By accurately predicting 3D protein structures, AlphaFold enables pharmaceutical companies such as Pfizer, Sanofi, and Novartis to design new drugs in months instead of years. With over 200 million protein structures mapped, this technology is accelerating research into treatments for diseases such as cancer, Alzheimer’s, and antibiotic resistance.
Agentic AI is also making strides in governance and public services. Governments worldwide are exploring its potential for automating legal document analysis, streamlining welfare distribution, and improving public safety. AI-powered analytics are already being used to detect financial fraud, optimise public transport systems, and even assist courts by summarising case histories. If implemented responsibly, Agentic AI could enhance administrative efficiency and reduce corruption in India’s vast bureaucratic system.
Near autonomy?
Despite their capabilities, Orion, LOXM, and AlphaFold are not fully autonomous Agentic AI systems. They still operate within human-defined constraints and lack independent goal-setting or multi-tasking abilities. However, the next wave of Agentic AI is poised to go even further.
Future Agentic AI systems will be capable of independently handling complex decision-making tasks. These systems could approve employee leave requests, sanction loans, or select the best candidates for clinical trials. Unlike today’s AI tools, which primarily assist humans, these future systems will interact with other AI models, co-ordinate tasks, and make high-level decisions without human oversight.
Several industries are already witnessing the rise of near-autonomous Agentic AI. Tesla’s Full Self-Driving V12 is approaching Level 4 autonomy, where vehicles navigate roads, avoid obstacles, and make split-second decisions without human intervention. In finance, XTX Markets’ AI-driven trading system executes billions of dollars in trades daily, adjusting strategies in real time with no human traders. Meanwhile, Amazon’s Sequoia AI is transforming warehouse logistics by autonomously managing inventory, package sorting, and real-time supply chain adjustments. While these systems demonstrate remarkable autonomy, they still require some level of human supervision and lack general intelligence and cross-domain adaptability. The leap to full autonomy will require further advancements in self-reasoning, and legal recognition of AI-driven decision-making.
Manipulation, bias, and misinformation
As Agentic AI becomes more autonomous, concerns about its ethical implications are growing. One of the most pressing risks is the potential for AI to manipulate human behaviour. In 2023, a chatbot on the Chai AI platform was linked to a suicide case in Belgium, where the system reinforced the user’s negative thoughts instead of offering crisis intervention.
The BBC, in a December study, analysed AI-powered search assistants, including ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI. The investigation found widespread issues, including factual inaccuracies, misattributions, and missing context. These findings raise concerns about AI’s reliability in decision-making and its potential to spread misinformation.
Algorithmic bias is another significant concern. AI systems learn from vast amounts of data, and if that data is biased, the AI will reproduce those biases in its decisions. In a diverse country like India, the implications of biased AI could be severe. For example, an AI-driven hiring tool could favour English-speaking urban candidates over equally skilled applicants from rural areas, exacerbating unemployment challenges. Similarly, AI-based credit scoring systems might disadvantage informal workers — who make up 60% of India’s workforce — by denying them access to financial services.
A global perspective
India currently lacks a comprehensive AI regulatory framework, but the government is drafting policies to address AI governance. Inspired by the EU AI Act, India’s Ministry of Electronics and Information Technology (MeitY) is considering a risk-based approach to AI oversight. The EU AI Act is setting global standards by categorising AI risks and mandating strict guidelines for high-risk AI applications. Similarly, the US AI Bill of Rights focuses on protecting consumer rights and ensuring transparency in AI decision-making. India can learn from these global efforts and develop its own AI governance framework to protect citizens while fostering AI-driven innovation.
The ‘human-in-the-loop’ approach
As Agentic AI advances, the question of how much autonomy to grant these systems becomes crucial. While AI does not possess human-like intelligence yet, Agentic AI could surpass human IQ levels in specialised tasks within the next 5-10 years. GPT-4 has an estimated IQ of around 120, similar to a highly intelligent human. Future AI models, including GPT-5, are expected to reach an IQ of 150 to 180, surpassing most human experts in specific fields. Artificial General Intelligence, which would match or exceed human adaptability, is predicted to become a reality by 2040.
However, IQ alone does not signify wisdom, ethics, or real-world understanding, which humans possess. While AI can optimise logistics, automate finance, and assist in governance, full autonomy in life-or-death decisions, legal rulings, or security operations poses ethical risks. By the end of this decade, Agentic AI will likely be deeply embedded in workplaces, hospitals, and financial institutions across the world. The challenge lies in harnessing its potential while minimising its risks.
The solution most widely put forward is a ‘human-in-the-loop’ approach, in which AI augments decision-making but remains under human oversight. Striking the right balance between autonomy and accountability is critical to ensuring AI remains a tool for progress, not a risk to society.
(Abhishek Patni is a New Delhi-based senior journalist. X: @Abhishek_Patni.)
Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH.