<p>We are approaching the third anniversary of ChatGPT’s release. When OpenAI launched it, the world was stunned. For the first time, the wider public directly experienced the power of Generative Artificial Intelligence (Gen AI): large language models (LLMs) could generate coherent, context-aware paragraphs from just a few words of instruction.</p><p>Three years later, we stand at a crossroads, not only for the global economy but for human civilisation itself. It is now evident that Gen AI is a transformative technology that is reshaping nearly every aspect of our lives, from education to business and beyond.</p><p>Gen AI differs from every previous wave of disruptive innovation in several fundamental ways. Its adoption has been astonishingly rapid and global, both in the number of users and in its spread across borders. Nearly 70 per cent of new API users of OpenAI’s platforms now come from outside the United States. In the last five decades of American technological dominance, no other innovation has seen such swift and widespread international uptake.</p><p>Moreover, with reinforcement learning and continuous model upgrades, LLMs are becoming increasingly adaptive, forming a self-improving loop that enhances their performance over time. While the Industrial Revolution automated muscle power, Gen AI automates cognition and decision-making. That marks a profound shift, as these were tasks once considered uniquely human.</p><p>As Prof Diganta Mukherjee of ISI Kolkata observes, “AI is the next big thing after computerisation. The way we replaced repetitive processes at the lower end of the value chain in the early 1990s is now being replicated at a higher level, with AI taking over more complex yet still repetitive tasks such as decision-making.”</p><p>While no other technology has achieved such widespread adoption, the challenge with AI today lies in the scale of investment it has attracted. The top five AI-focused companies – Apple, Nvidia, Microsoft, Alphabet, and Amazon – now account for nearly one-third of the S&P 500 index. Overall, AI-focused companies have driven roughly 80 per cent of the S&P 500’s gains in 2025. Yet the incremental improvements in successive AI models, particularly LLMs, are becoming less visible to the public. The current challenges facing the AI ecosystem can be broadly grouped into three categories.</p><p>One: LLMs have already been trained on nearly all publicly available human content, leaving little new data to improve them. Adding AI-generated material offers minimal gains. This saturation exposes ethical issues, especially around fair compensation for creators. In 2023, artists Sarah Andersen, Kelly McKernan, and Karla Ortiz sued AI image companies, claiming their works were used in training without consent or payment. They argue this violates copyright law through unauthorised copying and imitation of their styles, creating “21st-century collage tools”. AI firms counter that using public images qualifies as fair use for transformative purposes. The case, still unresolved, will likely set a precedent for how copyright law, written long before AI, applies to systems that learn from creative works at scale. Its outcome could redefine how AI companies source data and how artists protect their work.</p><p>Two: ways to monetise Generative AI effectively in real businesses remain uncertain.
Chatbots built on LLMs have delivered quick cost savings by automating clerical tasks, and AI-assisted programming has boosted coding productivity. Gen AI-based video creation tools are also being adopted. Yet the initial gains from these applications appear to be plateauing. Despite a flood of startups built around Gen AI, a report from MIT notes that about 95 per cent of corporate Gen AI pilot projects have failed to deliver meaningful results.</p><p>Three: Generative AI companies face steep operating expenses. These models are not only costly to train but also expensive to run, given the immense number of daily API calls. Their infrastructure consumes vast amounts of electricity, particularly for image processing, along with water and power to cool massive data centres. By 2030, data centres are projected to use nearly 10 per cent of US electricity. Another concern is rapid hardware depreciation. The AI arms race has driven massive investments in advanced chips to support model scaling, but with chip technology evolving so quickly, these assets risk becoming obsolete before recouping their costs, raising the threat of a sharp capital crash.</p><p><strong>When optimism gets speculative</strong></p><p>The real concern now is the return on investment from the massive wave of spending on Generative AI. Many fear that valuations have been inflated by hype and unrealistic promises. If the technology fails to deliver the transformation it claims to enable, the fallout could be severe – economically, socially, and politically. Several leading researchers have warned against over-investment in LLMs at the expense of other AI paradigms. Unlike physics-based systems, LLMs learn purely from human language and text. They lack grounding in the physical world, making true artificial general intelligence, on which much of the current optimism rests, an elusive goal.</p><p>Economists, too, are uneasy. The AI sector now exhibits the hallmarks of a speculative bubble. Nvidia sells chips to OpenAI; Microsoft owns a major stake in OpenAI; Nvidia, in turn, invests back in OpenAI, forming a closed financial loop reminiscent of past bubbles. In <em>Bubbles and Crashes</em>, Brent Goldfarb and David A Kirsch identify four warning signs: uncertainty, “pure play” companies, novice investors, and narrative fever. All are visible today. The Buffett Indicator – the ratio of total US stock market capitalisation to GDP – has climbed past 200 per cent, higher than before the dot-com crash.</p><p>Sam Altman, OpenAI’s CEO, once compared the AI revolution to the Manhattan Project, which produced the atom bomb and changed the course of history. Given the scale of our collective bet on AI, its outcome may shape not just markets or elections but the future trajectory of human civilisation itself.</p><p><em>(The writers are professors at the Department of Data Sciences and Operations, Marshall School of Business, University of Southern California)</em></p><p><em>Disclaimer: The views expressed above are the authors' own. They do not necessarily reflect the views of DH.</em></p>