<p>I’m increasingly convinced that Generative AI (as we have it right now) is not going to take over the world and make human thinking and creativity redundant. Every semester, as I read larger volumes of vapid AI-generated emails and essays from my students, I become ever more convinced that all these AI tools are good for is exactly that: vapid text and the regurgitation of existing ideas. And I find myself using them for that purpose as well. They’re great at writing polite emails, crafting generic responses, making basic presentations, and composing the odd filler paragraph or image. Even that is contingent on how well you craft the prompt.</p><p>This is not to say that AI doesn’t have its uses as a tool that can analyse millions of data points in the blink of an eye. It is excellent at analysing, say, a clump of cells to <a href="https://www.sciencenewstoday.org/how-ai-is-transforming-healthcare-diagnosis-treatment-and-beyond">check for abnormalities such as cancer</a>. AI and neural networks have been instrumental in analysing the massive amounts of data that telescopes around the world have been making available to astronomers. AI has helped scientists <a href="https://www.sciencedaily.com/releases/2023/02/230207144222.htm">discover several new exoplanets</a>, for instance, and predict signatures of <a href="https://cerncourier.com/a/gravitational-wave-astronomy-turns-to-ai/">new kinds of gravitational waves</a>.</p><p>As a statistical tool, AI can be put to several great uses, since it can go through very large data sets in a very short time, especially given how exponentially computing power has increased. Under the imaginative instructions of scientists and others, AI is a fascinating tool that opens even more avenues for expanding our understanding of the world. I only wish it were used for things other than priming social media users for advertisers.
One of the more recent trends in AI use has been the Studio Ghibli-like <a href="https://x.com/Zeneca/status/1904774204769411196">rendering of everything</a>; and I <a href="https://x.com/WhiteHouse/status/1905332049021415862">do mean</a> everything. Despite all that computing power, at the peak of the Ghibli trend, OpenAI’s CEO Sam Altman posted on X that “<a href="https://x.com/sama/status/1905296867145154688">our GPUs are melting</a>.”</p><p><strong>What does ChatGPT actually do?</strong></p><p>AI chatbots such as ChatGPT are essentially Large Language Models (LLMs). An LLM is simply an AI programme that knows several million combinations of words drawn from the millions and millions of texts it has been trained on. It can use these words to form sentences in the way they have been formed in its training data, and use sentences to form paragraphs. It is not <em>thinking</em> about the question you ask it. It simply takes the words in the question, refers to the millions of texts it has been trained on, and spits out an answer that sounds like all the other combinations of words that humans have used in its training data. The same is true for art and music and presentation decks and everything else it can do.</p><p>This is good enough for last-minute class assignments and replying to yet another email in which you must politely say no. It is also not terrible at summarising some texts and creating visuals for presentations. I have found that very often, particularly with analytical or theoretical texts in the social sciences, it gets the broad point but misses important nuances and subtexts.</p><p>It’s not exactly the smartest at creating policy for science and innovation.
Earlier this year, it was revealed that the UK’s technology secretary Peter Kyle had used ChatGPT to figure out, among other things, which podcasts he should appear on, the definition of antimatter, and why small and medium businesses in the UK have been slow to adopt AI. In the furore that followed the revelation, one of the questions asked was how the journalist who found this information figured out that it could be obtained under the UK’s Freedom of Information Act.</p><p>My point is: LLMs are not going to take over the world and make all our jobs redundant. An AI chatbot might be able to tell the minister which podcasts are relevant for him, and compile information from its training dataset on how to get small businesses to use AI; but it cannot come up with new ideas for policy-making, or figure out how to use existing Freedom of Information rules to find out what ChatGPT prompts the minister was using. It cannot do the investigative job of a journalist who has ‘a nose for news’.</p><p><strong>Not yet quite human</strong></p><p>Scientists such as Timnit Gebru and Margaret Mitchell, who have worked extensively on ethics and accountability in AI, have called LLMs ‘<a href="https://dl.acm.org/doi/10.1145/3442188.3445922">stochastic parrots</a>’. They use the term ‘stochastic’ to describe how LLMs are random or probabilistic, generating language based on statistical probabilities rather than meaningful intention. The word ‘parrot’ signifies imitation without comprehension.</p><p>Noam Chomsky and others have described an LLM as ‘a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or <a href="https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html">most probable answer to a scientific question</a>’.
They argue that this is considerably less than what a human child learns to do as it acquires a language, and marvel at how the child ‘is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters.’</p><p>To use Hayao Miyazaki’s well-regarded Studio Ghibli style to generate more memes is one thing; it is another thing entirely to <em>make</em> and <em>do</em> art. Because it is important to ask what art really is. To put it in an Indian context, take something as powerful as S H Raza’s ‘bindu’, and compare it to the AI chatbot’s Ghibli-ised rendering of everything. The evolution of the ‘bindu’ through Raza’s life and work, its symbolism, and the understanding of modern Indian art through its emergence are at the heart of what art is and can be. In comparison, AI’s Ghibli reproductions are entirely empty of meaning. They’re cute, of course; but that’s because Miyazaki made them so, not the AI. To remove the politics of art from its rendering, to take the soul of what Studio Ghibli stands for out of its style, is simply mindless imitation.</p><p>AI chatbots can be a very useful tool, much like Google Search was when it first emerged and revolutionised the way we found information. Libraries and research skills did not go extinct because of Google Search; they became, if anything, better. Generative AI can summarise our texts, imitate our best art, and reply like a friend if we ask it to. But the imagination of an intelligence that wants to take over the world; the writing of stories about the intriguing amorality of an AI caught between logic and emotion; the fear that an intelligence we create could make us redundant?
That fear, that imagination, that feeling remains a deeply human thing.</p><p><em>(Vidya Subramanian is associate professor at Jindal Global Law School (JGLS))</em></p><p><em>X: @vidyas42</em></p><p><em>Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH.</em></p>