<p>Bengaluru: Behind the AI assistants that write emails, generate images and talk to you are Large Language Models (LLMs), vast neural networks trained to mimic human language.</p><p>Built on Natural Language Processing (NLP) and the underlying task of “next word prediction”, AI chatbots are, at their core, enormous networks of mathematical abstractions trained on existing text from the internet.</p><p>If you type “Barack”, for instance, the model predicts the next word as “Obama”, based on the vast amounts of internet text it was trained on.</p><p>AI chatbots can identify specific user requirements and generate the most relevant responses to them. Over the last few years, their evolution has largely centred on linguistic, mathematical, and logical reasoning tasks.</p><p>According to Statista, as of July 2024, more than one-third of India's Generative AI startups are in the code and data segment, followed by audio and video segment startups at 27 per cent.</p><p>Multiple sectors, including banking, are increasingly adopting AI assistants in pursuit of operational and cost efficiency. In healthcare, private hospitals have been building chatbots to interact with patients at a primary level, largely to establish the urgency of their requirements.
Sector-specific academic courses in AI point to its growing range of applications across segments.</p><p>Industry analysts note that privacy breaches and irrelevant or incorrect responses caused by faulty input data are among the major risks involved.</p><p><strong>Being human(-like)</strong></p><p>Hariom Seth, founder of Tagglabs, an AI-driven advertising firm, underlines data sensitivity and user experience as key considerations in both banking and healthcare.</p><p>There could be answers in the next wave of innovation, where Generative AI is expected to evolve closer to more human-like attributes such as empathy and political correctness.</p><p>Danish Pruthi, Assistant Professor at the Department of Computational and Data Sciences at IISc, notes that AI chatbots do not inherently possess these qualities. “Ultimately, the chatbots are trained on input texts, written by people. They become fluent in the language that we use. Instruction fine-tuning, where you teach the model to respond like humans using tens of thousands of instructions or prompts, is where all the memory is updated,” he said.</p><p>On the tendency to anthropomorphise scientific innovations, Danish draws an analogy.</p><p>“If you ask Bill Gates what it feels like to be extremely poor, he will have a response based on his readings and observations of the real world. A similar thought process goes into the functioning of an AI chatbot. A generative AI response may look empathetic and politically correct, due to the LLMs, largely created by humans, which may include personal experiences, opinions, and biases,” he says.</p><p>While there are concerns over whether AI will take over creative jobs, Danish believes there will be displacement in the number of people required for certain tasks. But it is too early to say whether a human being is needed at all for a given job, he says.</p>
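<p>The “next word prediction” idea described earlier can be sketched in miniature. The toy Python script below counts, over a tiny made-up corpus, which word most often follows each word and predicts on that basis; this is only an illustration of the principle, not how production LLMs (which use neural networks over far larger datasets) actually work.</p>

```python
from collections import Counter, defaultdict

# Toy bigram "next word prediction": count which word most often
# follows each word in a small training corpus (made up for illustration).
corpus = (
    "barack obama was the president . "
    "barack obama won the election . "
    "the president gave a speech ."
).split()

# Map each word to a counter of the words seen immediately after it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str):
    """Return the most frequent continuation seen in training, if any."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("Barack"))  # → obama
```

<p>Scaled up from word counts over a few sentences to neural networks trained on much of the internet, the same prediction task underpins the chatbots discussed in this article.</p>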