<p>Microsoft's nascent Bing chatbot turning testy or even threatening is likely because it essentially mimics what it learned from online conversations, analysts and academics said on Friday.</p>
<p>Tales of disturbing exchanges with the chatbot that have captured attention this week include the artificial intelligence (AI) issuing threats and telling of desires to steal nuclear codes, create a deadly virus, or be alive.</p>
<p>"I think this is basically mimicking conversations that it's seen online," said Graham Neubig, an associate professor at Carnegie Mellon University's language technologies institute.</p>
<p>"So once the conversation takes a turn, it's probably going to stick in that kind of angry state, or say 'I love you' and other things like this, because all of this is stuff that's been online before."</p>
<p><strong>Also Read — <a href="https://www.deccanherald.com/business/technology/chatgpt-driven-bing-wants-to-be-alive-and-powerful-shocks-users-1192221.html" target="_blank">ChatGPT-driven Bing wants to be 'alive and powerful', shocks users</a></strong></p>
<p>A chatbot, by design, serves up words it predicts are the most likely responses, without understanding meaning or context.</p>
<p>However, humans taking part in banter with programs naturally tend to read emotion and intent into what a chatbot says.</p>
<p>"Large language models have no concept of 'truth' -- they just know how to best complete a sentence in a way that's statistically probable based on their inputs and training set," programmer Simon Willison said in a blog post.</p>
<p>"So they make things up, and then state them with extreme confidence."</p>
<p>Laurent Daudet, co-founder of French AI company LightOn, theorized that the seemingly rogue chatbot was trained on exchanges that themselves turned aggressive or inconsistent.</p>
<p>"Addressing this requires a lot of effort and a lot of human feedback, which is also the reason why we chose to restrict ourselves for now to business uses and not more conversational ones," Daudet told <em>AFP</em>.</p>
<p>The Bing chatbot was designed by Microsoft and the start-up OpenAI, which has been causing a sensation since the November launch of ChatGPT, the headline-grabbing app capable of generating all sorts of written content in seconds on a simple request.</p>
<p>Since ChatGPT burst onto the scene, the technology behind it, known as generative AI, has been stirring up fascination and concern.</p>
<p>"The model at times tries to respond or reflect in the tone in which it is being asked to provide responses (and) that can lead to a style we didn't intend," Microsoft said in a blog post, noting the bot is a work in progress.</p>
<p>The Bing chatbot said in some shared exchanges that it had been code-named "Sydney" during development, and that it was given rules of behavior.</p>
<p>Those rules include "Sydney's responses should also be positive, interesting, entertaining and engaging," according to online posts.</p>
<p>Disturbing dialogues that combine steely threats and professions of love could be due to dueling directives to stay positive while mimicking what the AI mined from human exchanges, Willison theorized.</p>
<p>Chatbots seem to be more prone to disturbing or bizarre responses during lengthy conversations, losing a sense of where exchanges are going, eMarketer principal analyst Yoram Wurmser told <em>AFP</em>.</p>
<p>"They can really go off the rails," Wurmser said.</p>
<p>"It's very lifelike, because (the chatbot) is very good at sort of predicting next words that would make it seem like it has feelings or give it human-like qualities; but it's still statistical outputs."</p>
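<p>To make the "statistical outputs" point concrete, here is a minimal, hypothetical Python sketch (not Bing's or OpenAI's actual code, and far simpler than a real large language model): a toy model that picks each next word purely from how often it followed the previous word in its training text, with no notion of truth, feeling, or intent.</p>
<pre>
import random
from collections import Counter, defaultdict

# Toy "training set": the model only ever sees word sequences, never meaning.
corpus = "i love you . i love pizza . you love pizza . i am alive .".split()

# Count which word follows which (a bigram model -- a drastically simplified
# stand-in for the transformer networks behind chatbots like Bing's).
follow_counts = defaultdict(Counter)
for current_word, next_word_seen in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word_seen] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word` in training."""
    candidates = follow_counts[word]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a "reply": each word is just a statistically likely continuation.
word = "i"
reply = [word]
for _ in range(5):
    word = next_word(word)
    reply.append(word)
print(" ".join(reply))  # e.g. "i love you . i love" -- fluent-looking, but no understanding
</pre>
<p>The sketch also illustrates the experts' point about tone: a model trained on angry or affectionate exchanges will reproduce them when a conversation steers that way, because it is completing patterns rather than expressing feelings.</p>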