<p>The uproar over the sacking of Sam Altman as CEO of OpenAI, and his subsequent return to the same role, has raised many questions about the future of generative Artificial Intelligence (AI).</p><p>Now that the dust has settled, and Altman is not only back but well and truly in charge with a newly anointed board of his choice, one must wonder whether Big Tech's race to develop AI needs a pause: a chance to step back, as it were, and look more carefully at the potential of a scientific development that may well change the course of human history.</p> <p>A recap of the Altman episode is in order. The non-profit board of OpenAI suddenly decided to remove the CEO on grounds of inadequate communication. Initially, this seemed a puzzling reason, but reports later emanating from the San Francisco-based company indicated that developments in the field of artificial general intelligence (AGI) may have been of concern to the board members.</p> <p>Even so, it was an abrupt and arbitrary move, especially as key investors such as Microsoft had no inkling of the plan. It must be noted that OpenAI has a unique governance model: the board of the original non-profit venture controls the for-profit arm, in which Microsoft holds a stake of nearly 50 per cent. The upshot of the sudden removal was that Microsoft offered to bring Altman on board to lead a new AI research team, while OpenAI went on to appoint two CEOs in succession. The second of these, Twitch co-founder Emmett Shear, vowed to investigate the reasons for Altman's sacking. Developments then moved rapidly as most of the company's 770 employees declared their intent to depart en masse to Microsoft. 
At the end of the five-day drama, Altman was reinstated as CEO and three new board members were brought in, including former US Treasury Secretary Larry Summers.</p> <p>The back story of this bizarre episode seems to be the development of a new AI model called, in proper science fiction style, Q* (Q star). One of its startling capabilities is reported to be solving basic mathematical problems, a feat not yet achieved by any other AI model. There has been no official confirmation of this development from OpenAI, only the many stories circulating on the Internet. Yet it is clear from technical explanations in online tech magazines that there has been previous research on what is known as 'Q-learning'.</p> <p>For the layman, the new capabilities of Q* can be envisaged as the difference between systems that rely only on data from human sources and systems able to think in a more creative fashion. For instance, one publication has described existing algorithms as being like a robot in a maze that relies on directions from humans to move left or right, whereas Q* is like a robot that tries different routes on its own to reach the exit. This would indeed be a huge leap for AI, though many technical experts would still argue that it has little scope to harm humanity in the future.</p> <p>Yet from a layman's point of view, the capability of an AI model to make decisions without being guided at every step is a scary scenario. Adding to this are Altman's reported comments that AGI could be described as a "median human who could be hired as a co-worker" alongside humans. This seems an excessively casual approach to an issue that raises many ethical dilemmas. It is surely time for Big Tech to pause the breakneck, competitive pace at which AI research is being carried out. 
A more nuanced approach to the development of such pathbreaking systems now needs to be taken.</p> <p>In India too, a country with one of the largest pools of AI engineers, discussions need to be held on ways to regulate and monitor this cutting-edge technology. Significantly, Google's representative here recently highlighted the need for "guardrails" around a technology that requires all companies, not just one or two, to be responsible players.</p> <p>The Sam Altman imbroglio has been a blessing in disguise. It has put the spotlight on the fact that AI research needs to slow down so that ethical concerns about its impact on humanity can be fully addressed. For now, the public is aware only of the progress reportedly made on the Q* model, but it is quite possible that other tech companies are moving in the same direction. Such developments need to be considered holistically, keeping ethical guidelines in mind. Science should be harnessed for the benefit of humanity rather than allowed to grow in a manner that could harm the world.</p><p><em>(Sushma Ramachandran is a senior journalist)</em></p><p><em>Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH.</em></p>