<p>New Delhi: The Centre on Friday revised its earlier advisory to the largest social media companies in the country on the use of artificial intelligence (AI), changing a provision that had mandated intermediaries and platforms to get government permission before deploying “under-tested” or “unreliable” AI models and tools in the country.</p><p>“Under-tested/unreliable Artificial Intelligence foundational model(s)/LLM/Generative AI, software(s) or algorithm(s) or further development on such models should be made available to users in India only after appropriately labelling the possible inherent fallibility or unreliability of the output generated,” the new advisory said.</p><p>The new advisory issued on Friday supersedes the earlier advisory issued on March 1.</p><p>In its new form, intermediaries are no longer required to submit an action taken-cum-status report, but are still required to comply with immediate effect.</p><p>While the March 1 advisory had asked platforms to obtain the government’s “explicit permission” before deploying AI models, the new advisory says under-tested and unreliable AI models should be made available in India only after they are labelled to inform users of the “possible inherent fallibility or unreliability of the output generated”.</p>