<p>New Delhi: The Central government has introduced stricter regulations on AI-generated content, including deepfakes, requiring online platforms such as X and Instagram to remove any such material flagged by a competent authority or the courts within three hours.</p><p>The <a href="https://www.deccanherald.com/tags/ministry-of-electronics-and-information-technology">Ministry of Electronics and Information Technology</a> (MeitY) on Tuesday notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, which explicitly expand the scope of the law to cover “synthetically generated information”.</p><p>The amended rules will take effect from February 20, 2026.</p><p>The amendments define “audio, visual or audio-visual information” and “synthetically generated information” to include any AI-created or AI-altered content that appears real or authentic and is likely to be mistaken for genuine material involving real persons or events.</p><p>The government has clarified, however, that routine editing, formatting, technical corrections, and the good-faith creation of documents, PDFs, research outputs, and educational material will not be classified as synthetic content.</p><p>Key changes include significantly shorter takedown timelines. Platforms are now required to comply with government or court orders within three hours (previously 36 hours), as stated in the official gazette notification.</p><p>User grievance redressal timelines have also been tightened. 
Platforms must now respond to grievances within seven days (down from 15 days), and certain specified actions must be completed within two hours.</p><p>Under the new rules, all AI-generated content must be clearly and prominently labelled in a manner that is easily noticeable and adequately perceivable.</p><p>Intermediaries are also required to embed unique identifiers or metadata to trace the computer resource used to create, generate, or modify such content. Once applied, these labels and metadata cannot be removed, altered, or suppressed.</p><p>Calling for a ban on illegal AI content, the new rules say platforms must deploy automated tools to prevent AI content that is illegal, deceptive, sexually exploitative, non-consensual, or related to false documents, child abuse material, explosives, or impersonation.</p>