India amends IT rules to regulate synthetic media and deepfakes

At a glance

India has amended the IT Rules to regulate synthetically generated information. The updated regulations mandate content labelling and rapid takedown timelines for deepfakes.

Executive overview

The Ministry of Electronics and Information Technology notified the IT Amendment Rules 2026 to address the proliferation of AI-generated misinformation. These regulations establish a statutory definition for synthetic content, requiring platforms to implement mandatory labelling and traceability. The rules aim to mitigate real-world harms through significantly compressed compliance windows.

Core AI concept at work

Synthetically generated information refers to audio, visual, or audio-visual data that is created or altered using computer resources and algorithms and that appears authentic or indistinguishable from real events or persons. The regulation focuses on photorealistic AI-generated media that portrays individuals or events in a potentially deceptive manner.

Key points

  1. Intermediaries must remove flagged unlawful content within a three-hour window after receiving a government or court order.
  2. Platforms are required to prominently label all AI-generated or modified content and embed permanent, traceable metadata.
  3. Social media users must declare whether their uploaded content is synthetically generated before it is published online.
  4. Technical intermediaries must deploy automated tools to detect and prevent the dissemination of illegal or deceptive synthetic media.

Frequently Asked Questions (FAQs)

What is the new takedown deadline for AI-generated deepfakes in India?

The amended rules require social media platforms to remove flagged deepfakes and non-consensual intimate imagery within two hours. Other forms of unlawful content must be taken down within three hours of a formal order.

How must platforms identify AI-generated content under the 2026 IT Rules?

Platforms must ensure that synthetic media carries a prominent visible label and permanent embedded metadata indicating its artificial nature. These identifiers must remain unalterable and traceable to the intermediary used to create or host the content.
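
To make the labelling idea concrete, here is a minimal sketch of how an uploader or platform might stamp a visible notice and machine-readable metadata onto an AI-generated image, using Python's Pillow library. The file names, metadata keys, and generator name are hypothetical illustrations, not a format prescribed by the rules.

```python
from PIL import Image, ImageDraw, PngImagePlugin

# Hypothetical example: label an AI-generated image and embed provenance metadata.
# File names and metadata keys are illustrative, not mandated by the IT Rules.
img = Image.open("synthetic_output.png").convert("RGB")

# Draw a prominent visible label in the top-left corner.
draw = ImageDraw.Draw(img)
draw.text((10, 10), "AI-GENERATED", fill="white")

# Attach machine-readable metadata as PNG text chunks.
meta = PngImagePlugin.PngInfo()
meta.add_text("synthetic", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical generator name

img.save("synthetic_output_labelled.png", pnginfo=meta)
```

Note that plain PNG text chunks can be stripped by re-encoding the file, so a compliance-grade solution would need a more tamper-resistant provenance mechanism than this sketch shows.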

Final takeaway

The 2026 IT Rule amendments shift digital governance from reactive monitoring to proactive accountability for synthetic media. By codifying labelling standards and accelerating takedown protocols, the government aims to establish a structured framework for managing the impact of generative artificial intelligence on public discourse.

[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used. All copyrights acknowledged. This is not professional, financial, personal or medical advice. Please consult domain experts before making decisions. Feedback welcome!]
