At a glance
India has introduced a comprehensive framework to regulate AI-generated content. These rules establish mandatory labelling and rapid takedown requirements to protect digital trust and individual dignity.
Executive overview
The Ministry of Electronics and Information Technology has amended the Information Technology Rules 2021 to address the rise of synthetic media. By integrating the India AI Governance Guidelines 2025, the government now mandates proactive disclosure, persistent metadata, and accelerated enforcement timelines to mitigate risks like deepfakes and misinformation.
Core AI concept at work
Synthetically generated information refers to any audio, visual, or audio-visual content created or altered using computer resources or algorithms. This technology produces media that appears authentic or indistinguishable from reality. The regulatory objective is to ensure such content is identifiable through technical markers without hindering legitimate creative or educational uses.
Key points
- Intermediaries must implement technical measures to ensure all synthetically generated information is prominently labelled and contains persistent provenance metadata.
- Platforms are now required to obtain user declarations for AI-generated uploads and deploy automated tools to verify the accuracy of these disclosures.
- Enforcement timelines for removing unlawful synthetic content have been compressed to as little as three hours to prevent the viral spread of harmful media.
- The framework excludes routine technical edits and accessibility enhancements to preserve innovation while focusing on content that could mislead the public.
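As an illustration of the kind of "persistent provenance metadata" the first point describes, the following is a minimal, hypothetical sketch of how a platform might attach a provenance manifest to a synthetic media file and later verify that the content has not been altered. The manifest fields, label text, and tool name are illustrative assumptions, not anything specified by the rules themselves:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_manifest(content: bytes, tool: str) -> dict:
    """Build a hypothetical provenance record for a piece of synthetic content."""
    return {
        "label": "synthetically-generated",              # prominent disclosure label
        "generator": tool,                               # tool that produced the media
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),   # binds manifest to the bytes
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Tamper check: does the manifest still match the content it was issued for?"""
    return (
        manifest.get("label") == "synthetically-generated"
        and manifest.get("sha256") == hashlib.sha256(content).hexdigest()
    )

media = b"\x89PNG...example image bytes"
manifest = make_provenance_manifest(media, tool="example-gen-model")
print(json.dumps(manifest, indent=2))
print(verify_manifest(media, manifest))         # unmodified content verifies
print(verify_manifest(media + b"x", manifest))  # altered content fails verification
```

Real deployments would more likely rely on standardised, cryptographically signed provenance formats (for example, C2PA-style content credentials) embedded in the media file itself, but the hash-binding idea above captures why such metadata can persist as a technical marker across the content lifecycle.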
Frequently Asked Questions (FAQs)
What is the primary goal of the new synthetic media regulations in India?
The regulations aim to enhance transparency and accountability by requiring clear identification of AI-generated content to prevent fraud and deepfakes. This ensures that citizens can readily distinguish authentic media from computer-generated media.
How do the amended IT Rules impact digital platforms and social media intermediaries?
Platforms must now follow stricter due diligence, including mandatory labelling of synthetic content and responding to takedown orders within highly compressed timeframes. Failure to comply with these proactive governance requirements may result in the loss of statutory safe harbour protections.
Final takeaway
India has shifted from reactive content moderation to an ex ante governance model for artificial intelligence. This approach balances the protection of constitutional values with the need for technological advancement by embedding transparency and accountability directly into the digital content lifecycle.
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used. All copyrights acknowledged. This is not professional, financial, personal, or medical advice. Please consult domain experts before making decisions. Feedback welcome!]