“The future of AI must be built on human values, transparency and accountability.” - Fei-Fei Li, AI pioneer
Industry concerns over new SGI rules
India’s draft rules on synthetically generated information have triggered concern among rights groups and technology bodies. They argue that the proposed amendments to the Information Technology Rules could stretch the mandate of the IT Act and impose obligations that go beyond what the law currently allows. Many fear this may unintentionally curb freedom of expression.
Ambiguity over intermediary responsibilities
A major point of contention is the unclear definition of what counts as an intermediary. Experts note that the draft does not clearly address whether generative AI platforms fall within this category. Under the IT Act, intermediaries can store, host or transmit content but cannot originate it. However, AI systems that generate content based on user prompts challenge this definition.
Risk of excessive labelling and censorship
The guidelines require intermediaries to label or block synthetically generated content when in doubt. Industry watchers warn that this could lead to over-labelling, accidental censorship and the unnecessary suppression of legitimate content. Even minor edits to images or text could be misclassified as SGI, slowing down communication and potentially affecting political speech.
Platform compliance burdens
According to the Internet and Mobile Association of India, the proposed rules place heavy compliance burdens on platforms. These include hashing obligations, scanning content and verifying declarations provided by users. Failure to comply could attract severe penalties, creating tensions with safe harbour protections guaranteed under existing law.
Privacy and surveillance concerns
Civil society organisations have also raised privacy worries. Techniques like large-scale hashing or content matching could create surveillance-like systems that violate the spirit of India’s intermediary protections. Critics argue that such mechanisms may not effectively differentiate between harmful AI-generated content and harmless user edits.
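The critics' point about hashing can be seen in a small sketch. The draft rules do not specify any particular scheme, so this is purely illustrative: it uses an exact cryptographic hash (SHA-256) to fingerprint content, and shows that a one-character edit produces an entirely different digest. Exact-match hashing therefore misses trivially edited copies of flagged content, while looser "fuzzy" matching risks sweeping in harmless user edits.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Exact-match fingerprint of a piece of content (illustrative only)."""
    return hashlib.sha256(data).hexdigest()

original = b"A photo caption describing a public event."
edited   = b"A photo caption describing a public event!"  # one-character edit

h1 = content_hash(original)
h2 = content_hash(edited)

# A single-character change yields a completely different digest, so an
# exact-hash blocklist cannot recognise lightly edited copies.
print(h1 == h2)  # False
```

This is the dilemma the critics describe: exact hashes under-match, and the perceptual or similarity-based alternatives that would catch edits require scanning content at scale, which is where the surveillance concern arises.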
Summary
The draft SGI rules have raised concerns about overreach, unclear definitions and potential censorship. Industry groups warn that compliance burdens and privacy risks could undermine free expression while creating legal uncertainties for generative AI platforms.
Food for thought
Can India regulate AI responsibly without pushing platforms into over-caution and users into silence?
AI concept to learn: AI regulation
AI regulation refers to the laws, policies, and guidelines designed to ensure artificial intelligence is developed and used safely, ethically, and responsibly. It aims to protect people from harm, prevent misuse, ensure transparency, and promote fairness. Regulation also establishes accountability for organisations building or deploying AI. As AI systems become more powerful, governments and global institutions are creating rules around data privacy, bias, safety testing, model disclosures, and human oversight to balance innovation with public trust and societal well-being.
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used. All copyrights acknowledged. This is not professional, financial, personal or medical advice. Please consult domain experts before making decisions. Feedback welcome!]