“The real risk is not that machines will think too much, but that humans will think too little.” - Eliezer Yudkowsky, AI researcher
AI content norms face industry resistance
The Internet and Mobile Association of India (IAMAI) has raised serious concerns over the government’s proposed rules on labelling synthetically generated information. The association argues that the definitions in the draft are vague and overly broad, and risk sweeping legitimate digital activities into regulatory confusion. This is precisely the fear many raised when the rules were first published.
Challenges in defining synthetic content
According to IAMAI, the proposed definition of synthetically generated information is subjective and difficult to apply consistently. The group warns that content ranging from routine photo corrections to stylised videos could be misclassified, even when there is no intent to deceive. This ambiguity could burden creators and platforms alike. IAMAI points out that existing laws, including the Information Technology Act and IT Rules of 2021, already address harmful content. Introducing overlapping rules may create enforcement uncertainty and place unnecessary pressure on social media platforms required to verify user declarations and label content.
Concerns over premature implementation
The proposed amendments require platforms to label every AI-driven piece of content with permanent metadata or identifiers. IAMAI believes this requirement is unreliable, burdensome, and premature, as current AI technologies and content workflows are still evolving rapidly.
Call for a technology-neutral approach
The industry body has advised the government to avoid rules that target specific technologies. Instead, it argues for a more flexible approach focused on preventing unlawful content rather than micromanaging how digital tools operate.
Summary
India’s tech industry has urged the government to reconsider draft AI content labelling rules, warning that unclear definitions and duplicate obligations could disrupt digital services and create uncertainty for creators and platforms.
Food for thought
If defining AI-generated content is already proving difficult, how will we regulate the far more sophisticated systems of the future?
AI concept to learn: Synthetic Content Labelling
Synthetic content labelling refers to marking or identifying information created or altered using artificial intelligence. It helps users understand whether what they are viewing is machine-generated. This concept is becoming important as AI tools increasingly blend into everyday media creation.
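To make the concept concrete, here is a minimal sketch of what a synthetic-content label might look like as a metadata record attached to a piece of media. This is purely illustrative: the field names (`synthetic`, `tool`, `edit_type`) and the function `make_synthetic_label` are hypothetical choices for this example, not part of the draft rules or any specific standard such as C2PA.

```python
import json
from datetime import datetime, timezone

def make_synthetic_label(tool: str, edit_type: str) -> dict:
    """Build a hypothetical provenance label for AI-generated or AI-altered media.

    Field names here are illustrative, not taken from any real standard.
    """
    return {
        "synthetic": True,          # flags the content as machine-generated/altered
        "tool": tool,               # name of the AI tool that produced the content
        "edit_type": edit_type,     # e.g. "generated", "retouched", "stylised"
        "labelled_at": datetime.now(timezone.utc).isoformat(),  # when the label was applied
    }

# Example: label an image produced by a (hypothetical) image model
label = make_synthetic_label("example-image-model", "generated")
print(json.dumps(label, indent=2))
```

In practice, real labelling schemes embed such records directly into file metadata or cryptographically signed manifests so that the label travels with the content; the debate in the article is partly about how reliably such identifiers can be made "permanent".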
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used. All copyrights acknowledged. This is not professional, financial, personal, or medical advice. Please consult domain experts before making decisions. Feedback welcome!]