“The more powerful the technology, the more important it is to ensure it’s aligned with human values.” - Demis Hassabis, CEO, DeepMind
A growing need for clarity in AI content
AI-generated images and videos, often labelled deepfakes when used to deceive, are increasingly shaping public perception. With generative AI tools producing photorealistic content with little effort, the risk of misinformation and impersonation has grown. The Indian government has now taken its first statutory step toward curbing these risks by mandating the labelling of AI-generated content under the proposed amendments to the IT Rules, 2021.
Balancing innovation with responsibility
The move focuses not on censorship but on transparency. The government says creators will simply need to indicate whether their content is synthetic, allowing audiences to make informed choices. The emphasis remains on enabling innovation while building public trust in digital information.
Defining synthetic media in law
For the first time, Indian law defines “synthetically generated information” and brings it under existing takedown and moderation rules. Images and videos must carry visible labels over at least 10% of the display area, while audio must identify itself within the first 10% of playback. Platforms hosting such content must verify and flag manipulated media.
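To make the 10% thresholds concrete, the sketch below shows the kind of simple arithmetic check a platform could run. It is a minimal, hypothetical Python example: the function names, inputs, and the very idea of automated verification are assumptions for illustration, not anything the draft rules prescribe.

```python
# Hypothetical helpers illustrating the 10% thresholds described in the draft rules.
# The function names and inputs are illustrative assumptions only; the rules do not
# prescribe any particular implementation.

def label_covers_min_area(image_w: int, image_h: int,
                          label_w: int, label_h: int,
                          min_fraction: float = 0.10) -> bool:
    """Return True if a visible label occupies at least `min_fraction`
    of the total display area (10% in the draft proposal)."""
    return (label_w * label_h) >= min_fraction * (image_w * image_h)


def audio_disclosure_in_time(total_seconds: float,
                             disclosure_end_seconds: float,
                             min_fraction: float = 0.10) -> bool:
    """Return True if the audio identifies itself as synthetic within the
    first `min_fraction` of playback (the first 10% in the draft proposal)."""
    return disclosure_end_seconds <= min_fraction * total_seconds


if __name__ == "__main__":
    # A 1920x1080 frame with a 640x360 label: 230400 / 2073600 is about 11.1%, so it passes.
    print(label_covers_min_area(1920, 1080, 640, 360))   # True
    # A 60-second clip whose disclosure ends at 5 seconds: 5s <= 6s, so it passes.
    print(audio_disclosure_in_time(60.0, 5.0))           # True
```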
A cautious yet necessary beginning
Experts see the draft regulation as a measured response to growing concerns around poll interference, scams, and reputational misuse. However, some argue the rules lack clarity on text-based AI output, underground content, and deepfake detection mechanisms; these gaps must be addressed during public consultation.
The path ahead for AI accountability
India’s framework marks a notable step in AI governance, reflecting an attempt to balance free expression with factual integrity. Effective enforcement and digital literacy will determine whether this transparency model can scale without stifling innovation.
Summary
India’s draft rules on labelling AI-generated content represent a crucial step toward responsible AI adoption. By focusing on transparency rather than restriction, the government aims to reduce misinformation, build trust, and lay the groundwork for ethical AI development.
Food for thought
Will mandatory labelling alone be enough to rebuild public trust in an era where seeing is no longer believing?
AI concept to learn: Synthetic Media
Synthetic media refers to audio, video, or images created or manipulated by artificial intelligence to appear authentic. It includes deepfakes, AI-generated voices, and photorealistic visuals: technologies that blur the line between real and artificial and demand new norms for verification and transparency.
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. This is not professional, financial, personal, or medical advice. Please consult domain experts before making decisions. Feedback welcome!]
