“AI is powerful, but without strong human oversight and accountability, its misuse can easily outpace regulation.” - Kate Crawford, Author of Atlas of AI
Where AI labelling norms fall short
The recent efforts to regulate AI-generated content in India highlight an important concern: how effective are current labelling rules in deterring misuse? From deepfakes impersonating public figures to AI-generated fraud, the scope of harm is expanding faster than governance can keep pace.
Weak enforcement and unclear scope
Under the proposed amendments to India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, AI-generated content (classified as “synthetically generated information”, or SGI) must be labelled. Yet these obligations apply only to intermediaries, not directly to those creating harmful content. This limited scope weakens enforcement and accountability, especially for social media giants hosting manipulated visuals or voice clips.
Deepfake dilemmas and industry resistance
Celebrities and politicians have already faced reputational harm from AI deepfakes. However, the entertainment industry argues that mandatory labelling could stifle creativity. Moreover, the rule’s technical requirements, such as a label covering at least 10% of a manipulated visual or the opening portion of an audio clip, remain vague and easy to bypass, making compliance inconsistent.
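To make that ambiguity concrete, one possible reading of the 10% figure is a simple area check: a visible label must occupy at least one-tenth of the frame. The rules do not specify how this should be measured, so the sketch below is only a hypothetical illustration using the Pillow imaging library; the threshold, label text, and function names are assumptions, not official compliance tooling.

```python
# Hypothetical sketch: check whether a visible "AI-generated" banner covers
# at least 10% of an image's area, and stamp one across the bottom.
# Assumes the Pillow library (pip install pillow); the 10% threshold and
# label text are illustrative assumptions, not an official specification.
from PIL import Image, ImageDraw

MIN_COVERAGE = 0.10  # assumed interpretation of the draft 10% rule

def label_coverage(image_size, label_box):
    """Fraction of the image area covered by the label bounding box."""
    img_w, img_h = image_size
    left, top, right, bottom = label_box
    label_area = max(0, right - left) * max(0, bottom - top)
    return label_area / (img_w * img_h)

def stamp_label(img, text="AI-GENERATED"):
    """Draw a banner across the bottom 10% of the image as a visible label."""
    draw = ImageDraw.Draw(img)
    w, h = img.size
    banner = (0, int(h * (1 - MIN_COVERAGE)), w, h)
    draw.rectangle(banner, fill="black")
    draw.text((10, banner[1] + 5), text, fill="white")
    return banner

if __name__ == "__main__":
    img = Image.new("RGB", (1280, 720), "gray")  # stand-in for real content
    box = stamp_label(img)
    print(f"Label covers {label_coverage(img.size, box):.0%} of the frame")
```

Even under this generous reading, such a banner is trivial to crop or re-encode away, which is part of why labelling alone offers weak deterrence.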
Differential treatment and practical barriers
Significant social media intermediaries (SSMIs) face a two-tier verification and compliance system. While these steps aim for transparency, smaller platforms evade similar scrutiny, creating loopholes. Additionally, delays in implementing penalties for non-compliance, as seen in France’s and India’s proposed frameworks, limit deterrence against AI misuse.
The need for stronger deterrence
India’s effort is commendable as a first step, but the lack of proportional penalties and the ambiguity of its guidelines make AI labelling appear largely symbolic. True deterrence would require combining clear liability, verified identification, and swift prosecution of AI-enabled crimes.
Summary
India’s proposed AI labelling norms attempt to bring transparency but fall short of curbing deepfake misuse. Limited coverage, vague labelling rules, and weak enforcement create gaps that bad actors can exploit. Stronger accountability, clarity, and deterrent penalties are urgently needed.
Food for thought
If AI can mimic truth so convincingly, can regulation ever truly keep up with deception?
AI concept to learn: AI Deepfakes
AI deepfakes are hyper-realistic synthetic media generated using deep learning, often combining real and fabricated elements. They can convincingly imitate people’s faces, voices, or actions, raising major concerns around misinformation, fraud, and digital identity abuse.
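A complementary mitigation often discussed alongside visible labels is machine-readable provenance metadata embedded in the file itself, which platforms could check on upload. The sketch below is purely illustrative and assumes the Pillow library; the tag name “SyntheticallyGenerated” is a hypothetical placeholder, not a key defined by the IT Rules or by provenance standards such as C2PA.

```python
# Hypothetical sketch: embed and read a "synthetically generated" tag in a
# PNG's metadata using Pillow. The key name "SyntheticallyGenerated" is an
# illustrative assumption, not a standard defined by any regulation.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_sgi_tag(img, path):
    """Save an image with a provenance marker in its PNG text metadata."""
    meta = PngInfo()
    meta.add_text("SyntheticallyGenerated", "true")  # provenance marker
    img.save(path, pnginfo=meta)

def is_tagged_synthetic(path):
    """Return True if the file carries the synthetic-content marker."""
    with Image.open(path) as img:
        return img.info.get("SyntheticallyGenerated") == "true"

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), "white")  # stand-in for generated media
    save_with_sgi_tag(img, "sample.png")
    print("Tagged as synthetic:", is_tagged_synthetic("sample.png"))
```

Like visible labels, such metadata is easy to strip during re-encoding, which is why provenance tags are usually paired with stronger measures such as platform-level detection and legal liability.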
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. This is not professional, financial, personal, or medical advice. Please consult domain experts before making decisions. Feedback welcome!]