“I think it is important that people are aware of the dangers of AI.” - Geoffrey Hinton, AI scientist
Deepfake abuse alarms government
India has faced a rise in manipulated videos that blend artificial intelligence with social influence. Incidents involving fabricated political clips and celebrity deepfakes have shown how easily misinformation spreads and how difficult it is to verify content before it travels far and wide. Such AI-generated content is broadly called 'synthetic media'.
Unreliable detection is troublesome
Current tools fail to keep pace with rapidly evolving synthetic media. They perform inconsistently across Indian languages, since most AI models are trained primarily on English data. Government researchers and industry collaborators found that many videos circulating on popular platforms had been altered yet remained undetected for days, amplifying misinformation and confusion. The government therefore wants tighter rules requiring platforms to label AI-generated content clearly and act faster on flagged misinformation. Draft guidelines call for watermarks, traceability mechanisms and improved content moderation systems. India's fact-checking units and digital forensics teams are also expanding their capabilities to investigate suspicious media.
Inherent limitations
Major platforms like Meta, YouTube and WhatsApp have introduced automated systems to detect manipulated videos, but many gaps persist due to the fast-moving nature of this technology. Tools that work well in English struggle with regional languages. Without multilingual support and transparent auditing of AI detection models, the misuse of deepfakes will remain widespread.
The road ahead
Researchers say India needs a stronger collaborative network between government, academia and private companies. Better education for users, clearer platform disclosures and improved technical standards can collectively reduce harm while protecting freedom of expression.
Summary
India is strengthening its approach to deepfake regulation as manipulated videos spread across languages and platforms. Despite new rules and detection tools, technical gaps and inconsistent performance still limit effective control.
Food for thought
How can India balance strict deepfake regulation with the need to preserve free expression in a diverse digital society?
AI concept to learn: Deepfake Detection
Deepfake detection focuses on identifying media that has been synthetically altered using AI. It analyses visual or audio artefacts, inconsistencies or model-generated patterns that humans may miss. Beginners should understand that detection is always a race against increasingly realistic generative techniques.
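For learners who want a concrete picture, here is a minimal Python sketch of one common approach: sampling video frames and scoring each with a binary classifier. It is an illustration under stated assumptions, not a real detector; the model checkpoint 'deepfake_classifier.pt', the file 'suspect_clip.mp4' and the 0.5 review threshold are all hypothetical placeholders.

# Minimal sketch of frame-level deepfake screening, for illustration only.
# Assumes a hypothetical fine-tuned classifier that outputs a single logit
# for P(frame is synthetic); the checkpoint and threshold are placeholders.

import cv2                      # pip install opencv-python
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def frame_fake_probability(model, frame_bgr):
    # Convert an OpenCV BGR frame to a model input and return P(synthetic).
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    batch = preprocess(rgb).unsqueeze(0)          # shape: [1, 3, 224, 224]
    with torch.no_grad():
        logit = model(batch)                      # assumed single-logit output
    return torch.sigmoid(logit).item()

def screen_video(path, model, sample_every=30):
    # Sample every Nth frame and average the per-frame 'fake' scores.
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            scores.append(frame_fake_probability(model, frame))
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    # Hypothetical checkpoint; a real deployment would use an audited model.
    model = torch.load("deepfake_classifier.pt")
    model.eval()
    score = screen_video("suspect_clip.mp4", model)
    print(f"Average synthetic-content score: {score:.2f}")
    print("Flag for human review" if score > 0.5 else "No strong signal")

Even in this simplified form, the sketch shows why detection is a moving target: the classifier only recognises artefact patterns it was trained on, so newer generative models can slip past it until the detector is retrained.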
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used. All copyrights acknowledged. This is not professional, financial, personal or medical advice. Please consult domain experts before making decisions. Feedback welcome!]