“There is no such thing as a neutral dataset.” - Timnit Gebru, Ethiopian-born computer scientist of Eritrean descent
Me and my copy, delivered instantly
The rise of generative technologies has been a boon for the creative-minded. But for well-known faces, a new challenge has arisen - clones. Several Bollywood stars have approached the judiciary over AI-generated videos misusing their likeness, showing how deepfakes and synthetic media have swamped the public space. Authenticity and imitation are now hard to tell apart. So the core question is: how can one's identity be protected?
What rights do 'personalities' have?
Every personality is unique and identifiable by certain 'markers'. Personality rights cover an individual’s name, image, voice, and other markers of identity. Traditionally grounded in dignity, privacy, and economic autonomy, these rights now act as a shield against unauthorised exploitation. But deepfakes created using GenAI can easily morph faces or voices, fabricate scenarios, and spread misinformation at scale. As AI tools evolve, they present new vulnerabilities, and the legal regime is unprepared to tackle them.
The world speaks differently
Global legal frameworks differ widely. European law treats personality rights under a dignity-based model, while the United States frames them as rights of publicity that vary from state to state. India remains largely reactive, relying on older legislation such as the Information Technology Act and limited judicial precedents. Courts have recognised misappropriation and defamation linked to AI misuse, but comprehensive rules remain absent.
The human-centred lens on AI
International bodies such as UNESCO promote rights-based approaches that stress anonymity safeguards, transparency, and strict regulation of deepfakes. Scholars note that while AI enables creativity, it also enables manipulation. Reconstructed voices of deceased artists and AI-driven impersonation in harmful contexts show how fragile human identity becomes in the hands of powerful models.
Responsible innovation is the only solution
India’s advisory on deepfakes marks a start, yet experts argue that stronger laws are needed. Clear rules on watermarking, platform liability, cross-border cooperation, and user protections can help balance innovation with dignity. Without coordinated global action, personal identity risks entering a phase of unchecked commodification.
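To make the watermarking idea concrete, here is a hedged toy sketch of one of the simplest provenance techniques: hiding an invisible bit pattern in an image's least significant bits. Real-world schemes (such as cryptographically signed metadata or robust neural watermarks) are far more sophisticated; the function names and tiny image here are purely illustrative.

```python
import numpy as np

def embed_watermark(image, bits):
    """Overwrite the least significant bit of the first len(bits) pixels.

    Changing only the lowest bit shifts each pixel by at most 1, which is
    invisible to the eye but machine-readable - the core watermarking idea.
    """
    flat = image.flatten()  # flatten() returns a copy, so the input is untouched
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear the low bit, then set it to b
    return flat.reshape(image.shape)

def read_watermark(image, n_bits):
    """Recover the embedded bit pattern from the low bits."""
    return [int(p) & 1 for p in image.flatten()[:n_bits]]

# Toy 4x4 greyscale "image" and an 8-bit marker (illustrative values only)
img = np.random.default_rng(1).integers(0, 256, size=(4, 4), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 1, 0, 0]
tagged = embed_watermark(img, mark)
print(read_watermark(tagged, 8))  # [1, 0, 1, 1, 0, 1, 0, 0]
```

The design point such rules depend on: the marker survives inside the content itself, so a platform can check provenance even after the file is re-shared without its metadata.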
Summary
AI-generated likeness misuse highlights gaps in global personality rights. Deepfakes threaten privacy, dignity, and safety, while legal systems struggle to keep pace. Stronger legislation, accountability measures, and human-centred governance are essential to protect identity in an AI-driven world.
Food for thought
If AI can convincingly recreate a person, where should society draw the line between innovation and identity theft?
AI concept to learn: Deepfakes
Deepfakes use machine learning models to generate realistic synthetic images, audio, or video that imitate real people. They rely on training data to map facial expressions or voice patterns and then recreate them in fabricated scenarios. Understanding how they work helps beginners recognise both their creative potential and their ethical risks.
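The mapping described above can be sketched in miniature. Classic face-swap deepfakes use one shared encoder (which learns identity-independent features like expression and pose) and a separate decoder per person; swapping means encoding person A's face and decoding it with person B's decoder. The sketch below uses random linear maps and toy dimensions purely for illustration - real systems use deep convolutional networks trained on thousands of images.

```python
import numpy as np

rng = np.random.default_rng(0)
FACE_DIM, LATENT_DIM = 64, 8  # toy sizes; illustrative assumption only

# One shared encoder, one decoder per identity (here: untrained random matrices)
shared_encoder = rng.standard_normal((LATENT_DIM, FACE_DIM))
decoder_a = rng.standard_normal((FACE_DIM, LATENT_DIM))  # would be trained on person A
decoder_b = rng.standard_normal((FACE_DIM, LATENT_DIM))  # would be trained on person B

def encode(face):
    """Compress a face into a compact code capturing expression and pose."""
    return shared_encoder @ face

def swap(face_of_a):
    """The 'deepfake' step: A's expression code, rendered with B's decoder,
    yields B's identity wearing A's expression."""
    return decoder_b @ encode(face_of_a)

face_a = rng.standard_normal(FACE_DIM)  # stand-in for a real face image
fake = swap(face_a)
print(fake.shape)  # (64,)
```

The key insight for beginners: because the encoder is shared, the latent code carries no identity, so whichever decoder renders it determines whose face appears - which is exactly why training data of a person enables convincing impersonation of them.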
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used. All copyrights acknowledged. This is not professional, financial, personal, or medical advice. Please consult domain experts before making decisions. Feedback welcome!]
