"AI is too important not to regulate, and too important not to regulate well," - Sundar Pichai, CEO, Google
Incident with Grok
The Indian government issued a notice to Grok after the chatbot generated sexually explicit content. The episode exposes a gap in the law: regulations typically respond only after harmful content appears online rather than preventing its creation.
Limitations of Rules
Current regulations focus on the platforms that distribute content rather than on the developers of the tools. Although the IT Rules require labelling, they remain reactive and overlook the creators whose software enables such material to be generated in the first place.
The burden on distribution platforms
Subimal Bhattacharjee notes that the framework places the burden on intermediaries: social media companies must handle takedowns while the tools themselves remain available through app marketplaces, allowing deepfake apps to keep operating despite existing policies.
Shifting focus to system design
Calls for regulating system design aim to compel developers to build tools that prevent harmful outputs at the source. Experts suggest platforms owe users due diligence, with transparency and labelling as key measures for reducing harm.
Future of AI accountability
The absence of dedicated legislation forces platforms to act as unofficial regulators. Shiv Sapra argues this is unsustainable and calls for laws that address the conduct and intent of those writing the prompts, so as to ensure long-term digital stability.
Summary
The controversy surrounding Grok illustrates the inadequacy of reactive AI laws. India's current IT Rules place the regulatory burden on distribution platforms while ignoring tool creators. Experts advocate for proactive system design and accountability to prevent the generation of harmful synthetic content.
Food for thought
Should the primary legal responsibility for AI generated harm lie with the prompt user or the company that designed the tool?
AI concept to learn: Safe Harbour Provision
This legal principle protects online service providers from liability for content posted by users. A platform that follows the rules, such as removing illegal material when notified, keeps this protection. Losing it means the company could be sued directly for anything its AI generates.
Check our posts on Safe Harbour; click here
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used. All copyrights acknowledged. This is not professional, financial, personal, or medical advice. Please consult domain experts before making decisions. Feedback welcome!]
