"I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening." - Sam Altman, CEO of OpenAI
Government finds response unsatisfactory
The Indian government has expressed strong dissatisfaction with the response provided by X regarding the misuse of its AI chatbot Grok. The Ministry of Electronics and Information Technology found the platform's initial reply to be vague and insufficient. Officials noted that the company merely recycled existing policies instead of detailing the concrete actions taken to address the issue.
Concerns over deepfake generation
Top government sources highlighted a gross misuse of the platform involving the generation of obscene and sexually explicit content. The specific issue involved the nudification of women through AI prompts, which violates digital safety and privacy norms. This alarming trend prompted the ministry to demand immediate and specific corrective measures to stop the creation of such harmful material.
Ministry issues second notice
Displeased with the lack of clarity, the ministry issued a second notice to the platform demanding specific evidence of the action taken. A senior official stated that general policy statements are not enough and that the government requires details of exactly which steps were implemented. The follow-up notice emphasized that the first response was effectively negated by its lack of substance.
Strict deadlines for compliance
The ministry initially issued a 72-hour ultimatum on January 2 regarding the removal of the explicit content. Although X requested a three-day extension, the government tightened the timeline by granting only a final 24-hour deadline. Despite submitting a reply claiming compliance with Indian laws, officials found the submission severely lacking in specific details about the enforcement actions taken.
Risk of losing legal immunity
Authorities have warned that continued evasion will lead to the loss of safe harbour protection under Section 79 of the IT Act. Stripping this legal shield would make both the company and its executives liable for criminal prosecution under the IT Act and the Bharatiya Nyaya Sanhita for the illegal content generated by their AI tool.
Summary
The Indian government is considering legal action against X after deeming its response to Grok AI's misuse inadequate. Authorities have threatened to revoke the platform's legal immunity if it fails to provide specific evidence of action taken against the generation of obscene deepfake content and the violation of user privacy.
Food for thought
If platforms are held criminally liable for the output of their AI models, will this effectively halt the public release of open-ended generative AI tools?
AI concept to learn: Safe harbour
Safe harbour is a legal provision that protects internet intermediaries, such as social media platforms, from liability for content posted by third parties or users. It ensures that platforms are not treated as the publishers of user-generated content, provided they observe due diligence and comply with government orders to remove illegal material.
[The Billion Hopes Research Team shares the latest AI updates for learning and awareness. Various sources are used. All copyrights acknowledged. This is not professional, financial, personal or medical advice. Please consult domain experts before making decisions. Feedback welcome!]