Union Minister Rajeev Chandrasekhar announced on March 2 that artificial intelligence platforms must obtain government permission before launching AI products in India. Chandrasekhar emphasized the need for rigorous scrutiny before AI products are released to the public, likening it to safety standards for cars or microprocessors.
The government’s advisory, issued on March 1, requires intermediaries to ensure compliance immediately and to submit a status report within 15 days. It also mandates that AI-generated content carry a unique identifier so the originator of misinformation or deepfakes can be traced.
Chandrasekhar cited recent controversies, such as the response of Google’s Gemini model to a query about Prime Minister Narendra Modi, as catalysts for the regulations. He pointed to violations of IT rules and criminal codes and urged platforms to take accountability for the content they publish.
Google responded to the controversy, acknowledging shortcomings in Gemini’s answers to prompts about current events and political topics, and said it was continuing to work on improving the model’s reliability in such areas.
These measures underscore the government’s commitment to regulating AI deployment and ensuring transparency and accountability in AI-generated content.