Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, clarified the Centre’s directive regarding artificial intelligence (AI) platforms.
What Happened: Taking to the social media platform X, Chandrasekhar said the advisory, which had caused considerable confusion among tech companies, including startups, about whether they needed approval before launching their AI platforms, is aimed specifically at significant, large platforms and does not apply to startups.
The minister emphasized that the advisory seeks to prevent untested AI platforms from being deployed on the Indian internet. He explained that the process of seeking permission, along with labelling and consent-based disclosure to users about untested platforms, serves as an “insurance policy” for platforms, protecting them from potential consumer lawsuits.
Chandrasekhar reaffirmed the government’s commitment to ensuring the safety and trust of India’s internet as a shared goal among the government, users, and platforms.
What does the advisory say: Issued on March 1 by the Ministry of Electronics and Information Technology (MeitY), the advisory mandates that all AI models, large language models (LLMs), software using generative AI, and any algorithms currently in the beta stage or deemed unreliable must obtain explicit government permission before deployment to Indian users. This first-of-its-kind advisory globally aims to prevent bias, discrimination, or threats to electoral integrity arising from AI and related technologies.
While the advisory is not legally binding, Chandrasekhar hinted at the future of regulation, suggesting that non-compliance could eventually lead to legal and legislative consequences. The advisory follows incidents of reported bias by AI platforms, including a notable case involving Google’s AI model Gemini, which sparked a response from union ministers Ashwini Vaishnaw and Chandrasekhar, emphasizing that Indian users should not be subject to experimentation with unreliable platforms.
Furthermore, the advisory calls for all platforms deploying generative AI to label the potential fallibility or unreliability of their output and recommends a “consent popup” mechanism to inform users explicitly about these issues. It also outlines requirements for labelling content that could be used to spread misinformation or create deepfakes, ensuring the origin of such synthetic content can be traced.
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.