The Indian government has directed technology companies developing new artificial intelligence (AI) tools to obtain government approval before releasing them to the public.
The advisory, issued by the Indian IT ministry on March 1, states that approval is required for AI tools considered “unreliable” or still under trial, and that such tools should be labeled to warn users that they may return inaccurate answers. The ministry has also asked platforms to ensure their tools do not threaten the integrity of the electoral process, with general elections expected this summer.
This advisory follows criticism of Google and its AI tool Gemini by one of India’s top ministers. Gemini has been accused of providing “inaccurate” or biased responses, including one that characterized Indian Prime Minister Narendra Modi as a “fascist.” Google has apologized for the shortcomings of Gemini and acknowledged that it may not always be reliable, particularly when it comes to current social topics.
Rajeev Chandrasekhar, India’s deputy IT minister, has emphasized the importance of safety and trust in platforms, stating that being “unreliable” does not exempt them from legal obligations. He warned that there should be legal consequences for platforms that enable or produce unlawful content.
In November, the Indian government announced plans to introduce new regulations to combat the spread of AI-generated deepfakes ahead of the upcoming elections. The latest AI advisory, however, has drawn concern from India’s tech community, which warns that the country risks regulating itself out of its position as a leader in the tech industry.
Chandrasekhar responded to these concerns, saying the advisory is meant to inform companies deploying AI platforms still in the experimental phase of their obligations under Indian law and the potential legal consequences of failing to meet them. He emphasized that the goal is to protect both the platforms and their users.
On February 8, Microsoft partnered with Indian AI startup Sarvam to bring an Indic-voice large language model to its Azure AI infrastructure, with the aim of reaching more users in the Indian subcontinent.