A parliamentary committee calls for a comprehensive law on artificial intelligence

By Anand Kumar, Senior Journalist Editor
Anand Kumar is a Senior Journalist at Global India Broadcast News, covering national affairs, education, and digital media. He focuses on fact-based reporting and in-depth analysis of current events.
5 Min Read

A parliamentary committee has recommended that the government explore a comprehensive law to regulate artificial intelligence, even as the government maintains that existing laws are adequate to deal with emerging risks.

The recommendations are part of a report finalized by the Standing Committee on Communications and Information Technology that is expected to be submitted to Parliament (Reuters)

These recommendations are part of a report finalized by the Standing Committee, which is expected to be submitted to Parliament. The committee met with officials of the Ministry of Electronics and Information Technology (MeitY) on Friday to consider and approve the report.

In the report, reviewed by HT, the committee acknowledged that the government was doing its part to prevent the misuse of AI “in financial fraud, intimidation or in deep audio videos etc”, but said it still felt the need to “explore the possibility of comprehensive legislation to prevent the misuse of AI”.

This contradicts the government’s position that a host of existing laws such as the Information Technology (IT) Act, the Digital Personal Data Protection (DPDP) Act, and India’s criminal laws already cover risks such as bias, misinformation and privacy harms.

The government told the committee that India, with more than 900 million internet users, is already the second largest online market in the world. AI is expected to add $450 to $500 billion to India’s GDP by 2025, and nearly $967 billion by 2035, roughly 10% of an economy projected to reach $5 trillion. Officials also told the committee that artificial intelligence and automation could generate about 4.7 million new technology jobs by 2027, a number similar to the current size of India’s IT workforce.

MeitY recently amended the IT Rules to include Synthetically Generated Information (SGI) or AI-generated content within its scope. The rules require platforms to clearly label AI-generated content, include metadata for traceability, and ensure users disclose when content is synthetic. The new rules also introduce stricter timelines, including removing illegal AI content within two to three hours, while reducing timelines for redressing grievances.

In response to the committee’s concerns about whether foreign AI models such as Grok, ChatGPT and DeepSeek should be restricted from training on local data while operating in the Indian market, MeitY acknowledged that foreign AI companies have access to Indian data to train their models. It added, however, that many global AI systems are trained largely on English-language internet datasets, which do not adequately reflect India’s linguistic and cultural diversity.

MeitY also added that existing safeguards under the DPDP Act apply to these platforms, requiring user consent for data processing, transparent privacy policies, and rights such as access and deletion. It added that cross-border data transfers are limited only to notified countries, and entities handling Indian data abroad must ensure adequate protection, with these rules applying to both domestic and foreign AI platforms.

The ministry also acknowledged that companies building large language models often do not disclose the datasets used for training. It cited ongoing litigation, including cases involving OpenAI in India, where courts have yet to decide whether training AI systems on copyrighted material without permission is legally permissible.

Indian news agency ANI filed a case against OpenAI in the Delhi High Court in late 2024, accusing the maker of ChatGPT of using copyrighted content without permission to train its AI models. The issue has since expanded, with the Digital News Publishers Association (DNPA), which represents several major media houses, stepping in, alleging that OpenAI is collecting, storing and reproducing news content without licenses or attribution, potentially undermining the business of digital journalism. The case is ongoing.

Separately, the Department for Promotion of Industry and Internal Trade (DPIIT), under the Ministry of Commerce, has set up a committee to review the intersection of generative AI and copyright law. In December 2025 it released the first part of its working paper on generative AI and copyright, proposing a mandatory licensing mechanism that would allow AI developers to use legally accessed copyrighted content for training while ensuring that creators receive compensation. MeitY supported this view.
