The Election Commission of India (ECI) has called a meeting with top executives of major social media platforms on March 11 at Nirvachan Sadan to address the growing threat of misinformation and deepfakes generated by artificial intelligence ahead of the upcoming Assembly elections.

The meeting comes as the Election Commission of India, in coordination with the India International Institute of Democracy and Election Management (IIIDEM), is developing a dedicated technology framework to identify and combat misinformation generated by artificial intelligence during elections.
Senior ECI officials said the discussions will focus on strengthening the monitoring of election-related content, improving response times to complaints and improving coordination between digital platforms and electoral authorities during the campaign period. The meeting will be held under the theme of examining the “opportunities and challenges” of social media use during elections.
Officials noted that this will be the first time the commission has held a direct, structured meeting with major technology companies on election-related content issues. To date, the ECI has largely addressed such concerns by issuing advisories while enforcing the Model Code of Conduct (MCC). While the MCC is in force, district collectors, who also serve as district nodal officers during elections, can issue notices to candidates, political parties or digital platforms and direct the removal of misleading or illegal content.
Senior representatives from global technology companies are expected to participate, including officials from Meta Platforms, which runs Facebook, Instagram and WhatsApp; Alphabet, the parent company of Google and YouTube; and X Corp, which operates the platform X (formerly Twitter).
Officials said the commission is exploring the development of specialized software and operational protocols to detect fake videos, synthetic audio and digitally manipulated material during elections, both by building its own technological tools and by coordinating with the digital platforms where such content circulates.
According to officials, the proposed system will analyze digital content to determine whether a video, audio clip or image is original, artificially created using AI tools or edited in a misleading manner. It will be designed to flag alterations such as fabricated speeches, digitally altered facial expressions, cloned audio and instances where real footage has been selectively edited or spliced with unrelated visuals to change its meaning.
At present, the ECI’s main election management platform, ECINET, does not have any built-in mechanism to authenticate digital content or identify manipulated material such as deepfakes or AI-generated media.
The initiative forms part of a broader effort by the commission to strengthen institutional preparedness for the growing role of artificial intelligence in electoral politics. Officials in the ECI’s IT department said AI tools could be used to manipulate public discourse by creating realistic but false political content, impersonating candidates or presenting genuine statements in misleading contexts.
The proposed detection system is expected to help election officials verify the authenticity of viral content and identify instances where genuine material has been edited, selectively clipped, or combined with unrelated elements to create misinformation.
The initiative is also linked to a broader plan being explored at IIIDEM to build long-term institutional capacity in AI and elections, including research and training programs and the possibility of creating a global knowledge center on AI-related electoral risks.
The meeting takes place as the states of West Bengal, Tamil Nadu, Kerala and Assam and the union territory of Puducherry prepare for assembly elections in the coming months, raising concerns within the commission about the increasing use of artificial intelligence tools to create misleading political content. Election authorities are particularly concerned about fake videos, synthetic audio clips and manipulated photos that could falsely portray candidates, fabricate statements or create false endorsements.