AI Content Rules Updated as Government Prepares to Tighten Grip on Deepfakes: Know the New Regulations

Under the new proposal, major social media platforms such as Facebook, X (Twitter), and YouTube will be held accountable. Platforms with 5 million or more users will be responsible for identifying and flagging AI-generated fake content.

Bharat

Patrika Desk

Oct 25, 2025

Artificial Intelligence (Image: Freepik)

Artificial Intelligence (AI), while a very useful tool, also carries several negative implications. The government has therefore adopted a strict stance on AI and deepfake content. The IT Ministry has introduced a new proposal to curb misinformation spread through AI-generated audio, video, and images. The ministry says that fake content spreading rapidly on social media is becoming a threat to both society and democracy.

Social Media Platforms to Be Held Accountable

Under the new proposal, major social media platforms like Facebook, X (Twitter), and YouTube will be held accountable. Platforms with 5 million or more users will be responsible for identifying and flagging AI-generated fake content. The IT Ministry has prepared a draft of these AI-related rules and has sought suggestions and feedback from all stakeholders by November 6.

Mandatory Labelling and Authentication for AI Content

Under the new rules, videos, audio, and photos created with AI will need to be labelled before they are uploaded, so that viewers know they were generated by AI. Users will also have to verify their identity before uploading content. The draft further states that any AI content must contain at least 10% original material.

Deepfakes Raise Concerns in Parliament

The increasing incidents of deepfake videos have also raised concerns in Parliament. IT Minister Ashwini Vaishnaw recently stated that the images and voices of prominent personalities are being misused, affecting their personal lives. He added that the government is taking concrete steps to identify and prevent such fake content. The government believes that misleading, realistic-looking content created with AI can be used to tarnish reputations during election campaigns, commit financial fraud, and inflame public sentiment. Monitoring and controlling such content has therefore become a priority.