Artificial Intelligence (AI) has made it very easy to create fake videos and images that look completely real. These are called ‘deepfakes’, and they are increasingly used to mislead people or spread misinformation. Taking this threat seriously, the IT Ministry of the Government of India has issued strict new rules for large social media platforms such as Facebook, Instagram and YouTube. The direct aim of these rules is to ensure that if a photo or video has been created with AI, an ordinary user can tell at a glance that it is not genuine. The government has made it clear that keeping the internet safe is now the responsibility of these companies, and that they must act immediately on any misleading content.
Key points of the new rules
According to the new rules issued by the Ministry, platforms must comply with the following requirements:
- Mandatory identification: If a photo or video is created with AI, social media companies must clearly mark it as AI-made. In addition, a digital identifier (code) must be embedded inside the file so that its authenticity can be verified at any time.
- Action within 3 hours: If the government or a court identifies a false AI video or ‘deepfake’ and orders its removal, companies must take it down from their platform within just 3 hours. This means a deepfake of any person can be taken down immediately, leaving no scope for delay.
- Labels cannot be removed: Once content is labelled ‘AI generated’, social media platforms cannot remove the label or hide that information.
- Automated detection software: Companies must now deploy special software that can automatically identify harmful, obscene or fraudulent AI videos being spread. It will be the companies' responsibility to stop such content from circulating.
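The identification requirement above calls for a verifiable digital code embedded with the file. The rules do not specify a scheme, but a common building block for such verifiable tags is a keyed hash (HMAC) over the media bytes, stored alongside a plain-text label. The sketch below is purely illustrative; the key name, tag format and the use of HMAC are assumptions, not the Ministry's specification.

```python
import hmac
import hashlib

# Hypothetical platform signing key -- an assumption for illustration only.
SECRET_KEY = b"platform-signing-key"

def make_provenance_tag(media_bytes: bytes) -> dict:
    """Create an 'AI-generated' label plus a keyed digest of the file bytes."""
    digest = hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()
    return {"label": "AI-generated", "digest": digest}

def verify_provenance_tag(media_bytes: bytes, tag: dict) -> bool:
    """Recompute the digest and compare in constant time to check authenticity."""
    expected = hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["digest"])

# Any alteration of the file invalidates the tag, which is what lets
# the label's authenticity "be verified at any time".
media = b"...fake image bytes for illustration..."
tag = make_provenance_tag(media)
print(verify_provenance_tag(media, tag))         # True
print(verify_provenance_tag(media + b"x", tag))  # False: file was altered
```

Real-world systems (for example, provenance standards used by some platforms) embed such tags inside image metadata rather than alongside the file, but the verification idea is the same.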
Making users aware is mandatory
Under the new rules, the responsibilities of social media companies have grown further. They must remind their users every three months about the serious consequences of misusing AI, and this information must be presented in simple, clear language. In addition, if companies become aware of AI content that violates the rules, they must immediately take strict and effective steps to stop it.
Legal Compliance and Security
According to the new rules, social media companies must now ensure that no one creates or shares deepfake AI photos or videos on their platforms in violation of the laws of the country. Companies must be especially careful that AI is not used to spread content that violates stringent laws such as the Bharatiya Nyaya Sanhita, the POCSO Act (for the protection of children) or the Explosive Substances Act. Social media platforms are now accountable for any AI content that breaks the law.
With the new rules in force, this responsibility falls not only on the companies but also on ordinary users like us. If you post a photo or video made with the help of AI, you must clearly state that it was made with AI. Many social media apps already provide buttons or features for this, and with the government's strictness, fraud on the internet should decrease and the online space should become cleaner and more trustworthy.