Deepfake Viral Video: YouTuber Transforms Into Taylor Swift and Sitharaman in Seconds; Here's How He Exposed the Reality of Deepfakes

In the video, shared on the social media platform X, Ishaan Sharma is seen transforming into different global and Indian celebrities in a matter of seconds. He warns viewers that not everything seen on social media is real anymore. His message was clear: face, age, gender and voice can all be changed by AI, and the viewer may not even be able to tell.

Why is concern about deepfakes increasing?

AI deepfake technology has become so advanced that it can make a person appear to say things they never said, create fake videos that spread misinformation, and damage the reputations of public figures. The impact of deepfakes on political discourse was recorded in 11 countries in 2023. Tech experts believe the risk is greater in countries where technical literacy is low.

Awareness or strict control?

After the video went viral, two kinds of reactions emerged on social media. Some users say the government should run AI awareness campaigns in all languages so that people can learn to identify fake videos. Others argue that sweeping government control over AI is not practical, and suggest instead that disclaimers be made mandatory on AI-generated content.
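
As a rough illustration of what such a mandatory disclaimer could look like in practice, the sketch below embeds an "AI-generated" label into a PNG file's metadata using Python's Pillow library. The key names and wording are assumptions made for illustration, not any official standard; real provenance efforts such as C2PA define far richer metadata schemes.

```python
# Sketch: embedding an "AI-generated" disclaimer into PNG metadata.
# The key/value scheme here is illustrative only, not an official standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, tool_name: str) -> None:
    """Copy an image, adding text chunks that declare it AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-disclaimer", "This image was generated or altered by AI.")
    metadata.add_text("ai-tool", tool_name)  # hypothetical field name
    image.save(dst_path, pnginfo=metadata)

def read_disclaimer(path: str) -> str | None:
    """Return the disclaimer text if the image carries one."""
    return Image.open(path).text.get("ai-disclaimer")  # PNG text chunks

if __name__ == "__main__":
    label_as_ai_generated("face_swap.png", "face_swap_labeled.png", "demo-model")
    print(read_disclaimer("face_swap_labeled.png"))
```

A label like this is trivially strippable, which is part of why the debate also covers robust watermarking and platform-level detection rather than metadata alone.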

The big dangers of deepfakes

1. Impact on democracy

Deepfake technology can be extremely dangerous in an election environment. Suppose a fake video of a leader goes viral in which he appears to make inflammatory statements; it could sway voters' opinions. Social tensions may rise, election results may be affected, and trust between the media and the public may erode.

In 2023, AI-generated content was used to influence political discourse in many countries. The problem is not just the fake video itself but the breakdown of trust: when people start treating everything as potentially fake, even genuine information begins to look suspicious.

2. Personal harm

Deepfakes can show an ordinary person or a celebrity in a situation they were never in: faces added to objectionable content without consent, false statements spread through fake audio, and professional reputations tarnished. This can lead to mental stress, social humiliation and legal complications. Women and public figures are the most affected by this threat.

3. Cybersecurity threats

Deepfakes are not limited to video. AI can now clone a voice almost exactly, which makes several attacks easy: faking a CEO's voice to get money transferred out of a company, impersonating a bank or government official to commit fraud, and issuing fake statements in an institution's name. Such cases can cause both financial losses and institutional instability, which is why cybersecurity experts are now building AI-based detection systems.
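
The article does not describe how these detection systems work, but one common approach is frame-level classification: sample frames from a video and score each with a binary real-versus-fake image classifier. The sketch below assumes a hypothetical fine-tuned checkpoint, deepfake_classifier.pt; aside from the standard OpenCV and PyTorch calls, everything here is an assumption for illustration.

```python
# Minimal sketch of frame-level deepfake screening.
# Assumption: "deepfake_classifier.pt" is a hypothetical binary classifier
# (real vs. fake) saved as a full torch model; no library ships this file.
import cv2
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def fake_probability(video_path: str, model: torch.nn.Module, step: int = 30) -> float:
    """Average the classifier's 'fake' score over every `step`-th frame."""
    capture = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV gives BGR
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                logits = model(batch)  # shape (1, 2): [real, fake]
                scores.append(torch.softmax(logits, dim=1)[0, 1].item())
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    model = torch.load("deepfake_classifier.pt", weights_only=False)  # hypothetical
    model.eval()
    print(f"Estimated fake probability: {fake_probability('clip.mp4', model):.2f}")
```

Production systems look at far more than single frames, for example temporal inconsistencies, blink patterns and audio-video mismatch, but the overall pipeline shape is similar.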

Is it possible to use deepfakes positively?

Yes, if it is used responsibly and transparently. In film production, for example, expensive VFX scenes can be created at a lower cost. It can also serve creative purposes such as bringing historical characters to life, de-aging actors, or digitally recreating performers.

Additionally, systems like FORGE use deepfake-style generation for security. By creating a fake digital environment, they can confuse attackers, protect sensitive data and trap intruders. This is an AI-versus-AI model, in which AI is used to strengthen defences against AI-driven attacks.
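
FORGE's actual machinery is not described here, so the snippet below is only a toy illustration of the decoy idea it gestures at: generating plausible-looking fake records so that an intruder who exfiltrates data cannot tell which entries are real. Every field, name and value is invented for this sketch.

```python
# Toy illustration of the decoy-data idea: mix fake records among real ones
# so that exfiltrated data becomes unreliable. All fields here are invented.
import random
import secrets

FIRST = ["Aarav", "Diya", "Kabir", "Meera", "Rohan", "Sara"]
LAST = ["Sharma", "Iyer", "Khan", "Patel", "Reddy", "Das"]

def make_decoy_record() -> dict:
    """Fabricate one plausible but fake employee record."""
    return {
        "name": f"{random.choice(FIRST)} {random.choice(LAST)}",
        "employee_id": f"EMP-{random.randint(10000, 99999)}",
        "api_key": secrets.token_hex(16),  # looks real, opens nothing
        "decoy": True,  # server-side marker, never exported with the data
    }

def seed_decoys(real_records: list[dict], ratio: float = 3.0) -> list[dict]:
    """Return real records interleaved with `ratio` times as many decoys."""
    decoys = [make_decoy_record() for _ in range(int(len(real_records) * ratio))]
    mixed = real_records + decoys
    random.shuffle(mixed)
    return mixed

if __name__ == "__main__":
    real = [{"name": "Actual Person", "employee_id": "EMP-00001",
             "api_key": secrets.token_hex(16), "decoy": False}]
    for record in seed_decoys(real):
        print(record)
```

Decoy credentials like these can also double as canary tokens: any attempt to use one is itself a signal that a breach has occurred.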
