In recent weeks, YouTube has taken down more than 1,000 deepfake videos that were used as fraudulent ads. In these videos, Joe Rogan, Taylor Swift, and Steve Harvey appeared to be promoting Medicare scams. An investigation by 404 Media identified the advertising ring behind the content, which eventually led to the takedown. Altogether, the videos had received close to 200 million views.
YouTube’s Response: Acknowledging the Problem
In response, YouTube has acknowledged the problem and stated that it is making a substantial effort to counter AI-generated celebrity fraud advertisements. The platform assured users that it takes the issue seriously and is working to stop the spread of this kind of deepfake content.
As a recent incident involving non-consensual deepfake pornography featuring Taylor Swift shows, the deepfake problem goes beyond YouTube. On another platform, the obscene content went viral, receiving over 45 million views and 24,000 reposts before being removed after about 17 hours. According to a report by 404 Media, the images may have originated in a Telegram group where users share AI-generated sexual images of women.
A Cybersecurity Challenge
According to cybersecurity insights from companies like Deeptrace, 96% of deepfakes are pornographic, and in most cases they depict women without their consent. This figure highlights how difficult it will be to address the improper use of AI-generated content online.
YouTube is reportedly working actively to stop the misuse of its platform for deepfake celebrity advertisements. The incident nonetheless raises questions about how difficult it is to stop deepfake content across different websites. Platforms must continually invest resources and adapt as the technology advances in order to stay ahead of harmful and misleading practices.
The Rising Threat of Deepfakes
The incident has also shed light on the need for strong cybersecurity measures that can quickly identify and remove fake content. It likewise shows how internet platforms should handle cases of misused AI-generated content, particularly when celebrities and potentially dangerous schemes are involved. The prevalence of non-consensual deepfake porn underscores the critical need for preventative measures that shield people from invasions of privacy and damage to their reputations. Check out this deepfake of Jennifer Aniston giving away MacBooks here.
Deepfakes pose a menace in the digital world by putting privacy and data at risk. They have fueled widespread misinformation and, worse, disinformation, creating an unprecedented need for vigilance and skepticism.
Still, YouTube’s removal of more than 1,000 deepfake scam ad videos is a step in the right direction toward tackling the larger problem of misleading content on the internet. The incident underscores the need for platforms, cybersecurity professionals, and regulatory agencies to work together on the complex problem of deepfakes, as well as the continuous effort required to keep ahead of the evolving technology.