The Post - Athens, OH
The independent newspaper covering campus and community since 1911.

Letter to the Editor: Dangers of deepfake technology must be addressed

Have you ever seen a video of a celebrity or politician that seemed so absurdly out of character that you thought it must be fake? If so, it’s possible you were viewing a deepfake. 

For example, popular comedian Jordan Peele created a deepfake video of President Obama calling Donald Trump a “dipsh-t” before warning the public against the scary accuracy and danger of deepfake technology. 

If Peele hadn’t added this disclaimer, it’s very possible viewers would have believed this was really Obama. According to The Guardian, “deepfakes use a form of artificial intelligence called deep learning to make images of fake events.” 

They are essentially computer-generated videos/photos using someone’s likeness or images, and oftentimes they are used for malicious purposes. The anonymity of the internet, as well as the online disinhibition effect, have led to deepfake technology’s progression and with it a variety of ethical concerns such as revenge porn, societal distrust and media literacy issues.

Deepfake technology has progressed at a skyrocketing pace in recent years. As of 2019, over 15,000 deepfake videos had been discovered online, 96% of them pornographic. About 99% of those projected celebrities’ faces onto porn stars to make it seem as if the celebrity was starring in an adult video. 

Aside from pornography, deepfake technology is being used to create voice and video clones of public figures, particularly influential people like politicians, to slander them. These fabricated videos and photos can be nearly impossible to distinguish from real ones, spreading harmful misinformation and potentially damaging people’s reputations. 

The accessibility of this new technology, as well as its rapid progress, raises serious ethical concerns. One concept that can help explain why people online feel so comfortable generating these types of videos is the online disinhibition effect. 

According to psychiatrist Dr. John Suler, this theory holds that people are willing to say things online that they wouldn’t necessarily say in the real world because of the lack of significant consequences. Furthermore, the internet provides a blanket of anonymity that allows users to hide their identities, leading to an increase in radical and harmful content. Deepfakes are only one consequence of the internet’s allowance for invisibility, but a consequence that is affecting more than just a few individuals. 

While AI-generated media has the ability to negatively impact specific people, it has also influenced the internet as a whole, causing a lack of trust in what is real and an environment where people feel comfortable violating others’ consent and privacy. 

Along with these trust and consent issues, media literacy is threatened by the progression of deepfakes. Media literacy is defined as the ability to decode media messages and assess the influence of these messages on our feelings and behavior. Media literacy skills are already an issue, but with these believable and realistic deepfake photos and videos, it has become even more difficult to analyze and form conclusions about media online. 

While this technology can be used for good, like entertainment or educational purposes, the harms of deepfake usage far outweigh its benefits. I would like to discuss two specific, serious impacts of deepfake technology on society: revenge porn and societal distrust.

Deepfakes have plunged society into a world of distrust and confusion. Forbes cites a 2019 Pew Research Center report that states, “63% of Americans said made-up or altered videos and images create confusion about the facts of current issues and events.” This kind of confusion could negatively alter the state of democracy in the U.S. as well as governmental decisions in other countries.

If people believe AI-generated videos of political candidates, there is a serious threat to their standing. Scott Hermann, the CEO of identity-theft protection corporation IDIQ, said deepfakes are often used to discredit people in the public eye and spread misinformation. He added this could be “especially dangerous when used for political motivation, as this technology can make it seem like a political figure has said or done almost anything.”

This kind of media being released to the public, as well as the confusion that comes with identifying it as real or fake, can cause serious societal distrust, particularly in the government and the people who run it. It could lead to a situation where anyone could say a video or photo is simply a deepfake even if it’s 100% real. 

For example, Donald Trump claimed a video of him bragging about grabbing a woman’s genitals wasn’t real, and at this point in the technology’s progression, that’s not an entirely outlandish claim. AI-generated deepfake media has the potential to completely alter the public’s view of certain people and the government and ultimately inspire increased distrust in society. 

As more people possess the technology and skills required to create this media, society struggles to keep up with and combat its effects. And beyond public figures and politicians, this is becoming a serious threat to regular people as well.

Although deepfake pornography of celebrities is harmful and potentially reputation-damaging, regular people are also at risk at the hands of this technology. The first deepfake was created in 2017 by a Reddit user to put celebrities’ faces onto porn stars. The literal origin story of deepfake technology is rooted in pornography and the desire to see women naked without their consent. Going further than personal sexual gratification, an app named DeepNude was created in 2019 specifically to turn any photo into a nude photo, including regular women. 

Thankfully this app was deleted mere days after its creation, but the fact that this technology exists is frightening, to say the least. These doctored photos could be used to ruin the reputation of their subject or to perpetrate revenge porn, another serious and malicious possibility. That threat extends to average individuals in society, not just those in the public eye. 

While it may seem that this technology couldn’t possibly affect you, I would like to emphasize the potential for its malicious consequences to creep into your own backyard. This is why education and knowledge of this ever-progressing technological advancement are paramount. As a victim of pornography deepfakes myself, I know how demoralizing, violating and harmful they can be, and no one should ever have to feel that way. That is why I am proposing a solution: technology and policies need to be put in place in order to detect and ban this media from public platforms.

I believe that to combat these harmful consequences, organizations and social platforms should all be required to implement technology that identifies and blocks deepfake media, or to require that deepfakes carry a disclaimer identifying them as fabricated. 

Technology firms are working on systems that detect fakes, and blockchain ledger systems could “hold a tamper-proof record of videos, pictures and audio so their origins and any manipulations can always be checked” according to The Guardian.

Paul Kan, an AI business consultant, said there isn’t a silver bullet solution to this problem, but rather a combination of deepfake-detection software, awareness, education, regulation and policies should be adopted to combat it as effectively as possible. These technologies and policies should be put in place by all social/public platforms to provide as much protection against harmful deepfakes and misinformation as possible. Aside from these, I believe a mandatory disclaimer/watermark should be applied to every deepfake video/photo to eliminate the potential for any confusion whatsoever.

These solutions will hopefully reduce the harmful effects of this technology and lessen the societal distrust and reputation-damaging media that deepfakes have created. Next time you view a shocking or polarizing video on social media, don’t automatically believe it: pull out a magnifying glass and keep in mind that it could be a deepfake.

All Content © 2016-2024 The Post, Athens OH