After actor Val Kilmer lost his voice to throat cancer, Sonantic’s deepfake technology allowed him to “speak” again in 2021. From lost voices to alternate realities, deepfake technology is revolutionizing the way we perceive media. Using artificial intelligence, it synthesizes hyper-realistic human experiences, including seamlessly blending features onto other people's bodies and manipulating sound to create a convincing experience.
But there’s a darker side to deepfake technology and AI as a whole, one that speaks to issues of consent and privacy and primarily targets women. In fact, the first-ever mention of a deepfake came from a Reddit user in 2017, who used the technology to create non-consensual porn of women.
As deepfakes and AI become more sophisticated, reports of unethical misuse of this technology keep piling up. Let’s take a look at the multiple ethical implications of deepfakes and AI, as well as the benefits of this technology and potential solutions.
Deepfakes and AI technology present several ethical challenges, including issues surrounding consent, privacy, disinformation, and propaganda. Although AI was arguably created to better society, this technology carries ethical implications with the potential to significantly alter our current way of life.
One of the most significant implications of deepfakes and AI technology is the issue of consent and privacy. Deepfakes have the potential to violate individuals' privacy by creating fake videos or images that appear to depict them engaging in activities that they did not consent to.
More often than not, individuals have not given permission for their image or voice to be used in a particular context. Consent is a central issue with most deepfakes, but particularly in regard to non-consensual pornography.
The use of deepfakes without consent raises significant questions about privacy and personal autonomy. The creation and dissemination of deepfakes that portray individuals in a false light violates their privacy and subjects them to potential harm. It undermines their ability to control how their image is used and how they are perceived by others.
The ease with which deepfakes can be created and distributed means that anyone can be a target, regardless of their status or background. This has implications for individuals, as well as society as a whole, as it undermines trust in the authenticity of digital media.
However, it's important to note that deepfakes are just one aspect of AI that raises concerns regarding consent and privacy; AI as a whole raises the same concerns.
This technology has the potential to collect, store, and use vast amounts of personal data. Therefore, companies that develop and use AI must ensure that they are transparent about how they collect and use data, and they must obtain consent from individuals before doing so.
It’s not news that emerging technologies tend to reinforce existing power imbalances and the systems of oppression present in our society. In the last few years, we have seen this with deepfakes and AI.
Women and minorities are often the targets of deepfakes that depict them engaging in sexually explicit or illegal activities, which can have lasting consequences for their personal and professional lives.
The sheer volume of non-consensual AI pornography being created and disseminated reinforces the presence of rape culture in our society. Deepfakes and AI are used to create realistic fake porn videos and images that can be used to harass, intimidate, or blackmail individuals, particularly women.
According to a report by Sensity AI, 96% of deepfakes were non-consensual porn, and of those, 99% were made of women. Deepfakes have targeted women since their inception: the first mention of a deepfake was by a Reddit user in 2017, who used the technology to create non-consensual porn videos. Such material can be particularly damaging to women's reputations and can cause significant emotional harm.
Non-consensual porn isn’t exclusive to deepfake technology. The viral AI avatar app Lensa, for example, has been generating nude images of women. When Melissa Heikkilä tried Lensa, she was hoping to get results similar to her male colleagues at MIT Technology Review. Unfortunately, instead of warriors and astronauts, she got several nude or overtly sexualized avatars.
Deepfakes and artificial intelligence can also be used to perpetuate racism. For example, deepfake videos could be used to frame innocent individuals of color for crimes they did not commit. Along with damaging the lives of innocent people, this contributes to harmful stereotypes and further discrimination.
Cases of racist deepfakes have also been reported, normalizing hate speech that could incite violence. In February 2023, American high school students made a deepfake of their principal shouting racist slurs, including calling his Black students “monkeys” and threatening a mass shooting.
Even without nefarious intent, such as a purposefully racist deepfake, artificial intelligence can easily perpetuate racism because it is trained on inherently biased data. Facial recognition algorithms, for example, have time and time again been proven to exhibit both racial and gender bias to the detriment of Black women.
This not only makes the daily life of Black people more difficult, such as through issues like opening online bank accounts, but can also have a larger negative impact when AI technologies are employed by law enforcement.
Another ethical concern with deepfakes and AI is their potential use in disinformation and propaganda campaigns. By creating convincing fake videos or audio recordings, malicious individuals could spread false information to manipulate public opinion. This could have serious consequences for democratic processes, as well as for individuals and communities who are targeted by such campaigns.
Deepfakes have already been leveraged to undermine governments and political campaigns. According to the World Economic Forum, in Gabon, the military launched an unsuccessful coup after the release of a deepfake of leader Ali Bongo that suggested he was not healthy enough to be president.
This last ethical implication might seem less relevant, but it could have a lasting impact on society. The exponential rise of artificial intelligence, and deepfakes in particular, could further contribute to a lack of trust in digital media. As deepfakes improve and become more realistic, differentiating real from fake content will also become more difficult. This could lead to a lack of trust in the media and, ultimately, a loss of faith in democratic institutions.
While discussing negative implications is important, so is avoiding succumbing to the notion of AI progress as a catastrophic scenario, known as the "doomsday" narrative. Fortunately, AI has the potential to be used for good.
Deepfake technology can be used to create educational content accessible to all, such as historical reenactments or even language-learning materials. AI can also be used in medicine to improve healthcare treatment by analyzing medical data and predicting disease risk. Through its efficiency, it can literally save lives.
Artificial intelligence can also automate tedious and repetitive tasks, allowing humans the time to focus on meaningful work. AI might thus not necessarily replace the work of humans but leave to them the tasks that require empathy and creativity, leading to true fulfillment.
Furthermore, AI can minimize risks in society and take on tasks that would prove hazardous and even deadly to humans. From defusing a bomb to entering a volcano, AI robots save humans from dangerous jobs every day. Other jobs are less dangerous but still risky to human health, such as waste management, mine exploration, and more.
AI porn is currently legal in the UK, although the government’s future Online Safety Bill is looking to outlaw deepfake porn. Moreover, deepfake technology itself is not illegal. But as we have explored, it can be used for illegal purposes, such as spreading non-consensual porn.
To tackle the issue, some companies have implemented internal policies, such as prohibiting the posting or sharing of deepfakes without consent or requiring content creators to disclose if their content includes deepfakes.
Twitch, Reddit, and Pornhub have all said they banned non-consensual porn, according to The Sun. Still, enforcing these policies can be difficult, and there is ongoing debate over the responsibility of companies to prevent the spread of harmful deepfakes.
Addressing the ethical implications of deepfakes and AI is no easy feat. Possible solutions call for the involvement of multiple stakeholders, including governments and social media companies.
Here are a few possible solutions for addressing issues of privacy and consent regarding deepfakes and AI:
Social media responsibility: Social media platforms have played a significant role in the proliferation of deepfakes and AI-generated content. These companies can take responsibility for the content on their platforms, including deepfakes, by developing effective policies to detect and remove deepfakes and prevent their spread.
As deepfakes and AI technology continue to advance, it's important to remember their ethical implications, particularly in regard to consent and privacy. While these technologies have several positive applications, they also have the potential to cause harm, as we have seen with the increase in non-consensual AI porn. As this technology is likely to only become more sophisticated, it's imperative that all stakeholders work together to address the issues of privacy and consent. Only then will AI be working for us rather than against us.