Deepfake pornography
Deepfake pornography, or simply fake pornography, is a type of synthetic pornography created by applying deepfake technology to existing photographs or video of the people depicted. Its use has sparked controversy because it involves making and sharing realistic videos of non-consenting individuals and is sometimes used for revenge porn. Efforts are being made to combat these ethical concerns through legislation and technology-based solutions.
History
The term "deepfake" was coined in 2017 on a Reddit forum where users shared altered pornographic videos created using machine learning algorithms. It is a combination of the word "deep learning", which refers to the program used to create the videos, and "fake" meaning the videos are not real.Deepfake pornography was originally created on a small individual scale using a combination of machine learning algorithms, computer vision techniques, and AI software. The process began by gathering a large amount of source material of a person's face, and then using a deep learning model to train a Generative Adversarial Network to create a fake video that convincingly swaps the face of the source material onto the body of a pornographic performer. However, the production process has significantly evolved since 2018, with the advent of several public apps that have largely automated the process. Since 2023, several AI apps available on Google Play and the Apple App Store are capable of "nudify-ing" user provided photos to generate non-consensual deepfake pornography.
Deepfake pornography is sometimes confused with fake nude photography, but the two are largely distinct: fake nude photography typically starts from non-sexual images and merely makes it appear that the people in them are nude.
Notable cases
Deepfake technology has been used to create non-consensual pornographic images and videos of famous women. One of the earliest examples occurred in 2017, when a deepfake pornographic video of Gal Gadot was created by a Reddit user and quickly spread online. Since then, there have been numerous instances of similar deepfake content targeting other female celebrities, such as Emma Watson, Natalie Portman, and Scarlett Johansson. Johansson spoke publicly on the issue in December 2018, condemning the practice but declining to pursue legal action because she views the harassment as inevitable.

Rana Ayyub
In 2018, Rana Ayyub, an Indian investigative journalist, was the target of an online hate campaign stemming from her condemnation of the Indian government, specifically her speaking out against the rape of an eight-year-old Kashmiri girl. Ayyub was bombarded with rape and death threats, and a doctored pornographic video of her was circulated online. In a Huffington Post article, Ayyub discussed the long-lasting psychological and social effects the experience has had on her, explaining that she continued to struggle with her mental health and that the images and videos continued to resurface whenever she took on a high-profile case.

Atrioc controversy
In 2023, Twitch streamer Atrioc stirred controversy when he accidentally revealed deepfake pornographic material featuring female Twitch streamers during a livestream. He has since admitted to paying for AI-generated pornography and apologized to the women and to his fans.

Taylor Swift
In January 2024, AI-generated sexually explicit images of American singer Taylor Swift were posted on X, and spread to other platforms such as Facebook, Reddit and Instagram. One tweet with the images was viewed over 45 million times before being removed. A report from 404 Media found that the images appeared to have originated from a Telegram group, whose members used tools such as Microsoft Designer to generate the images, using misspellings and keyword hacks to work around Designer's content filters. After the material was posted, Swift's fans posted concert footage and images to bury the deepfake images, and reported the accounts posting the deepfakes. Searches for Swift's name were temporarily disabled on X, returning an error message instead. Graphika, a disinformation research firm, traced the creation of the images back to a 4chan community.

A source close to Swift told the Daily Mail that she would be considering legal action, saying, "Whether or not legal action will be taken is being decided, but there is one thing that is clear: These fake AI-generated images are abusive, offensive, exploitative, and done without Taylor's consent and/or knowledge."
The controversy drew condemnation from White House Press Secretary Karine Jean-Pierre, Microsoft CEO Satya Nadella, the Rape, Abuse & Incest National Network, and SAG-AFTRA. Several US politicians called for federal legislation against deepfake pornography. Later in the month, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or those who received the material knowing it was made non-consensually.
2024 Telegram deepfake scandal
In August 2024, it emerged in South Korea that many teachers and female students had been victims of deepfake images created by users with AI technology. Journalist Ko Narin of The Hankyoreh uncovered the deepfake images through Telegram chats. On Telegram, group chats were created specifically for image-based sexual abuse of women, including middle and high school students, teachers, and even family members. Women who had posted photos on social media platforms such as KakaoTalk, Instagram, and Facebook were also frequently targeted. Perpetrators used AI bots to generate the fake images, which were then sold or widely shared along with the victims' social media accounts, phone numbers, and KakaoTalk usernames. One Telegram group reportedly drew around 220,000 members, according to a Guardian report.

Investigations revealed numerous chat groups on Telegram where users, mainly teenagers, created and shared explicit deepfake images of classmates and teachers. The scandal came in the wake of a troubling history of digital sex crimes, notably the notorious Nth Room case in 2019. The Korean Teachers Union estimated that more than 200 schools had been affected by these incidents, and activists called for the declaration of a "national emergency" to address the problem. South Korean police reported over 800 deepfake sex crime cases by the end of September 2024, a stark rise from just 156 cases in 2021, with most victims and offenders being teenagers.
On September 21, 6,000 people gathered at Marronnier Park in northeastern Seoul to demand stronger legal action against deepfake crimes targeting women. On September 26, following widespread outrage over the Telegram scandal, South Korean lawmakers passed a bill criminalizing the possession or viewing of sexually explicit deepfake images and videos, imposing penalties that include prison terms and fines. Under the new law, those caught buying, saving, or watching such material could face up to three years in prison or fines of up to 30 million won. At the time the bill was proposed, creating sexually explicit deepfakes for distribution carried a maximum penalty of five years in prison; the new legislation increases this to seven years, regardless of intent to distribute.
By October 2024, "nudify" deepfake bots on Telegram were estimated to have up to four million monthly users.
2025-2026 Grok/X chatbot deepfake scandal
In December 2025, Bloomberg reported that X users found Grok would comply with non-consensual requests to digitally undress individuals, including minors, or depict them performing sexually explicit acts. The majority of these prompts targeted women and girls. An analysis of 20,000 images generated by Grok between December 25, 2025 and January 1, 2026 found that 2% depicted people in bikinis or transparent clothing who appeared to be 18 or younger, including 30 images of "young or very young" women or girls. A separate analysis conducted over 24 hours from January 5 to 6 calculated that users had Grok create 6,700 sexually suggestive or nudified images per hour. xAI responded to requests for comment from media organizations with the automated reply, "Legacy Media Lies." The bot's image generation sparked an international backlash and calls for legal or regulatory action from officials in the European Union, United Kingdom, Poland, France, India, Malaysia, and Brazil.

Ethical considerations
Deepfake child pornography
Deepfake technology has made the creation of child pornography faster and easier than ever before. Deepfakes can be used to produce new child pornography from existing material or to create pornography depicting children who have not been subjected to sexual abuse. Even when no physical abuse has taken place, deepfake child pornography can have real and direct consequences for children, including defamation, grooming, extortion, and bullying.

Differences from generative AI pornography
While both deepfake pornography and generative AI pornography use synthetic media, they differ in approach and ethical implications. Generative AI pornography is created entirely through algorithms, producing hyper-realistic content that is not linked to real individuals. In contrast, deepfake pornography alters existing footage of real individuals, often without consent, by superimposing faces or modifying scenes. Hany Farid, a digital image analysis expert, has emphasized these distinctions.
Most deepfake pornography is made using the faces of people who did not consent to their image being used in such a sexual way. In 2023, Sensity, an identity verification company, found that "96% of deepfakes are sexually explicit and feature women who didn't consent to the creation of the content."