AI slop
AI slop is digital content made with generative artificial intelligence that is lacking in effort, quality, or meaning, produced in high volume as clickbait to gain advantage in the attention economy. It is a form of synthetic media usually linked to monetization in the creator economy of social media and online advertising. Coined in the 2020s, the term carries a pejorative connotation similar to spam. "Slop" was selected as the 2025 Word of the Year by both Merriam-Webster and the American Dialect Society.
AI slop has been variously defined as "digital clutter", "filler content prioritizing speed and quantity over substance and quality", and "shoddy or unwanted AI content in social media, art, books, and search results". Jonathan Gilmore, a philosophy professor at the City University of New York, describes the material as having an "incredibly banal, realistic style" that is easy for the viewer to process.
Origin of the term
As early large language models and image diffusion models accelerated the creation of high-volume but low-quality text and images, journalists and social media users began debating an appropriate term for the influx of material. Proposed terms included "AI garbage", "AI pollution", and "AI-generated dross". Early uses of "slop" as a descriptor for low-grade AI material apparently came in reaction to the release of AI image generators in 2022. Its early use has been noted among 4chan, Hacker News, and YouTube commentators as a form of in-group slang. The British computer programmer Simon Willison is credited with being an early champion of the term "slop" in the mainstream, having used it on his personal blog in May 2024. However, he has said it was in use long before he began pushing for the term.
The term gained popularity in the second quarter of 2024, in part because of Google's use of its Gemini AI model to generate responses to search queries, and the large quantities of slop on the internet were widely criticized in media headlines during the fourth quarter of 2024.
On social media
AI image and video slop has proliferated on social media in part because it can generate revenue for its creators on Facebook and TikTok, with Facebook most notably affected. This incentivizes individuals in developing countries to create images that appeal to audiences in the United States, which attract higher advertising rates. The journalist Jason Koebler speculated that the bizarre nature of some of the content may be due to creators using Hindi, Urdu, and Vietnamese prompts, or using erratic speech-to-text methods to translate their intentions into English.
Speaking to New York magazine, a Kenyan creator of slop images described giving ChatGPT prompts such as "WRITE ME 10 PROMPT picture OF JESUS WHICH WILLING BRING HIGH ENGAGEMENT ON FACEBOOK", and then feeding those created prompts into a text-to-image AI model such as Midjourney.
AI-generated images of plants and plant-care misinformation have proliferated on social media. Online retailers have used AI-generated images of flowers to sell seeds of plants that do not actually exist. Many online houseplant communities have banned AI-generated content but struggle to moderate the large volumes posted by bots.
Facebook spammers have been reported as AI-generating images of Holocaust victims with fake stories; in reality there are only a handful of historical photographs taken at Auschwitz. The posters were described as "slop accounts", and the Auschwitz Memorial museum called the images a "dangerous distortion". History-focused Facebook groups also have been inundated with AI-generated "historical" photos.
Slopper, a pejorative slang term derived from "AI slop", was coined in 2025 to describe someone who is overly reliant on generative AI tools like ChatGPT.
Online meme culture has taken up the trend of using AI-generated content to fool and entertain viewers. Users on social media have built massive followings with AI-generated images that deceive viewers. The strategy appeals to those seeking easy online income: of the hundreds of posts made weekly, only one needs to gain traction to reward the quickly produced content. Some creators are frustrated that their hard work is being stolen by AI-generated content. The artist Michael Jones, who creates physical wood carvings of animals with a chainsaw, found that the style of his sculptures was used as source material for AI-generated images, which began to surface showing other people posed beside the sculptures and claiming to have made them. Jones stated that AI slop is "a huge issue for carvers all over the world who are sadly missing out on the rightful credit exposure to their work..."
In politics
United States
In August 2024, The Atlantic noted that AI slop was becoming associated with the political right in the United States, who were using it for shitposting and engagement farming on social media, with the technology offering "cheap, fast, on-demand fodder for content". AI slop is frequently used in political campaigns in an attempt to gain attention through content farming. In 2025, during the first five months of Donald Trump's second term as US president, Trump posted several AI-generated images of himself on official government social media accounts, such as images of him as the pope or as a muscular man brandishing a lightsaber. In August 2024, Trump posted a series of AI-generated images on his alt-tech social media platform, Truth Social, portraying fans of the pop singer Taylor Swift in "Swifties for Trump" T-shirts, as well as an AI-generated image of Swift appearing to endorse Trump's 2024 presidential campaign. The images originated from the conservative Twitter account @amuse, which posted numerous AI slop images in the run-up to the 2024 United States elections that were shared by other high-profile figures within the Republican Party, such as Elon Musk, who has publicly endorsed generative AI. In 2025, Wired described Trump as "The first AI slop President", noting his frequent use of AI-generated images and videos in public messaging. The magazine highlighted examples such as AI depictions of Trump as a fighter pilot and as a religious figure, arguing that his reliance on low-quality generative content marked a new phase in political communication.
In the aftermath of Hurricane Helene in 2024, Republican influencers such as Laura Loomer circulated an AI-generated image on social media of a young girl holding a puppy in a flood, using it as evidence of President Joe Biden's failure to respond to the disaster. The Republican activist Amy Kremer shared the image while acknowledging it was not genuine.
The initial version of the Make Our Children Healthy Again Assessment of children's health issues, released by a commission of cabinet members and officials of the Trump administration, and led by US Department of Health and Human Services Secretary Robert F. Kennedy Jr., reportedly cited nonexistent and garbled references generated using artificial intelligence.
In response to the No Kings protests in October 2025, Trump posted a video depicting himself flying a fighter jet and releasing feces on crowds of demonstrators, including Harry Sisson, a Democratic influencer.
In the midst of disruptions to food stamp distribution during the 2025 US government shutdown, anonymous social media users began using OpenAI's Sora AI model to post slop videos of "welfare queens" complaining, stealing, and rioting in supermarkets; many comments to the videos appeared unaware that they were AI-generated, or acknowledged that they were AI-generated but nonetheless useful in pushing a narrative of widespread welfare fraud.
Russia and China
A study by the analytics company Graphika found that the governments of Russia and China have used AI-generated slop as propaganda, including "spamouflage" campaigns in which AI-generated content featuring fake influencers was linked to China. These videos often focused on divisive topics and appeared designed to cause disruption, with motives beyond their surface content.
Gaza War
In February 2025, Donald Trump shared an AI-generated video on Truth Social and Instagram depicting a hypothetical Gaza Strip after a Trump takeover. The video's creator claimed it was made as political satire. During the Gaza War, AI-generated media was used to exaggerate support for both sides and to evoke sympathy using fake images of suffering civilians. Because of content restrictions in generative AI, these images and videos rarely depict people wounded in battle, instead focusing on damage to buildings. Fake images of attacks were also used to avoid accidentally providing intelligence to enemies.
In advertising
In November 2024, Coca-Cola used artificial intelligence to create three commercials as part of its annual holiday campaign. The videos were immediately met with backlash from both casual viewers and artists; the animator Alex Hirsch, creator of Gravity Falls, criticized the company's decision not to employ human artists to create the commercials. In response to the negative feedback, the company defended its use of generative artificial intelligence, stating that "Coca-Cola will always remain dedicated to creating the highest level of work at the intersection of human creativity and technology". Coca-Cola continued to use AI-generated commercials for its 2025 holiday campaign.
During the 2025 holiday season, McDonald's Netherlands released an AI-generated Christmas advertisement titled It's the Most Terrible Time of the Year, which was met with significant backlash. The advert was seen as cynical for portraying Christmas as "the most terrible time of the year". In response, the company turned off comments on YouTube and later removed the initial upload from public view, though reuploads of the original remained public on the site.
In March 2025, Paramount Pictures was criticized for using AI scripting and narration in an Instagram video promoting the film Novocaine. The ad used a robotic AI voice in a style similar to low-quality AI spam videos produced by content farms. A24 received similar backlash for releasing a series of AI-generated posters for the 2024 film Civil War. One poster appears to depict a group of soldiers in a tank-like raft preparing to fire on a large swan, an image that does not correspond to the events of the film.
That same month, Activision posted AI-generated advertisements and posters on platforms such as Facebook and Instagram for fake video games such as "Guitar Hero Mobile", "Crash Bandicoot: Brawl", and "Call of Duty: Zombie Defender", which many labelled as AI slop. Activision later stated that the posts were intended to gauge interest in possible titles. The Italian brainrot AI trend was also widely adopted by advertisers in an attempt to appeal to younger audiences.