The Rise of AI Slop
How low-quality AI content went from ~10% of the web to 52% in three years — and from niche internet slang to Merriam-Webster's Word of the Year.
10%→52%
AI Web Content (2022→2025)
63B
AI YouTube Views/Year
3
Word of the Year Awards
2,089+
AI News Sites Found
In late 2022, roughly 10% of newly published online articles showed signs of AI generation. By October 2025, that number had hit 52% — a Graphite/Originality.ai study confirmed the split had plateaued at roughly 50/50 between human and AI content.
This isn't a story about AI capabilities. It's a story about incentives. When it costs nearly nothing to produce content that looks professional and platforms reward volume over substance, the result is inevitable: AI slop — content that sounds fluent but says nothing. Here's how we got here.
1. The Numbers: How Fast AI Slop Grew
The data tells a story of exponential growth that plateaued once AI content reached roughly half of all new web publishing:
~10% of newly published articles showed AI-generation signals in late 2022, shortly after ChatGPT's November release
Source: Graphite / Originality.ai
52% of newly published articles were AI-generated by October 2025 — the split has plateaued at roughly 50/50
Source: Graphite / Futurism, Oct 2025
74.2% of 900,000 newly created web pages contained AI-generated content — only 25.8% were purely human-written
Source: Ahrefs, April 2025
2,089+ undisclosed AI-generated news websites identified across 16 languages, many churning out hundreds of articles per day
Source: NewsGuard
63 billion estimated annual views for AI-generated YouTube content, across 278 dedicated channels identified in one 2025 study
Source: ReelNReel / Kapwing
of YouTube Shorts sampled contained AI-generated content, with an additional 33% qualifying as 'brainrot'
Source: Kapwing, Nov 2025
The Filter Effect
Here's the nuance the raw numbers miss: while 52% of new web content is AI-generated, only 14% of Google Search results contain AI content. Google's algorithms are effectively filtering most AI slop from search results — even as it floods the open web. The problem isn't in search. It's in social media feeds, YouTube recommendations, Amazon listings, and anywhere else algorithmic curation replaces editorial judgment.
Source: Graphite Study
2. How We Got Here: A Timeline
AI slop didn't arrive overnight. It built momentum over three years, driven by a cascade of technological breakthroughs, economic incentives, and cultural reactions:
ChatGPT launches
OpenAI releases ChatGPT. Within 5 days it reaches 1 million users, making AI text generation accessible to anyone with a browser. The content flood begins almost immediately.
Clarkesworld shuts down submissions
Sci-fi magazine Clarkesworld receives 500+ AI-generated story submissions in under 20 days, driven by 'make money with ChatGPT' YouTube videos. Editor Neil Clarke temporarily closes submissions entirely — a first for a major literary magazine.
Source: neil-clarke.com
AI content farms industrialize
Programmatic AI content production hits scale. NewsGuard begins tracking AI-generated news sites, eventually documenting over 2,089 across 16 languages. Amazon sees an estimated 10,000-40,000 AI-generated e-books published monthly.
Shrimp Jesus goes viral
Surreal AI-generated images of Jesus fused with shrimp, lobsters, and sea creatures flood Facebook. Stanford Internet Observatory studies 120 Facebook Pages posting AI images that generate hundreds of millions of interactions — mostly operated from India and Vietnam.
Source: Stanford Internet Observatory
Google fights back
Google's March 2024 core update specifically targets AI slop, reducing low-quality search results by 45%. It's the strongest anti-slop signal from any major platform.
Source: Google Search Blog
Simon Willison names it
Programmer Simon Willison publishes "Slop," arguing the term should become standard for unwanted AI content — the way "spam" became the word for unwanted email. He writes: "Sharing unreviewed AI-generated content with other people is rude." The term already existed on 4chan and Hacker News, but Willison's post gives it mainstream legitimacy.
Source: simonwillison.net
Hurricane Helene deepfakes
After Hurricane Helene devastates the Southeast US, AI-generated images of flood victims and rescue scenes spread on social media. U.S. Senator Mike Lee shares one before deleting it. Virginia Tech researchers document how fake images undermine real disaster relief.
Source: PetaPixel
52% milestone
Graphite/Originality.ai research confirms that AI-generated content has crossed the 50% threshold for new web articles. New articles on the web are now more often AI-written than human-written.
Source: Futurism
Macquarie Dictionary: Word of the Year
Australia's Macquarie Dictionary names "AI slop" its Word of the Year for 2025 — winning both Committee's Choice and People's Choice (only the fourth time both aligned in the award's history).
Source: Macquarie Dictionary
Merriam-Webster: "Slop" is Word of the Year
Merriam-Webster names "slop" its Word of the Year 2025, defining it as "digital content of low quality that is produced usually in quantity by means of artificial intelligence." The word's definition is now permanent.
Source: Merriam-Webster
Triple crown + YouTube responds
The American Dialect Society votes "slop" their Word of the Year — making it a rare triple-crown. The same month, YouTube CEO Neal Mohan declares "managing AI slop" a top priority for 2026, with over 1 million channels already using AI tools daily.
Source: CNBC
3. The Economics of Slop: Why It Exists
AI slop isn't an accident. It exists because the economics are irresistible: nearly zero production cost meets advertising revenue that doesn't care about quality.
The Producer Side
- FUNTASTIC: A single AI YouTube channel earned ~$9,000/month from 500+ million views with virtually zero human creative input (NPR)
- AI book farms: Producing 10,000-40,000 Kindle e-books monthly. Cost per book: near zero. Royalty per sale: $2-9. At scale, the math works (Futurism)
- Facebook slop farms: Stanford found 120+ AI image Pages generating hundreds of millions of interactions, driving traffic to ad-laden websites for revenue
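The e-book math above can be sketched in a few lines. The volume and royalty figures are the ranges quoted in this section; the sales-per-book and generation-cost numbers are hypothetical assumptions for illustration only.

```python
# Back-of-envelope sketch of the AI book-farm economics described above.
# books_per_month and royalty_per_sale use the low end of the ranges in
# the text; sales_per_book and cost_per_book are assumed, not sourced.

books_per_month = 10_000   # low end of the 10,000-40,000 estimate
royalty_per_sale = 2.0     # low end of the $2-9 royalty range
sales_per_book = 3         # hypothetical: even a handful of sales each
cost_per_book = 0.10       # assumed near-zero generation cost

revenue = books_per_month * sales_per_book * royalty_per_sale
cost = books_per_month * cost_per_book
profit = revenue - cost

print(f"revenue ${revenue:,.0f}, cost ${cost:,.0f}, profit ${profit:,.0f}")
```

Even taking the most pessimistic end of every quoted range, volume makes the operation profitable, which is the core of the incentive problem.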
The Platform Side
- More content = more ad inventory. Platforms profit from engagement, regardless of content quality
- Algorithmic amplification: A Shrimp Jesus post can reach 40 million people. The algorithm rewards engagement, and AI slop is engineered for engagement
- Moderation can't keep up: AI content is produced faster than any human moderation team can review it. The flood outpaces the filters
The Core Incentive Problem
Here's the uncomfortable truth: every entity in the content ecosystem is individually incentivized to produce or allow slop. Producers make money with near-zero effort. Platforms get more content to monetize. AI companies get more users. The only people who lose are readers — and readers don't pay the bills. Until that incentive structure changes, AI slop will keep growing.
4. From Internet Slang to Word of the Year
The word "slop" has existed for centuries — meaning soft mud, then food waste, then pig feed. But its meaning shifted dramatically in the 2020s:
Nov 2025
Macquarie Dictionary
"AI slop" — Word of the Year. Won both Committee's Choice and People's Choice.
Dec 2025
Merriam-Webster
"Slop" — Word of the Year 2025. Defined as low-quality digital content produced by AI.
Jan 2026
American Dialect Society
300+ linguists voted "slop" Word of the Year. Triple-crown: three authorities, one word.
What makes this culturally significant: by 2025, "slop" could stand on its own — no "AI" prefix needed. The technology context was implicitly understood. The word had transcended its technical origins to become a cultural shorthand for a shared frustration.
The term also spawned an entire vocabulary: slopocalypse, slopwashing, slopfluencer, sloptimization, workslop, slopaganda, slopsquatting. A linguistic analysis by Making Science Public found 20+ derived terms — a society doesn't invent that many words for something unless it's a real, daily experience.
5. How Platforms Are Responding
Google Search
Google's March 2024 core update was explicitly designed to combat AI slop. Result: 45% reduction in low-quality search results. Their approach: reward sites with genuine expertise and original content, penalize "scaled content abuse" — content produced at scale primarily for search ranking, not for human readers.
The filter works: only 14% of Google Search results are AI-written, despite 52% of new web content being AI-generated. But the slop displaced from search just floods other channels.
YouTube
YouTube CEO Neal Mohan declared "managing AI slop" a top priority for 2026. YouTube now requires AI content disclosure, labels AI-produced videos, and is expanding its "likeness detection" system to flag when a creator's face is used without permission in deepfakes. Over 1 million channels were using YouTube's AI tools daily by December 2025.
Meta (Facebook / Instagram)
Meta now labels AI-generated content with "Made with AI" tags and restricts monetization of repetitive, unoriginal AI content. But enforcement is inconsistent — the Shrimp Jesus pages accumulated hundreds of millions of interactions before any action was taken.
Amazon
Amazon introduced AI disclosure requirements for Kindle books, limiting authors to publishing three books per day. But the damage is ongoing: an estimated 10,000-40,000 AI-generated e-books are still published monthly, many without disclosure.
The Common Pattern
Every platform follows the same playbook: ignore the problem → get embarrassed by media coverage → announce policy changes → enforce inconsistently. The fundamental tension: platforms profit from content volume, so aggressive anti-slop measures hurt their bottom line.
6. Beyond Annoyance: Real-World Harm
AI slop isn't just bad content. It causes measurable harm across multiple domains:
Physical Danger
AI-generated mushroom foraging guides on Amazon contain potentially lethal misidentifications. AI medical advice proliferates without peer review. When content is produced without expertise, people can get hurt.
Democratic Erosion
AI-generated political imagery — Trump in papal robes, fabricated Hurricane Helene photos — spreads faster than fact-checkers can debunk. When visual evidence becomes unreliable, informed democratic participation suffers.
Economic Displacement
Real musicians lose royalties to AI playlist stuffing. Human writers compete with AI content farms. Clarkesworld had to develop AI detection tools just to protect their submission process.
Institutional Trust
The Chicago Sun-Times published a summer reading list where 10 of 15 recommended books didn't exist — AI-generated hallucinations published by a major newspaper. Each incident chips away at institutional credibility.
Researcher Laura Glitson describes the "slop-doom feedback loop" — where constant exposure to AI slop produces feelings of "surreality, paranoia, suspicion, and anxiety." The uncanny quality of encountering content that might be fake creates what psychologists call the "black box effect" — a persistent unease from not knowing what's real.
7. What Comes Next
The AI slop problem isn't getting smaller. Here's what the data and trends suggest:
1. The 50/50 Split Is the New Normal
AI content's share has plateaued at roughly 52% of new web content. But that's share, not volume: total content production is still rising, so the absolute amount of AI slop keeps growing. There's just more of everything.
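The share-versus-volume distinction is easy to make concrete. Only the 52% share comes from the Graphite study; the total-volume figures and the 30% growth rate below are hypothetical.

```python
# A flat 52% share still means more slop in absolute terms whenever
# total publishing grows. The 52% is from the Graphite study cited in
# the text; the yearly totals are assumed for illustration.

ai_share = 0.52
totals = {"year 1": 1_000_000, "year 2": 1_300_000}  # assumed 30% growth

ai_volume = {year: round(total * ai_share) for year, total in totals.items()}
print(ai_volume)  # share is flat, but absolute AI output rises
```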
2. Slop Gets Harder to Detect
As models improve, the tell-tale signs of AI generation ("In today's fast-paced world...") will become less obvious. The slop won't disappear — it'll just get better at mimicking substance. The need for quality signals (specificity, sources, lived experience) becomes more important, not less.
3. Regulation Is Coming (Slowly)
The EU AI Act includes provisions for AI content transparency. Multiple U.S. states are considering AI disclosure requirements. But regulation consistently lags behind the technology — by the time rules arrive, the landscape has usually shifted.
4. Human Content Becomes Premium
The flip side: as AI content floods every channel, authentically human content becomes more valuable. We're already seeing this in newsletters, podcasts, and indie publishing — formats where the human author is the product.
Frequently Asked Questions
How much of the internet is AI-generated content?
As of late 2025, a Graphite/Originality.ai study found that 52% of newly published online articles are AI-generated — up from roughly 10% in late 2022. Ahrefs independently found that 74.2% of 900,000 newly created web pages contained AI-generated content.
When did 'slop' become a word for AI content?
The term 'slop' for AI content emerged on 4chan, Hacker News, and YouTube around 2022. Programmer Simon Willison popularized it in a May 2024 blog post arguing it should become the standard word for unwanted AI content, like 'spam' did for unwanted email. By December 2025, Merriam-Webster named it Word of the Year.
Why is AI slop a problem?
AI slop degrades information quality (52% of the web is now AI-generated), creates real-world harm (fake foraging guides, fabricated news), erodes trust in online content, displaces human creators economically, and wastes readers' time with content that sounds authoritative but says nothing of substance.
What are platforms doing about AI slop?
Google's March 2024 core update reduced low-quality search results by 45%. YouTube CEO Neal Mohan declared 'managing AI slop' a top priority for 2026. Meta requires AI-generated content labeling and restricts monetization of repetitive AI content. Amazon has introduced AI disclosure requirements for Kindle books.