The Meteoric Rise of AI Art
When the feed becomes a factory and originality becomes the real flex.
By Luckless Outfitters
The scroll feels different now.
You can’t always name it at first. You just sense it. The photos are too clean. The lighting is too perfect. The faces look like they came from the same genetic template. The videos hit the same “cinematic” beat again and again, like a song stuck on loop.
Then your brain catches up.
It’s not one creator copying another. It’s an entire system copying everything.
In December 2025, Merriam‑Webster picked a blunt word to describe what’s flooding our screens: “slop,” defined as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” (Merriam-Webster)
That definition lands because so many feeds now feel like a conveyor belt. Not because AI can’t create beauty. It can. But because social media rewards speed, volume, and sameness, and AI is built for all three.
The result is a new kind of online pollution: endless “good enough” visuals that are easy to post, easy to copy, and easy to forget.
This is the meteoric rise of AI art. And it’s changing what it means to be a creator.
The machines learned our aesthetic, and they learned it fast.
A few years ago, AI images were a novelty. Now they are infrastructure.
In March 2024, Adobe said its Firefly system had generated over 6.5 billion images in about a year. (Adobe Blog) That number isn’t just a flex. It’s a signal that image generation moved from “experiment” to “habit.”
Video is moving even faster.
On January 15, 2026, Reuters reported that AI video startup Higgsfield hit a valuation above $1.3 billion, with social media marketers making up about 85% of its usage. (Reuters) The same report said the company launched its browser-based product in March 2025 and claimed a $200 million annualized revenue run rate. (Reuters)
At the same time, major platforms are packaging AI video for everyday business. In January 2026, The Verge reported that Google’s Flow can generate eight-second clips from prompts or images, stitch them into longer scenes, and now supports tools like lighting changes and “camera” adjustments, plus vertical video built for Shorts and TikTok-style feeds. (The Verge)
This isn’t a trend living in niche Discord servers anymore. It’s baked into the tools people use for work.
And when creation becomes easier than curation, you get more content than any human can process.
The new economics: post more, feel less.
Social media has always tempted people to post more. AI turns that temptation into an assembly line.
Look at the scale.
In November 2025, The Guardian reported TikTok said it had 1.3 billion videos labeled as AI-generated. (The Guardian) In the same breath, TikTok said more than 100 million pieces of content are uploaded to the platform every day. (The Guardian) That means AI-labeled posts may still be a slice of the whole, but the whole is so massive it can bury anything.
On YouTube, the numbers look even uglier for quality.
In December 2025, The Guardian reported Kapwing research finding that more than 20% of videos shown to new YouTube users were “AI slop.” (The Guardian) The same report said Kapwing surveyed 15,000 top channels and identified 278 channels made entirely of that content, totaling 63 billion views and 221 million subscribers, with estimated yearly revenue around $117 million. (The Guardian)
That’s not “a few bad posts.” That’s an industry.
And it creates a weird trap for everyone else.
When low-effort AI content performs, it teaches creators a brutal lesson: quality is optional; quantity is strategy. The result is something people complain about in private but keep participating in publicly: a feed full of half-finished ideas posted at full volume.
Why so much AI art looks impressive but feels empty.
This is where the argument gets personal. Because many of the people posting AI visuals aren’t trying to ruin the internet. They’re trying to keep up.
But the sameness is real.
Coursera’s 2025 explainer on whether AI will replace graphic designers points out a core limitation: AI output quality depends heavily on the data it was trained on, and AI can’t replace human critical thinking, complex problem solving, or nuanced audience analysis. (Coursera) In other words, the model can deliver polish, but it can’t deliver meaning on its own.
The Verge reported that designers in China are already living inside that gap. One designer said AI forces everyone to rethink what designers are for: “Is it just about producing a design? Or is it about consultation, creativity, strategy, direction, and aesthetic?” (The Verge) Another compared image generators to a “toy,” because you might get one good result after “dozens or even hundreds” of bad ones. (The Verge)
And the most important line in that Verge reporting is not about technology. It’s about incentives. Designers said the hype has made clients expect faster work for less money, creating an “averaging” effect that lowers the ceiling of what gets made. (The Verge)
That “averaging” feeling is what many people mean when they say AI art lacks originality or depth of detail. It’s smooth, but not specific. It’s stylish, but not studied. It knows how to mimic a look, but it doesn’t know why the look mattered.
UNCTAD made the same point from a different angle in March 2024. The author described generative AI as extracting patterns from large volumes of popular content to learn what “good” art looks like and argued that to truly track evolving tastes, an AI would need to understand human emotions. (UN Trade and Development (UNCTAD)) The piece warns that if AI models train on their own outputs, errors can compound, and if human creative industries are destroyed, we risk a future where styles become “fixed and static.” (UN Trade and Development (UNCTAD))
That’s the nightmare scenario: a culture that repeats itself because the tools were trained on yesterday’s hits.
Will AI replace artists and designers? The honest answer is messy.
The internet loves a clean yes or no fight. Real life isn’t like that.
The best “Will AI replace artists and designers?” articles tend to agree on one key point: AI replaces tasks, not taste.
Coursera argues that AI is more likely to reshape design jobs than erase them, because designers who refine outputs and apply critical judgment will still be valued. (Coursera) The Verge’s reporting adds a darker truth: even if AI can’t replace strategy, it can still change what clients are willing to pay for, and that can shrink creative work into cheap production. (The Verge)
UNCTAD pushes it even further: protecting human creators isn’t just about jobs; it’s about preventing cultural stagnation. (UN Trade and Development (UNCTAD))
So, will AI replace artists and designers?
It will replace:
- quick drafts that used to take hours
- generic “stock” imagery
- filler content made only to keep an account active
But it will not replace what makes humans worth following:
- lived experience
- weird personal taste
- real observation
- creative risk
- story and context
The real threat is not replacement. It’s devaluation. When everyone can generate a “pretty” image in seconds, “pretty” stops being a competitive advantage.
The backlash is already here: people want humans again.
The audience isn’t clueless. People can feel when content is empty.
In January 2026, Digiday reported a sharp drop in consumer enthusiasm for generative AI creator content. Influencer agency Billion Dollar Boy found only 26% of consumers prefer generative AI creator content over traditional creator content, down from 60% in 2023. (Digiday) Digiday’s framing is simple: authenticity and “messiness” are becoming the difference-makers, because over-polished content can look like AI even when it isn’t. (Digiday)
That matters for every person trying to grow online.
Because it suggests a new rule of the feed:
When everyone can fake perfection, imperfection becomes proof of life.
The hidden cost: AI doesn’t just flood feeds. It fractures trust.
There is another problem here, and it’s bigger than aesthetics.
AI makes it harder to trust what we see.
A Stanford Journalism Fellowship article in December 2024 put it plainly: even if specific AI fakes don’t change minds, the mere existence of AI-generated images may have “devalued all visuals” and made people more skeptical of anything they see. (John S. Knight Journalism Fellowships)
In January 2026, that trust crisis got a grim example. Reuters reported Germany’s government and Holocaust memorial institutions urged platforms to stop the spread of AI-generated fake Holocaust imagery, warning it distorts history through “trivialising and kitschification.” (Reuters) The letter also warned these images can fuel mistrust of authentic historical documents. (Reuters)
You don’t need to be posting political misinformation to feel the impact.
If audiences start assuming everything is fake, real creators lose, too. Real journalists lose. Real communities lose. The whole internet becomes a little colder.
The platform response: labels, watermarks, and “nutrition facts” for media.
Platforms know they have a problem. And they’re moving toward labeling and provenance.
TikTok says it requires creators to label realistic AI-generated content, and it can also automatically label content it identifies as generated or significantly edited with AI, including uploads that have Content Credentials attached. (TikTok Support) In late 2025, TikTok also said it uses C2PA Content Credentials as part of its strategy and that this helped label over 1.3 billion videos. (TikTok Newsroom)
TikTok has also tested giving users control over how much AI content they see, through “Manage topics,” including a slider for AI generated content. (The Guardian)
Meta has taken a similar direction. In February 2024, Meta said it would label images posted to Facebook, Instagram, and Threads when it can detect industry-standard indicators they are AI generated, and that it labels photorealistic images created with Meta AI as “Imagined with AI.” (Facebook) C2PA’s own site later said Meta leveraged Content Credentials to inform labeling across those platforms. (C2PA)
Behind many of these efforts is C2PA, an open technical standard meant to help establish the origin and edit history of digital content through Content Credentials. (C2PA) The Content Authenticity Initiative describes Content Credentials as verifiable metadata, basically a “nutrition label” for digital content that can show who made it, when, and which tools were used. (Content Authenticity Initiative)
Even the tools themselves are leaning into this.
Adobe says Content Credentials are automatically attached to Firefly outputs. (Adobe Blog)
This is the internet trying to patch its own eyes back together.
It won’t be perfect. Labels can be removed. Metadata can be stripped. People can lie. But the direction is clear: the next era of content is not just about what looks good. It’s about what can be trusted.
What creators should do now: use AI like a sketchbook, not a vending machine.
Here is the part most “Will AI replace artists?” debates miss.
The urgent question for everyday creators is not philosophical. It’s practical:
How do you use AI without posting the same dull, glossy nothing as everyone else?
A simple answer: stop treating AI like a final product.
Treat it like a first draft.
If you post for a living or you’re trying to, this is the new baseline:
- Start with a human point. One real opinion. One real story. One real frustration. The “why” must come from you.
- Feed the model real material. Use your own photos, your own notes, your own references. Generic inputs create generic outputs.
- Force specificity. Add constraints. Location. Time period. A real brand texture. A real object. A real limitation. Constraints create originality.
- Do a detail pass like a pro. Fix the hands. Fix the text. Fix the weird artifacts. The “half-assed” look is usually in the last 10%.
- Show proof of life. Behind-the-scenes clips. Process screenshots. A messy desk. A voice note. Something that says: a real person was here.
- Label when appropriate. If it’s meant to look real, disclose it. Platforms are moving toward transparency anyway. (TikTok Support)
- Care about saves and shares more than raw views. Slop can win impressions. Quality wins loyalty.
This is not anti-AI advice.
It’s anti-emptiness advice.
Because the machine will always beat you at volume. It will never beat you at lived experience.
The next feed is a split-screen: factories vs. humans.
AI-generated photos and video are not going away. The money is too big. The tools are too easy. The distribution systems love endless inventory.
But the audience is changing.
Consumers are signaling fatigue, and the numbers back it up. (Digiday) Platforms are labeling and testing controls because even they can see the feed getting clogged. (The Guardian) And trust is becoming the scarce resource that every creator is fighting for. (John S. Knight Journalism Fellowships)
So the future won’t belong to the people who can generate the most images.
It will belong to the people who can still make someone stop scrolling.
Not because the content is perfect.
But because it’s real.