Theodore clearly remembers the "slop" of AI-generated content that was the last straw.
The picture shows two thin, extremely poor children from South Asia.
For some reason, despite their boyish features, they have thick beards.
One child has no hands and only one foot.
The other one is holding a piece of cardboard that says it's his birthday and is asking for likes for the picture.
For a completely inexplicable reason, they are sitting in the middle of a busy street, in the pouring rain, with a birthday cake in front of them.
There are many obvious signs in the image that it was created using artificial intelligence (AI).
However, it went viral on Facebook and garnered nearly a million likes and heart-shaped emoticons.
Then something snapped in Theodore.
"I was stunned."
"Absurd images created using AI were all over Facebook and received huge amounts of attention, without any questioning of whether they were true."
"That was crazy to me," says a 20-year-old student from Paris.
That's why Theodore opened an account on X, formerly Twitter, called Insane AI Slop, where he began calling out and mocking content that misleads people.
Others noticed, and his inbox was soon flooded with messages from people sending him examples of popular, so-called "slop" content created with AI.
"Slop" is very poor quality digital content, mass-produced using artificial intelligence, that is usually absurd, bizarre, and fake, and goes viral very quickly.
The usual themes stand out: religion, the military, and poor children doing touching things.
"Kids in third world countries who do something impressive always do well."
"Let's say, a poor child in Africa who makes an incredible sculpture out of trash."
"I think people find it sincere and warm, so creators think, 'Great, let's invent more things like this,'" says Theodore.
His account soon gained more than 133,000 followers.
The flood of AI-generated "slop" content, which he defines as fake, unconvincing videos and photos, created quickly and en masse, now seems unstoppable as tech companies fully embrace artificial intelligence.
Some of these companies claim to have begun to crack down on certain types of "slop" content created by AI, although social media still appears to be full of such images and videos.
In just a few years, the experience of using social networks has changed a lot.
How did this happen and what impact will it have on society?
And, perhaps most importantly, how much do billions of social media users actually care about it?
The "third phase" of social networks
In October, during an enthusiastic presentation of the company's results, Meta CEO Mark Zuckerberg happily declared that its social networks had entered a third phase, one focused on artificial intelligence.
"The first phase was when all the content was created by friends, family, and accounts you followed directly."
"The second one came when we added creator content."
"Now, as artificial intelligence makes it easier to create and curate content, we will add another huge body of content," he told shareholders.
Meta, which operates Facebook, Instagram, and Threads, not only allows the publication of AI-generated content, but has also provided products that enable the creation of more such content.
Now, users around the world are offered image and video generators, as well as increasingly powerful filters.
When asked by the BBC for comment, Meta pointed to Zuckerberg's remarks in January.
The billionaire then said that the company was turning even more strongly to artificial intelligence and did not mention combating "slop" content.
"We will soon see an explosion of new media formats that are more engaging and interactive, and are made possible solely by advances in artificial intelligence," Zuckerberg said.
Neal Mohan, CEO of YouTube, wrote in a blog post about expectations for 2026 that in December alone, more than a million YouTube channels used the AI tools available on the platform to create content.
"Just as the synthesizer, Photoshop, and CGI (computer-generated imagery) revolutionized the realm of sound and visual expression, artificial intelligence will be a boon to creatives who are willing to embrace it," he wrote.
At the same time, he acknowledged growing concern about low-quality content created using artificial intelligence, known as "slop."
He said his team is working to improve the system for finding and removing "repetitive, low-quality content."
However, he dismissed the possibility of the company making judgments about what should and should not be popular.
He noted that content once created for narrow, niche audiences, such as ASMR (autonomous sensory meridian response, a pleasant tingling sensation triggered by specific audio-visual stimuli) or live video game streaming, is now mainstream.
According to research by artificial intelligence company Kapwing, as much as 20 percent of the content displayed on newly created YouTube accounts today consists of "low-quality videos created using artificial intelligence."
Short video formats were particularly popular, and Kapwing found that such content appeared in 104 of the first 500 YouTube clips viewed on the new account the researchers created.
Creator revenue appears to be a strong driver, as individuals and channels can earn money from engagement and views.
Judging by the number of views on some AI-generated channels and videos, is the audience really drawn to such content, or are the algorithms deciding what we watch?
Kapwing says that of the channels featuring AI-generated "slop" content, the Indian channel Bandar Apna Dost has the most views, with 2.07 billion, which, according to estimates, earns its creators an annual income of about four million dollars.
However, such content also causes negative reactions.
Underneath many viral videos created with AI, it is common today to see an avalanche of angry comments condemning such content.
Giant monsters and deadly parasites in the stomach
Theodore, a student from Paris, helped ignite this resistance.
Using his newfound influence on X, he complained to YouTube about the flood of bizarre AI-generated cartoons that were garnering huge views.
In his opinion, they were disturbing and harmful, and in some cases it seemed to him that they were intended for children.
The titles of the videos were like "Mama Cat Saves Kitten from Deadly Stomach Parasites," and they featured gory scenes.
In another short clip, a woman in a nightgown eats parasites and turns into a huge, angry monster who is eventually healed by Jesus.
YouTube said the videos were removed for violating its community guidelines.
They stated that they are "focused on connecting our users with high-quality content, regardless of how it was created," and that they are working to "reduce the spread of poor-quality content created using artificial intelligence."
This phenomenon affects even seemingly pleasant platforms dedicated to lifestyle, such as Pinterest, where recipes and interior design ideas are shared.
Users became so frustrated with the flood of AI-generated "slop" that the company introduced a new system that lets them opt out of AI-generated content.
But that system relies on users themselves admitting that their perfect home images were actually created using AI.
Anger in the comments
In my feed (and I'm aware that everyone has a different feed, and therefore different comments), there are constantly negative reactions to "slop" content created with AI.
There seems to be a movement of sorts against this kind of content on TikTok, Threads, Instagram, and X.
Sometimes the number of likes for comments criticizing an AI-generated "slop" is much higher than the popularity of the post itself.
Such was the case with the recent video of a snowboarder allegedly saving a wolf from a bear.
The video itself had 932 likes, while the comment "Raise your hand if you're tired of this AI-generated s**t" received 2,400 likes.
But, of course, all of this feeds the beast.
Every interaction is a good interaction for social networks, as it is crucial to keep users on their platforms as long as possible.
So does it really matter whether the amazing, moving, or shocking video on your feed is real or not?
The "brain rot" effect
Emily Thorson, an associate professor at Syracuse University in the United States (USA), who studies politics, disinformation and misbeliefs, says it all depends on what people do on social media.
"If someone is on a platform that offers short videos purely for entertainment, then the criteria for whether something is worthwhile is very simple – 'is it fun?'" she says.
"But if someone is on the platform to learn about a specific topic or to connect with community members, then they will perceive AI-generated content as much more problematic."
How people perceive AI-generated "slop" is also influenced by the way such content is presented.
If something is presented as a joke, it is usually accepted as such.
But when "slop" content is created to deceive people, it can cause outrage.
A good example is an AI-generated video I saw recently: incredibly realistic, nature-documentary-style footage of a stunning leopard hunt.
The comments show that some viewers were deceived, while others doubted the authenticity of the content.
"What documentary is it from?" one user asked.
"Please, that's the only way to prove it wasn't created by artificial intelligence."
Alessandro Galeazzi from the University of Padua in Italy researches social media behavior and echo chambers.
He says checking whether a video was made with AI requires mental effort, and he fears that people will simply stop checking.
"My impression is that the flood of meaningless, low-quality content created by AI can further shorten people's attention spans," he says.
He distinguishes between content that is intended to deceive and that comical, obviously fake "slop" content created with AI, such as fish with shoes or gorillas lifting weights in the gym.
But even such seemingly harmless and humorous content can have harmful consequences.
Galeazzi talks about the risk of so-called "brain rot", the notion that constant exposure to social media is damaging our intellectual abilities.
"I would argue that 'slop' content created by AI encourages a brain rot effect, forcing people to quickly consume content that they know is not only probably not real, but probably not meaningful or interesting," he says.
Reducing the number of moderators
In addition to "slop" content, some other content created using AI can have much more serious consequences.
Elon Musk's companies, xAI and the X platform, were recently forced to change their rules after the chatbot Grok was used to digitally "undress" women and children on X.
And after the US attack on Venezuela, fake videos of people crying in the streets and thanking the US were spread.
Such content can shape public opinion and create the impression that the American action was more popular than it actually was.
Analysts point out that this is particularly worrying, because for many, social media is their only source of information.
Dr. Manny Ahmed, CEO of OpenOrigins, a company that verifies whether images are genuine, says we need a new way for creators of authentic content to prove that their footage and photos are real.
"We've already reached the point where you can't tell for sure what's real just based on visual inspection," he says.
"Instead of trying to detect what is fake, we need an infrastructure that allows real content to publicly prove its authenticity."
You might think that this should be handled by the social media platforms themselves.
However, many of them, including Meta and X, have reduced the number of moderators and now rely on users to flag content as false or misleading themselves.
Social networks without 'slop' content?
If existing tech giants generally allow "slop" content to be freely published and shared, would the emergence of a new social network that would offer an alternative without "slop" content created by AI pose a threat to the large networks?
This seems unlikely, as discovering AI-generated content is becoming increasingly difficult.
Machines can no longer reliably determine whether a video or image is fake, and it would be even more difficult for them to subjectively assess whether a piece of content falls into the "slop" category.
However, if a new social network emerges and people take specific actions, such as leaving a particular network, it could have certain results.
This reminds me of the rise of the social network BeReal, a French app that gained popularity during the coronavirus pandemic and encourages users to show their true selves by sharing unfiltered selfies in various situations.
BeReal has not yet reached the level of Facebook or Snapchat, and probably never will.
But it did make some platforms flinch and, in some cases, copy the idea.
Perhaps the same could happen if a network emerged that openly opposed "slop" content created with AI.
But Theodore believes the battle is lost and that "slop" content created with AI will survive.
Despite still receiving messages from his roughly 130,000 followers, he posts less and less frequently and has largely come to terms with the new online reality.
"Unlike many of my followers, I am not dogmatically against artificial intelligence," he says.
"I am against the pollution of the internet with 'slop' content created using AI for quick entertainment and viewing."
Source of the first image: BBC; Created with the Adobe Firefly artificial intelligence image generation model.