Social networks: "It stains your brain" - how algorithms show violence to boys

A former TikTok analyst tells BBC Panorama he was disturbed to find that some teenagers were being shown posts containing violence and pornography, and promoting misogynistic attitudes.

Kai says the violent and disturbing material appeared "out of nowhere" on his social media, Photo: BBC

It was 2022 and Kai, who was 16 at the time, was scrolling on his phone.

He says one of the first videos he saw on his social media feed was about a cute dog.

But, right after that, there was a sudden change.

He says that "out of nowhere" he was recommended videos of someone being hit by a car, a monologue by an influencer expressing misogynistic views, and clips from violent fights.

He asked himself: Why me?

At the time, Andrew Kang was working in Dublin as a user safety analyst at TikTok, a position he held for 19 months, from December 2020 to June 2022.

He says he and a colleague decided to study what the app's algorithms recommend to users in the UK, including 16-year-olds.

Not long before that, he worked for rival company Meta, which owns Instagram - another network used by Kai.

When Andrew studied the content on TikTok, he was disturbed to find that some teenagers were being shown posts containing violence and pornography, and promoting misogynistic attitudes, he tells BBC Panorama.

He argues that, in general, teenage girls are recommended very different content based on their interests.

TikTok and other social media companies use artificial intelligence tools to remove the vast majority of harmful content and to flag other content for review by human moderators, regardless of how many views it has.

But artificial intelligence tools cannot recognize everything.

Andrew Kang says that when he worked for TikTok, any videos that were not removed or flagged to human moderators by AI - or reported by other users - were only sent for manual review if they reached a certain view threshold.

He says at one point it was set at 10,000 views or more.

He feared this meant some younger users were being exposed to harmful videos.

Most social media companies allow people aged 13 or over to sign up on their own.

TikTok says 99 percent of the content it removes for violating its policies is taken down by artificial intelligence or human moderators before it reaches 10,000 views.

It also says it conducts proactive investigations for videos with fewer views than that.
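
As a purely illustrative sketch (not TikTok's actual code), the review rule Kang describes might look something like this in Python, using the 10,000-view figure he cites; the field names, structure and numbers here are assumptions for illustration only.

```python
# Illustrative sketch only, based on Kang's description - not TikTok's real system.
# A video that AI has neither removed nor flagged, and that no user has reported,
# is queued for human review only once its view count crosses a threshold.

from dataclasses import dataclass

VIEW_THRESHOLD = 10_000  # the figure Kang says was used at one point

@dataclass
class Video:
    removed_by_ai: bool      # AI classifier removed it outright
    flagged_by_ai: bool      # AI classifier sent it to moderators
    reported_by_users: bool  # another user reported it
    views: int

def needs_human_review(video: Video) -> bool:
    """Return True if the video should enter the human review queue."""
    if video.removed_by_ai:
        return False                      # already taken down, nothing to review
    if video.flagged_by_ai or video.reported_by_users:
        return True                       # flagged or reported: review it now
    return video.views >= VIEW_THRESHOLD  # otherwise, wait for the threshold

# A harmful video the AI missed and nobody reported sits unreviewed at 9,999 views...
print(needs_human_review(Video(False, False, False, 9_999)))   # False
# ...and only reaches moderators once it passes 10,000.
print(needs_human_review(Video(False, False, False, 10_000)))  # True
```

The point of the sketch is Kang's concern: everything hinges on the AI filter and user reports catching a video before it racks up views.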


Andrew Kang says that while he was working for Meta, between 2019 and December 2020, there was a different problem.

He says that while most videos were removed or flagged to moderators by AI tools, the site relied on users to report other harmful videos only after they had already seen them.

Kang adds that he raised concerns while working at both companies, but says he was mostly met with inertia, driven by fears about the amount of work required or the cost.

He says some improvements were subsequently made at both TikTok and Meta, but adds that younger users such as Kai were left at risk in the meantime.

Several former employees of the companies that own the social networks told the BBC that Andrew Kang's concerns were in line with their knowledge and experience.

Algorithms from all the major social networks regularly recommend harmful content to children, even if it's unintentional, British regulator Ofcom tells the BBC.

"Companies turn a blind eye to it and treat children like adults," says Almudena Lara, Ofcom's director of online safety policy development.

'I had to bring my friend back to reality'

TikTok told the BBC it has "industry-leading" safety settings and employs more than 40,000 people to keep users safe.

It said it plans to invest "more than $2 billion in security" this year alone, and that of the content it removes for violating its rules, 98 percent is found proactively.

Meta, which owns Instagram and Facebook, says it has more than 50 different tools, resources and features that "provide teenagers with positive, age-appropriate experiences."

Kai told the BBC he had tried using one of Instagram's tools and a similar one on TikTok to indicate he was not interested in violent and misogynistic content - but claims they continued to recommend it.

He is interested in the UFC (Ultimate Fighting Championship), a mixed martial arts competition.

He also found himself watching videos by controversial influencers that were recommended to him, but says he did not want to be served their more extreme content.

"A picture appears in your head and you can't get rid of it afterwards. Your brain is tainted. And then that's all you think about for the rest of the day," he says.

Girls he knows who are his age are recommended videos about topics such as music and makeup rather than violence, he says.


Meanwhile, Kai, now 18, says he continues to be served violent and misogynistic content, both on Instagram and TikTok.

When we scroll through his Instagram reels, we find among them an image that makes light of domestic violence.

It shows two characters side by side, one of them with bruises, and the caption: "My love dictionary."

Another shows a person being hit by a truck.

Kai says he noticed how videos with millions of likes can be persuasive to young people his age.

For example, he says one of his friends was drawn in by a controversial influencer's content - and began to adopt his misogynistic views.

His friend "took it too far," says Kai.

"He started saying all kinds of ugly things about women. I had the feeling that I had to bring my friend back to reality."

Kai says he commented on posts to show he didn't like them, and when he accidentally liked videos he would un-like them, hoping it would reset the algorithms.

But he says that eventually even more such content began to flood his feeds.


So how do TikTok algorithms really work?

According to Andrew Kang, algorithms are fed by interaction, whether positive or negative.

That could partly explain why Kai's efforts to manipulate the algorithms didn't work.

The first step for users is to specify some likes and interests when they sign up.

Andrew says that some of the content the algorithm initially serves to, say, a 16-year-old is based on the preferences they have given, as well as the preferences of other users of a similar age in a similar location.

According to TikTok, the algorithms are not influenced by the user's gender.

But Andrew says the interests that teenagers show when they sign up often have a gendered effect.

The former TikTok employee says some 16-year-olds may be exposed to violent content "from the get-go" because other teenage users with similar preferences have expressed an interest in this type of content - even if that just means lingering a little longer on a video that grabs their attention.

The interests expressed by many teenage girls in the profiles he studied - "pop singers, songs, makeup" - meant they wouldn't be recommended violent content, he says.
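
A rough sketch of that "cold start" logic, under assumed data structures rather than anything TikTok has published: a new user's first feed is seeded from the interests they declare at sign-up plus what similar-age users nearby engage with, which is how a gendered pattern can emerge even though gender itself is never used.

```python
# Illustrative sketch only - assumed structures and weights, not TikTok's recommender.
# A new user's starting interests are blended from what they picked at sign-up
# and what similar-age users in a similar location tend to engage with.

from collections import Counter

def seed_interests(declared_interests, similar_user_interests, top_n=5):
    """Blend a new user's own picks with the interests of similar users."""
    counts = Counter()
    for interest in declared_interests:
        counts[interest] += 2.0          # hypothetical weight: own picks count double
    for peer in similar_user_interests:
        for interest in peer:
            counts[interest] += 1.0      # what same-age, same-area users engage with
    return [interest for interest, _ in counts.most_common(top_n)]

# A 16-year-old who picked football and gaming, in an area where similar users
# also linger on fight clips, gets fight content seeded "from the get-go".
new_user = ["football", "gaming"]
peers = [["football", "fight clips"], ["gaming", "fight clips"],
         ["fight clips"], ["fight clips", "makeup"]]
print(seed_interests(new_user, peers))
# ['fight clips', 'football', 'gaming', 'makeup']
```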

He says the algorithms use "reinforcement learning" - a method in which artificial intelligence systems learn by trial and error - and train themselves to recognise users' behaviour towards different videos.

Andrew Kang says they're designed to maximize engagement by showing you videos they expect you'll watch for longer, comment on or like — all just to keep you coming back for more.

The algorithm that recommends content for TikTok's "For You" page, he says, doesn't always distinguish harmful from harmless content.
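
As a toy illustration of that point (assumed weights and field names, not TikTok's model): a ranker that scores candidates purely on predicted engagement has no term that distinguishes harmful from harmless content, and a hostile comment raises the score just as a like does.

```python
# Toy sketch, not TikTok's algorithm: rank candidate videos purely by predicted
# engagement. Nothing in the score asks whether the content is harmful, and a
# comment counts the same whether it is praise or disgust.

def engagement_score(video: dict) -> float:
    """Higher means 'more likely to keep the user watching and coming back'."""
    return (
        1.0 * video["predicted_watch_seconds"]
        + 5.0 * video["predicted_likes"]
        + 3.0 * video["predicted_comments"]   # negative interaction still counts
    )

def rank_for_you_feed(candidates: list[dict]) -> list[dict]:
    # No harmfulness term anywhere in the ranking.
    return sorted(candidates, key=engagement_score, reverse=True)

candidates = [
    {"id": "cute_dog",   "predicted_watch_seconds": 8,  "predicted_likes": 2, "predicted_comments": 0},
    {"id": "fight_clip", "predicted_watch_seconds": 25, "predicted_likes": 1, "predicted_comments": 4},
]
print([v["id"] for v in rank_for_you_feed(candidates)])  # ['fight_clip', 'cute_dog']
```

In a set-up like this, Kai's angry comments and accidental likes would both feed the same engagement signal, which is consistent with Kang's point that interaction of any kind can reinforce a recommendation.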

According to Andrew, one of the problems he identified while working at TikTok was that the teams involved in training and coding the algorithm did not always know the exact nature of the videos it was recommending.

"They see the number of viewers, age, trend, that kind of abstract data. They're not really necessarily exposed to that content itself," a former TikTok analyst tells me.

That's why, in 2022, he and a colleague decided to look at what kinds of videos were being recommended to a wider range of users, including 16-year-olds.

He says they were concerned about violent and harmful content being served to some teenagers, and suggested TikTok update its moderation system.

They wanted TikTok to clearly label videos so that everyone working there could see why they were harmful - extreme violence, abuse, pornography and so on - and to hire more moderators specialising in those different areas.

Andrew says their proposals were rejected at the time.

TikTok claims that it had specialized moderators at the time and, as its platform grew, it continued to hire more of them.

It also says it separates different types of harmful content - into what it calls queues - for moderators.




'Asking a tiger not to eat you'

Andrew Kang says that from within TikTok and Meta, it seemed like it was really hard to bring about the changes he felt were necessary.

"When we ask a private company whose interests are to promote its products to moderate itself, it's like asking a tiger not to eat you," he says.

He also says that he thinks the lives of children and teenagers would be better if they stopped using smartphones.

But for Kai, banning teenagers from phones or social media isn't the answer.

His phone is a crucial part of his life - a very important way to talk to friends, find his way around and pay for things.

Instead, he wants the companies that own the social networks to listen more to what teenagers don't want to see.

He wants companies to build tools that allow users to indicate interests more efficiently.

"I feel like the companies that own the social networks don't value your opinion as long as it makes them money," Kai tells me.

In the UK, a new law will force social networks to verify the age of children and prevent sites from recommending pornography or other harmful content to young people.

The British media regulator Ofcom is in charge of implementing it.

Almudena Lara, Ofcom's director of online safety policy development, says that while harmful content that mainly affects young women - such as videos promoting eating disorders and self-harm - has rightly been in the public eye, the algorithmic pathways that push hate and violence mainly towards teenage boys and young men have received less attention.

"Usually, a minority of children are exposed to the most harmful content. But we do know, however, that once you're exposed to harmful content, it becomes inevitable," says Lara.

Ofcom says it can fine companies and prosecute if they don't do enough, but that the measures won't come into effect until 2025.

TikTok claims it uses "innovative technology" and provides "industry-leading" teen safety and privacy settings, including systems to block content that may be inappropriate, and does not allow extreme violence and misogyny.

Meta, which owns Instagram and Facebook, says it has more than "50 different tools, resources and features" that provide teenagers with a "positive, age-appropriate experience."

According to Meta, it seeks feedback from its own teams, and potential policy changes go through tightly controlled processes.

