
The hate algorithm: How social media fuels radicalization

In May, France’s digital development minister called for a European ban on social media for kids under the age of 15 — an attempt to shield them from the toxic influence of algorithm-driven content. The concern isn’t new, but it is growing. Studies show that over the past decade, recommendation algorithms have quietly taken over our digital lives. They decide what we see and what we don’t, and when trying to guess what we might like, they tend to favor content that’s emotionally charged, polarizing, sensational, and often radical. That’s the kind of content far-right communities and politicians tend to produce. And it’s not just a tech quirk — it is reshaping politics. Radical posts get more clicks, more reactions, and more algorithmic love. The result: people end up trapped in tailor-made echo chambers. And according to researchers, that’s not a bug — it’s how the system is designed. These platforms are built to keep people scrolling, sharing, and staying inside tightly sealed ideological bubbles.



Justin Brown-Ramsey, now a history PhD student at Brown University, knows that journey firsthand. Back in 2015, during his high school years, his parents were going through a divorce. Struggling to cope, he started spending more and more time online. That’s when he stumbled across Jordan Peterson — a Canadian psychologist, YouTuber, and right-wing guru of sorts. On screen, Peterson offered something Justin badly needed: empathy, understanding, and what seemed like straightforward advice on how to take control of his life. But wrapped in that message were also conservative talking points about the decline of Western civilization and the evils of “cancel culture.”

The YouTube algorithm quickly picked up on Justin’s interest. Soon, his feed was full of right-wing and far-right bloggers, influencers, and political commentators, and it wasn’t long before this was the only kind of content Justin was seeing. He began spending hours arguing with strangers in the comment sections of #MeToo videos, parroting Peterson’s debate tactics and trying to outwit “the leftists.”

Bit by bit, he drifted away from his friends. Disagreements over racism, trans rights, and other hot-button issues became rifts that couldn’t be patched up. The deeper he sank into his online world, the more isolated he felt in real life. What eventually pulled him out wasn’t a viral post or a change of algorithm — but education. Studying history gave him the tools to step back, reconnect, and start thinking more critically.

Not everyone gets that chance.

Down the rabbit hole: how social media traps users in radical content

Justin’s story is a textbook case of what researchers call the “radicalization conveyor belt,” though some studies and reports give the phenomenon other names: the “alt-right pipeline,” the “rabbit hole.” Whatever the term, the basic idea is simple: someone with unformed or uncertain views starts engaging with content online and is gradually swept up in a current of algorithm-driven recommendations that can shift their entire worldview. This was especially visible during the early months of the COVID-19 pandemic, when anti-vaccine groups saw their follower counts jump significantly. Before long, many of those groups had evolved into full-blown far-right conspiracy communities.

So why do the algorithms behave this way — why do they keep funneling people toward far-right content? After all, Justin could just as easily have been shown videos from other psychologists or youth-focused life coaches, the kind of guidance that drew him to Peterson in the first place. Instead, the recommendation system kept pushing him further and further into hardline right-wing politics.

The answer lies in how these systems are designed. They aren’t built to educate, inform, or balance viewpoints — they’re built to hold our attention and keep us online for as long as possible. And the content that works best for that is the stuff that provokes, outrages, or stirs up emotion. More often than not, that means political content — especially radical, polarizing content.
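That logic is simple enough to caricature in a few lines of code. The sketch below is purely illustrative (the post fields, weights, and function names are all invented here, not any platform’s real system), but it shows why a ranker that optimizes only for predicted clicks, shares, and watch time will push whatever provokes the strongest reaction to the top of the feed:

```python
from dataclasses import dataclass

# Illustrative sketch of an engagement-optimized ranker. Nothing here is
# taken from a real platform; the signals and weights are invented.

@dataclass
class Post:
    id: str
    p_click: float           # predicted probability the user clicks
    p_share: float           # predicted probability the user shares
    expected_seconds: float  # predicted time the user will spend on the post

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares and watch time count for a lot because
    # they keep people on the platform. Nothing rewards accuracy or balance.
    return 1.0 * post.p_click + 3.0 * post.p_share + 0.05 * post.expected_seconds

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The posts predicted to provoke the strongest reaction get the
    # highest scores, so they rise to the top of the feed.
    return sorted(candidates, key=engagement_score, reverse=True)
```

Notice what is missing: there is no term for accuracy, diversity of viewpoints, or the user’s long-term well-being, because none of those feed the objective the system is asked to maximize.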

Different platforms have different patterns, of course. But in the end, whether by design or by accident, many end up boosting right-wing content more than anything else.

A telling example came from inside Twitter itself. Before Elon Musk bought the platform, the company conducted an internal study showing that tweets from right-wing politicians and media outlets were more likely to be amplified in users’ algorithmic timelines than those from left-leaning sources. This trend held true in six out of seven countries studied — including the U.S., U.K., and Canada. In Canada, right-wing tweets were favored by a factor of four.

Another study, released by Global Witness in February 2025, examined what TikTok and Twitter (X) users in Germany were being shown ahead of the recent federal elections. The results were clear: the “For You” feeds on both platforms were flooded with far-right content. On TikTok, 78% of political recommendations pointed to material linked to the far-right AfD party — even though the party’s actual polling numbers were below 20%. On Twitter, 63% of promoted political content also centered on AfD. Platform-level bias is unmistakable. And it can influence public opinion: repeated exposure to “radical content” does, in fact, lead to support for similarly radical ideas.

Another study suggests that TikTok’s “For You” feed isn’t just a neutral mirror passively reflecting user preferences. On the contrary, the researchers who conducted the study argue that the logic of the “radicalization conveyor belt” is baked into the platform’s very design. The rapid spread and virality of radical and far-right content, they say, isn’t an accident — it is a fully predictable outcome given the way the algorithms work, even if such content is not what viewers themselves would actually prefer to see.

In a paper titled Algorithmic Extremism?, Joe Burton, a professor of international security in Lancaster University’s Department of Politics, Philosophy, and Religion, argues that algorithms now play an increasingly central role in politics. Far from being neutral tools, they actively fuel polarization, radicalization, and even political violence. Burton believes the issue isn’t just with social media — it is rooted in the underlying philosophy of how AI and algorithmic systems are designed and deployed.

These algorithms are deliberately built to exploit well-documented vulnerabilities in human psychology. Sociologists and psychologists have long pointed out that social media platforms feed on our cognitive biases — especially confirmation bias, the tendency to seek out and trust information that supports what we already believe.

To keep users engaged, algorithms constantly serve up content that aligns with their existing views. If someone shows interest in far-right politicians, they’ll be recommended more far-right content. If someone indicates skepticism of vaccines, they’ll start seeing anti-vaccine material. Over time, this creates what’s often called a “filter bubble” — an artificially curated space where everyone agrees with everyone else and conflicting viewpoints are filtered out.

According to a widely accepted psychological framework known as the group polarization model, communities where everyone agrees don’t just reinforce their existing beliefs — they push each other toward even more extreme positions. Combined with the mechanics of the radicalization conveyor belt, the result is a disturbing feedback loop: algorithms don’t just reflect extremist ideologies — they deepen them, making it harder for users to break free of false beliefs that everyone around them also accepts as true.
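The feedback loop itself is easy to simulate. In the toy model below, every topic name and number is invented for illustration rather than taken from any real recommender: the system boosts whichever topic the simulated user engaged with, and after a few hundred rounds the feed skews heavily toward a single topic even though that user started with no preference at all:

```python
import random
from collections import Counter

# Toy model of a recommendation feedback loop (illustrative only).
# The recommender keeps a weight per topic and bumps the weight of
# whatever the simulated user just engaged with.

topics = ["sports", "music", "politics", "cooking"]
weights = {t: 1.0 for t in topics}  # the user starts with no preference

def recommend() -> str:
    # Sample a topic in proportion to its current weight.
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

random.seed(42)
shown = Counter()
for _ in range(500):
    topic = recommend()
    shown[topic] += 1
    # The user "engages" with what is shown, and the recommender rewards
    # that engagement, which makes the same topic more likely next time.
    weights[topic] += 0.5

print(shown.most_common())
# The counts typically end up heavily skewed toward whichever topic
# happened to attract engagement early on: a filter bubble built from
# nothing but positive feedback.
```

The lock-in comes from the reward loop itself, not from anything the user asked for, which is why the bubble keeps tightening long after the initial spark of interest.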

Invisible algorithms, real-world violence

Online extremism doesn’t always stay online. The 2022 mass shooting in Buffalo, New York, is a chilling example. According to a report by the state attorney general, algorithm-driven recommendations and delays in removing harmful content played a key role in the shooter’s radicalization — and, ultimately, in the act of violence itself. In the UK, a Bellingcat investigation into far-right influence networks shows how platforms like YouTube and Telegram act as recruitment hubs, channeling new users toward extremist forums and isolated radical communities. A popular Netflix miniseries, Adolescence, explores the life of a teenager caught in a similar cultural and informational bubble.

The shooter arrested shortly after the 2022 Buffalo attack

Research has mapped in detail how seemingly ordinary content can become a gateway to misogyny and extremism. Intelligence agencies like MI5 are paying attention: in the UK, up to 13% of those flagged as potential terrorism risks are under the age of 18.

This corner of the internet has come to be known as the “manosphere,” a loosely connected web of communities that includes men’s rights activists, incels, and pickup artists — with influencers like the Tate brothers or Arsen Markaryan acting as figureheads. Like other radical pipelines, the manosphere often begins with innocent “entry points” such as gaming streams.

A still from the Netflix series Adolescence

A University of Portsmouth study finds that the simplified, stereotypical portrayals of gender — and of people in general — found in many games can create fertile ground for certain influencers, who use gaming culture as a platform to promote anti-feminist or openly misogynistic ideologies. And once again, the algorithms play their role: instead of suggesting more content tied to a user’s gaming interests, they start nudging them toward manosphere-related material.

While prominent voices in the manosphere often claim they’re only concerned with men’s well-being and mental health, the reality can be far darker. A paper published earlier this year in Child and Adolescent Mental Health warns that young people exposed to such content are at greater risk for depression, anxiety, and social withdrawal. The narrative of “male victimhood” popular among online misogynists doesn’t heal — it deepens feelings of alienation and resentment.

Journalists at The Guardian ran an experiment: they created blank accounts on Facebook and Instagram, listing only gender and age (male, 24 years old). Within weeks, both feeds were flooded with sexist memes, posts about “traditional Catholic values,” and hypersexualized images of women.

Algorithms can also help fuel political violence. A Washington Post investigation based on internal Facebook documents — the so-called Facebook Papers — revealed how the platform’s algorithms effectively accelerated the organization and mobilization of those who stormed the U.S. Capitol on January 6, 2021. As early as 2019, Facebook researchers noted that a test profile set up as a “conservative mother from North Carolina” was already being fed QAnon conspiracy content by day five, showing how quickly algorithmic radicalization can take hold.

Facebook also hesitated to act against Stop the Steal groups — named after the slogan used by Trump supporters who claimed that the 2020 election had been rigged against their candidate. When the company finally banned the main group in December 2020, dozens of copycat pages saw explosive growth thanks to automated “super invites” that added up to 500 people at a time — flooding feeds with aggressive calls to violence yet again.

Trump’s algorithm tricks

Even the beneficiaries of algorithmic echo chambers have started complaining about them. Under the banner of fighting “woke AI,” Trump’s second administration turned long-standing concerns about the dangers of algorithmic bubbles into yet another front in the culture war. In early 2025, Congress demanded internal documents from Amazon, Google, Meta, Microsoft, OpenAI, and others, purportedly in an effort to investigate so-called algorithmic bias — accusing the platforms of promoting inclusivity and left-leaning agendas.

At the AI summit in Paris, Vice President J.D. Vance publicly vowed that the current administration would “ensure that AI systems developed in America are free from ideological bias and will never restrict our citizens’ right to free speech.” In reality, this was a response to international and corporate efforts aimed at reducing bias in AI — efforts that would naturally include dialing back the right-wing slant. That, of course, goes against Vance’s interests. For Trump and Vance, efforts to ensure responsible AI and algorithm design aren’t safeguards — they’re threats. And they’re working hard to frame those safeguards as dangers to innovation and free speech.

This mirrors Elon Musk’s complaints before he bought Twitter — claims that the platform was supposedly biased against conservatives, despite all evidence to the contrary. Naturally, after Musk acquired the company, the rightward tilt only intensified: today, openly neo-Nazi content thrives and is actively promoted by the platform.

Back in 2018, Trump accused Google’s search engine of being “rigged” against him and warned that social media platforms were facing “a very serious situation” that his administration would have to address. At the core of this strategy is what Psychology Today dubbed the “Trump algorithm”: any positive information that supports his success is treated as truth, and any negative news is instantly dismissed as fake.

So why don’t platforms change their algorithms? Engagement drives ad revenue — and outrage and radicalization drive engagement. Algorithms based on time spent on-site and click-through rates naturally favor inflammatory content — the sharper, the better. Add to that an owner like Musk with his own political incentive to tilt the platform toward ideologies he favors, and you have a system that resists reform. When extremism becomes profitable, there’s no reason to invest in better moderation. Why fix what’s making money?

What’s next?

The EU’s Digital Services Act requires platforms to disclose how their algorithms work and to undergo independent audits. The law came fully into force in February 2024. Critics say it still lacks transparency, while advocates of stricter regulation propose introducing “algorithm labels” that would reveal key parameters to end users. Others argue the law goes too far and curtails the freedoms of platforms and users alike.

Still, as emphasized in the state attorney general’s report mentioned above, legal reforms are falling behind the rapid evolution of artificial intelligence systems.

Turning the tide will require collective resolve. Platforms must change their incentive structures, researchers need to design effective auditing systems, and lawmakers must craft flexible rules that hold tech companies accountable. Most importantly, users must raise their awareness and understand how algorithms shape the information they consume. Grasping the hidden logic behind digital feeds is the first crucial step.