Amplifying Antisemitism: How Recommender Algorithms Serve Harmful Content to Children

Abstract

This research investigates how recommender algorithms on TikTok and Rumble expose UK minors to antisemitic content.

Analysts created 10 TikTok profiles representing 15-year-old users with varied political and cultural interests: neutral interest in the Israel-Palestine conflict, left- and right-wing political interests, male lifestyle influencer content and far-right content, plus two neutral control accounts. Each profile was primed towards its assigned topics through an hour and a half of manual content viewing, followed by 14 days of content engagement via a bespoke bot, yielding over 5,500 recommended videos. Thematic analysis clustered the content into 10 core themes, revealing pathways from neutral lifestyle content to highly politicised and conspiratorial clusters. Manual review of the relevant themes found that harmful content appeared not only in videos but also in comments and in TikTok’s sticker and sound features, illustrating systemic gaps in safeguarding minors.

On Rumble, analysts collected 4,412 videos from the platform’s “Editor’s Picks” over six months, filtered them against antisemitism-related keywords, and reviewed the 259 videos flagged as potentially relevant. Findings show that Rumble hosts more overtly antisemitic content than TikTok, including slurs, Holocaust distortion and conspiracies about Jewish control. These findings underscore urgent gaps in platform accountability and the need for robust enforcement of the Online Safety Act to protect children from the normalisation and mainstreaming of antisemitic content.

Bibliographic Information

Amplifying Antisemitism: How Recommender Algorithms Serve Harmful Content to Children. Antisemitism Policy Trust and Institute for Strategic Dialogue, 2026. https://archive.jpr.org.uk/object-5531