Search results

Your search found 119 items
Date: 2026
Date: 2026
Abstract: We examine manifestations of online antisemitism in the German-speaking world using tweets about Jews and Israel from the years 2019–2022. The manually annotated random samples, totalling more than 8,000 tweets, shed light on how Jewish life and Israel were talked about on social media in the German-speaking world before 7 October 2023.

Although only a small share of the comments, 312 messages or roughly four per cent, were antisemitic according to the IHRA definition of antisemitism, they display a wide range of forms of antisemitism. It thus becomes visible that many of the accusations made against Israel after 7 October 2023 were already present before that date.

But the posts labelled as not antisemitic also reflect many different aspects of, and perspectives on, how Jewish life and antisemitism are talked about in Germany. One topic was the Shoah. Here, sometimes questionable comparisons were drawn, for instance between the persecution of Jews under National Socialism and contemporary issues; examples include public criticism of people opposing measures to contain the COVID-19 pandemic, the perceived discrimination of Muslims or AfD sympathisers, and the suffering of Palestinians. Another topic was antisemitism and its condemnation, usually in general terms but occasionally also with reference to a specific statement or act. A central finding of the study is that most online discourses using the terms “Jews” or “Israel” engaged in some form with antisemitism, past or present, while the everyday life of Jews and Israelis played a subordinate role.
Author(s): Burchett, Claire
Date: 2025
Abstract: With the now-established visibility and electoral success of the contemporary populist radical right (PRR) in Western Europe, existing literature has examined these parties’ refutation of antisemitism in parallel to their continued allusion to antisemitic tropes, to greater and lesser extents. This PhD thesis brings these two strands of literature together in a three-country, three-party, and two-platform analysis of the Facebook and X posts of the Freedom Party of Austria (FPÖ), the National Rally (RN) in France, and the Alternative for Germany (AfD) between 2017 and early 2023. First, this thesis applies elements of discourse-historical analysis and of populist “style” to social media data in a novel way to contribute a framework of when Jewish inclusion and exclusion are acceptable to the parties. It demonstrates that the parties construct their ingroups as “victims”, and that Jews are included when this is strategically conducive or when Jewish victimhood does not threaten that of the non-Jewish majority. Second, while existing literature on the PRR’s framing of Jews, Israel, and antisemitism has predominantly focused on party output, this thesis uses mixed methods, combining Natural Language Processing (NLP) tools with inductive qualitative analysis, to analyse the comments by users who engage with the parties’ posts. It contributes a novel framework of user victimhood, showing that users are not able to form a common identity with Jews when they see Jews as an Other (rejective), see Jewish victimhood as competing with their own (competitive), and perceive Jewish victimhood as an accusation of antisemitism (defensive). Despite this, a third contribution of this research is an examination of user responses to antisemitic code words, such as “globalists”, and a conclusion that only rarely are these overtly understood and escalated by users.
The thesis thus provides both empirical and methodological contributions to scholarship on the PRR: combining influences from psychology, political science, and history, and applying mixed methods in an original way to deepen and widen understanding of both the parties and users, and examining how the strategy of (anti-)antisemitism fits into broader processes of PRR mainstreaming.
Date: 2024
Date: 2023
Abstract: The social media landscape is ever-changing as is its relationship to Holocaust memory and education. In the earlier days of Facebook and Twitter’s dominance, there was a clear divide of opinions in the Holocaust sector. On one hand, some institutions were early adopters (notably the Auschwitz State Museum) and others experimented with the affordances of these platforms such as the team at Grodzka Gate, Lublin extending the analogue practice of school pupils sending letters to child Holocaust victim Henio Zytomirski onto Facebook and the United States Holocaust Memorial Museum’s ‘tweet-up’ hybrid architecture tour. On the other hand, expressions of hesitance about these participatory spaces informed the need for the International Holocaust Remembrance Alliance’s Education Working Group to establish guidelines for using social media in this context (2014).

As practice grew, it also became somewhat formalised, with most organisations predominantly focusing on Twitter, Facebook and Instagram for public engagement work, and with most content presenting traditional curation of historical sources with additional narrative, promoting the organisation’s offline (or elsewhere online) work, or offering behind-the-scenes access to curator and educator experiences. While one of the celebrated potentials of social media is their ability to help organisations reach wider (global) audiences, little has changed online since Eva Pfanzelter’s (2014) claim that the Holocaust institutions that dominated offline also dominate on social media platforms. Few others attract much engagement with their posts.

TikTok has brought both new opportunities and challenges for the Holocaust sector – organisations and individuals who have taken to creating content on the platform are seeing far greater engagement than they had on previous ones. Yet, TikTok is also one of the most data-invasive and opaque platforms regarding researcher access. Many also encounter far more Holocaust denial, distortion and trivialisation on this platform. However, the social media landscape is also far larger than the Holocaust sector has really acknowledged and much of the coded hate content that appears on mainstream platforms has been cultivated at scale on others, from 8Chan to Telegram, and gaming and VR social spaces. It is imperative therefore that we bring together a wide range of stakeholders and experts to discuss what the sector needs to move forward with its work on social media. If Holocaust memory and education is to remain visible in the ever-expanding digital world, then it must be visible across a variety of digital spaces.

This report serves as an important first step in this work. It was created as part of the research project ‘Participatory Workshops – Co-Designing Standards for Digital Interventions in Holocaust Memory and Education’, which is one thread of the larger Digital Holocaust Memory Project at the University of Sussex.

The participatory workshops project has focused on six themes, each of which brought together a different range of expertise to discuss current challenges and consider possible recommendations for the future. The themes were:

AI and machine learning
Digitising material evidence
Recording, recirculating and remixing testimony
Social media
Virtual memoryscapes
Computer games
Date: 2021
Abstract: Conspiracy fantasy or – to use the more common but less accurately descriptive term – ‘conspiracy theory’ is an enduring genre of discourse historically associated with authoritarian political movements. This article presents a literature review of research on conspiracy fantasy as well as two empirical studies of YouTube videos by three leading conspiracy fantasists. Two of these fantasists have been linked to the far right, while one maintains connections to figures on the far right and the far left. The first study employs content analysis of the 10 most popular videos uploaded by each of the three, and the second employs corpus analysis of keywords in comments posted on all videos uploaded by the three fantasists. Jewish-related entities such as Israel, Zionists and the Rothschild family are found to be among the entities most frequently accused of conspiracy in the videos. Conspiracy accusations against other Western nations (especially the United States and the United Kingdom), as well as their leaders and their media, were also common. Jewish-related lexical items such as ‘Zionist’, ‘Zionists’, ‘Rothschild’ and ‘Jews’ are found to be mentioned with disproportionate frequency in user comments. These findings would appear to reflect the conspiracy fantasy genre’s continuing proximity to its roots in the European antisemitic tradition and add weight to existing findings suggesting that the active YouTube audience responds to latently antisemitic content with more explicitly antisemitic comments.
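The corpus-analysis step this abstract describes, checking whether lexical items such as “Zionist” occur with disproportionate frequency in user comments, can be illustrated with a minimal sketch. The tiny token lists below are invented; a real keyword analysis would use full corpora and a significance test such as log-likelihood rather than a raw rate comparison.

```python
# Sketch of keyword frequency comparison between a comment corpus and a
# reference corpus. Token lists are invented for illustration only.

def per_thousand(term, tokens):
    """Occurrences of `term` per 1,000 tokens."""
    return 1000 * tokens.count(term) / len(tokens)

comment_tokens = "the zionist bankers and the zionist media control the world".split()
reference_tokens = "the markets and the media reported on the world news".split()

comment_rate = per_thousand("zionist", comment_tokens)      # 200.0 per 1,000
reference_rate = per_thousand("zionist", reference_tokens)  # 0.0 per 1,000
```

A rate far above the reference corpus, as here, is what the study means by “disproportionate frequency”; corpus tools typically then rank such keywords by statistical significance.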
Date: 2024
Date: 2024
Abstract: The World Wide Web (WWW) and digitisation have become important sites and tools for the history of the Holocaust and its commemoration. Today, some memory institutions use the Internet at a highly professional level as a venue for self-presentation and as a forum for the discussion of Holocaust-related topics for potentially international, transcultural and interdisciplinary user groups. At the same time, it is not always the established institutions that make the fullest use of the technical possibilities and potential of the Internet. Creative and sometimes controversial new forms of storytelling of the Holocaust, or more traditional ways of remembering the genocide presented in a new way with digital media, often come from people or groups outside the sphere of influence of the large memorial sites, museums and archives. Such "private" stagings have experienced a particular upswing since the boom of social media. This democratisation of Holocaust memory and history is crucial, though it remains undecided how much it will ultimately reinforce old structures and cultural, regional or other inequalities, or reinvent them.

The “Digital space” as an arbitrary and limitless archive for the mediation of the Holocaust spanning from Russia to Brazil is at the centre of the essays collected in this volume. This space is also considered as a forum for negotiation, a meeting place and a battleground for generations and stories and as such offers the opportunity to reconsider the transgenerational transmission of trauma, family histories and communication. Here it becomes evident: there are new societal intentions and decision-making structures that exceed the capabilities of traditional mass media and thrive on the participation of a broad public.
Date: 2025
Date: 2021
Author(s): Manca, Stefania
Date: 2022
Date: 2024
Abstract: The second stage of the pilot monitoring, designed to examine the growing volume of hateful content online during the election campaign, yielded a number of important observations.

The European Parliament campaign, the latest in a series of election campaigns held in quick succession, took place around the summer holiday period, which meant lower engagement from both political parties and internet users. Despite this reduced engagement, an increase in hateful content was already noticeable before the campaign formally began, suggesting that the political and social climate remained polarised in the wake of the previous elections to the Sejm and Senate, held on 15 October 2023.

With the formal start of the election campaign, a steady increase in activity on online services was observed, together with a continuing upward trend in hateful content. After the campaign ended, the volume of such content fell significantly.

Analysis of internet users' behaviour during the monitoring revealed that the growth in hateful content spread between different groups, pointing to the dynamic and fluid character of the phenomenon. Hateful content aimed at one minority group often led to the generation of hate against other minority groups. A particularly notable finding is that the rise in antisemitic content correlated with an intensification of anti-Ukrainian and anti-refugee content, suggesting a link between different forms of hate in public discourse.

We invite you to read the report.

Table of contents:

Introduction
Methodology
Study results
Analysis of changes
Antisemitic content
Anti-refugee and anti-Muslim content
Anti-Ukrainian content
Anti-LGBT+ content
Final conclusions
The publication was produced as part of the project "A comprehensive strategy for countering antisemitic hate speech in the public sphere", financed by the Foundation Remembrance, Responsibility and Future (EVZ) and implemented by the Jewish Association Czulent with substantive support from the Center for Research on Prejudice.

This publication does not present the position or opinions of the Foundation Remembrance, Responsibility and Future (EVZ).
Author(s): Shaw, Daniella
Date: 2025
Author(s): Goodman, Simon
Date: 2025
Date: 2024
Abstract: In a six-month study, we set out to check whether and when antisemitic hate content is removed by international and Polish IT services after they receive a report from users requesting its removal. We also wanted to check whether there is a difference between the removal of hateful content reported by ordinary users and that reported by so-called trusted flaggers.

To this end, we conducted a Monitoring and Reporting Exercise (MRE) on the removal of illegal content online, testing the international platforms Facebook, Instagram, YouTube, TikTok and X (formerly Twitter), as well as the Polish intermediary service providers Agora, wp.pl, onet.pl, natemat.pl, dorzeczy.pl and wykop.pl, for their application of national and EU rules requiring the removal of, or disabling of access to, illegal content, including hate speech. The study was carried out just as the new EU regulation on improving safety in the digital space, known as Regulation 2022/2065 or the Digital Services Act, was entering into force.

Table of contents:

Introduction
Glossary
Legal framework
Methodology
Stages of the MRE
Key data
Removal rates for reported content
Conclusions and recommendations
The publication was produced as part of the project Securing our community, protecting our democracy: countering antisemitism through an integrated approach to advocacy and security (the PROTEUS project), co-financed by the European Union.
Date: 2024
Author(s): Peretz, Dekel
Date: 2024
Date: 2024
Abstract: The proliferation of hateful and violent speech in online media underscores the need for technological support to combat such discourse, create safer and more inclusive online environments, support content moderation and study political-discourse dynamics online. Automated detection of antisemitic content has been little explored compared to other forms of hate speech. This chapter examines the automated detection of antisemitic speech in online and social media using a corpus of online comments sourced from various online and social media platforms. The corpus spans a three-year period and encompasses diverse discourse events that were deemed likely to provoke antisemitic reactions. We adopt two approaches. First, we explore the efficacy of Perspective API, a popular content-moderation tool that rates texts in terms of, e.g., toxicity or identity-related attacks, in scoring antisemitic content as toxic. We find that the tool rates a high proportion of antisemitic texts with very low toxicity scores, indicating a potential blind spot for such content. Additionally, Perspective API demonstrates a keyword bias towards words related to Jewish identities, which could result in texts being falsely flagged and removed from platforms. Second, we fine-tune deep learning models to detect antisemitic texts. We show that OpenAI’s GPT-3.5 can be fine-tuned to effectively detect antisemitic speech in our corpus and beyond, with F1 scores above 0.7. We discuss current achievements in this area and point out directions for future work, such as the utilisation of prompt-based models.
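The F1 scores the chapter reports can be made concrete with a small sketch of how F1 is computed for a binary antisemitic/not-antisemitic classifier. The gold labels and predictions below are invented for illustration and are not the authors' data.

```python
# Sketch of the F1 evaluation for a binary antisemitism classifier.
# Gold labels and predictions are invented, not the authors' data.

def f1_score(gold, pred):
    """F1 for the positive class (1 = antisemitic)."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gold = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
score = f1_score(gold, pred)  # 0.75: precision 0.75, recall 0.75
```

F1 balances precision and recall, which matters here because antisemitic comments are a small minority class: plain accuracy would look high even for a classifier that flags nothing.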
Author(s): Vincent, Chloé
Date: 2024
Abstract: Antisemitism often takes implicit forms on social media, which makes it difficult to detect. In many cases, context is essential to recognise and understand the antisemitic meaning of an utterance (Becker et al. 2021, Becker and Troschke 2023, Jikeli et al. 2022a). Previous quantitative work on antisemitism online has focused on independent comments obtained through keyword search (e.g. Jikeli et al. 2019, Jikeli et al. 2022b), ignoring the discussions in which they occurred. Moreover, on social media, discussions are rarely linear: web users can comment on the original post and start a conversation, or reply to earlier comments. This chapter proposes to consider the structure of the comment trees constructed in the online discussion, instead of single comments individually, in an attempt to include context in the study of antisemitism online. The analysis is based on a corpus of 25,412 trees, consisting of 76,075 Facebook comments. The corpus is built from web comments reacting to posts published by mainstream news outlets in three countries: France, Germany, and the UK. The posts are organised into 16 discourse events with a high potential for triggering antisemitic comments. The analysis of the data helps verify whether (1) antisemitic comments come together (are grouped under the same trees), (2) the structure of the trees (length, number of branches) is significant in the emergence of antisemitism, and (3) variations can be found as a function of the countries and the discourse events. This study presents an original way to look at social media data, which has potential for helping identify and moderate antisemitism online. It can specifically advance research in machine learning by allowing models to consider larger segments of text, which is essential for reliable results in artificial intelligence methodology. Finally, it enriches our understanding of social interactions online in general, and hate speech online in particular.
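The comment trees described here can be sketched with a minimal parent-pointer representation, where tree length (depth) and branching are computed from the reply structure. Field names and the toy thread are illustrative, not the chapter's actual schema.

```python
# Parent-pointer sketch of a comment tree: each comment points at its parent
# (the post itself or an earlier comment). Field names are illustrative.
from collections import defaultdict

def build_children(comments):
    """Map each node to the list of its direct replies."""
    children = defaultdict(list)
    for c in comments:
        children[c["parent"]].append(c["id"])
    return children

def depth(children, node="post"):
    """Length of the longest reply chain below `node`."""
    kids = children.get(node, [])
    if not kids:
        return 0
    return 1 + max(depth(children, k) for k in kids)

comments = [
    {"id": "c1", "parent": "post"},  # reply to the post
    {"id": "c2", "parent": "c1"},    # reply to c1
    {"id": "c3", "parent": "c2"},    # reply to c2
    {"id": "c4", "parent": "post"},  # a second branch under the post
]
children = build_children(comments)
tree_depth = depth(children)      # 3: post -> c1 -> c2 -> c3
branches = len(children["post"])  # 2 direct replies to the post
```

Grouping comments by tree rather than treating them independently is what lets the analysis ask whether antisemitic comments cluster under the same trees and whether longer or bushier trees differ.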
Author(s): Ascone, Laura
Date: 2024
Author(s): Bolton, Matthew
Date: 2024
Abstract: Accusations that Israel has committed, or is in the process of committing, genocide against the Palestinian population of the Middle East are a familiar presence within anti-Israel and anti-Zionist discourse. In the wake of the Hamas attacks of 7 October 2023 and the subsequent Israeli military invasion of Gaza, claims of an Israeli genocide reached new heights, culminating in Israel being accused of genocide by South Africa at the International Court of Justice. Such claims can be made directly or indirectly, via attempts to draw an equivalence between Auschwitz or the Warsaw Ghetto and the current situation in the Palestinian territories. This chapter examines the use of the concept of genocide in social media discussions responding to UK news reports about Israel in the years prior to the 2023 Israel-Hamas war, thereby setting out the pre-existing conditions for its rise to prominence in the response to that war. It provides a historical account of the development of the concept of genocide, showing its interrelation with antisemitism, the Holocaust and the State of Israel. It then shows how accusations of genocide started being made against Israel in the decades following the Holocaust, and argues that such use is often accompanied by analogies between Israel and Nazi Germany and forms of Holocaust distortion. The chapter then qualitatively analyses comments referencing a supposed Israeli genocide posted on the Facebook pages of major British newspapers regarding three Israel-related stories: the May 2021 escalation phase of the Arab-Israeli conflict; the July 2021 announcement that the US ice cream company Ben & Jerry’s would be boycotting Jewish settlements in the West Bank; and the rapid roll-out of the Covid-19 vaccine in Israel from December 2020 to January 2021.
Author(s): Chapelan, Alexis
Date: 2024
Abstract: Social media platforms and the interactive web have had a significant impact on political socialisation, creating new pathways of community-building that shifted the focus from real-life, localised networks (such as unions or neighbourhood associations) to vast, diffuse and globalised communities (Finin et al. 2008, Rainie and Wellman 2012, Olson 2014, Miller 2017). Celebrities or influencers are often focal nodes for the spread of information and opinions across these new types of networks in the digital space (see Hutchins and Tindall 2021). Unfortunately, this means that celebrities’ endorsement of extremist discourse or narratives can potently drive the dissemination and normalisation of hate ideologies.

This paper sets out to analyse the reaction of French social media audiences to antisemitism controversies involving pop culture celebrities. I focus on two such episodes, one centred on a ‘national’ celebrity and the other on a ‘global’ celebrity: the social media ban of the French-Cameroonian comedian Dieudonné M’bala M’bala in June–July 2020 and the controversy following US rapper Kanye West’s spate of antisemitic statements in October–November 2022. The empirical corpus comprises over 4,000 user comments on Facebook, YouTube and Twitter (now X). My methodological approach is two-pronged: a preliminary mapping of the text through content analysis is followed by a qualitative Critical Discourse Analysis that examines the linguistic strategies and discursive constructions employed by social media users to legitimise antisemitic worldviews. I lay particular emphasis on the manner in which memes, dog-whistling or coded language (such as allusions or inside jokes popular within certain communities or fandoms) are used not only to convey antisemitic meaning covertly but also to build a specific form of counter-cultural solidarity. This solidarity expresses itself in the form of “deviant communities” (see Proust et al. 2020) based on the performative and deliberate transgression of societal taboos and norms.
Author(s): Placzynta, Karolina
Date: 2024
Abstract: Despite the benefits of the intersectional approach to antisemitism studies, it seems to have been given little attention so far. This chapter compares the online reactions to two UK news stories, both centred around the common theme of a cultural boycott of Israel in support of the BDS movement, and both with a well-known female figure at the centre of media coverage, only one of whom identifies as Jewish. In the case of British television presenter Rachel Riley, a person is attacked for being female as well as Jewish, with misogyny compounding the antisemitic commentary. In the case of the Irish writer Sally Rooney, misogynistic discourse is used to strengthen the message countering antisemitism. The contrastive analysis of the two datasets, with references to similar analyses of media stories centred around well-known men, illuminates the relationships between the two forms of hate, revealing that, even where the antisemitic attitudes overlap, misogynistic insults and disempowering or undermining language are weaponised on both sides of the debate, with Riley additionally characterised as a “grifter” and Rooney as “naive”.

More research comparing discourses around Jewish and non-Jewish women is needed to ascertain whether this pattern is consistent; meanwhile, the many analogies in the abuse suffered by both groups can perhaps serve a useful purpose: shared struggles can foster the understanding needed to then notice the particularised prejudice. By including more than one hate ideology in the research design, intersectionality offers exciting new approaches to studies of antisemitism and, more broadly, of hate speech or discrimination.
Date: 2022
Author(s): Cambruzzi, Murilo
Date: 2024
Abstract: The EU-funded RELATION – RESEARCH, KNOWLEDGE & EDUCATION AGAINST ANTISEMITISM project (https://www.relationproject.eu) aims to define an innovative strategy that starts from better knowledge of Jewish history and traditions as part of our common history and traditions, and puts in place a set of educational activities in Belgium, Italy, Romania and Spain, as well as online actions, in order to tackle the phenomenon.

The project activities include monitoring the phenomenon of antisemitism online in the four countries of the project (Belgium, Italy, Romania and Spain) through cross-country web monitoring of illegal antisemitic hate speech.

The shadow monitoring exercises aim at:
● Analyzing the removal rate of illegal antisemitic hate speech available on diverse social media platforms signatory to the Code of Conduct on countering illegal hate speech online, namely Facebook, Twitter, YouTube and TikTok.
● Analyzing the types of content and narratives collected by the research team.

Partner organizations focused on their country's language: French for Belgium, Italian, Romanian and Spanish. Four organizations from four different countries (Belgium, Italy, Spain and Romania) took part in the monitoring exercise: Comunitat Jueva Bet Shalom De Catalunya (Bet Shalom, Spain), CEJI - A Jewish Contribution to an Inclusive Europe (Belgium), Fondazione Centro Di Documentazione Ebraica Contemporanea (CDEC, Italy), Intercultural Institute Timișoara (IIT, Romania).

The monitoring exercise follows the definition of illegal hate speech in Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law, and national laws transposing it: “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”

The content was collected and reported to social media platforms in three rounds between October 2022 and October 2023. Content was checked for removal after about a week, to give the platforms enough time to analyze and remove it. The monitoring exercises devote particular attention to the intersection of antisemitism and sexism.
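The removal rate such exercises report can be sketched as a simple share of reported items taken down by the time of the follow-up check. The per-platform counts below are hypothetical, not the project's data.

```python
# Sketch of the removal-rate figure reported by monitoring exercises:
# the share of reported items taken down by the follow-up check.
# All counts below are hypothetical.

def removal_rate(reported, removed):
    """Percentage of reported items removed, rounded to one decimal."""
    if reported == 0:
        return 0.0
    return round(100 * removed / reported, 1)

# hypothetical per-platform tallies from one reporting round
tallies = {"Facebook": (40, 18), "Twitter": (35, 7), "YouTube": (25, 16)}
rates = {platform: removal_rate(*counts) for platform, counts in tallies.items()}
# e.g. rates["Facebook"] == 45.0
```

Comparing such rates across platforms, and across reporting channels (ordinary users versus trusted flaggers), is the core quantitative output of this kind of exercise.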
Author(s): Cambruzzi, Murilo
Date: 2023
Abstract: The EU-funded RELATION – RESEARCH, KNOWLEDGE & EDUCATION AGAINST ANTISEMITISM project (https://www.relationproject.eu) aims to define an innovative strategy that starts from better knowledge of Jewish history and traditions as part of our common history and traditions, and puts in place a set of educational activities in Belgium, Italy, Romania and Spain, as well as online actions, in order to tackle the phenomenon.

The project activities include monitoring the phenomenon of antisemitism online in the four countries of the project (Belgium, Italy, Romania and Spain) through cross-country web monitoring of illegal antisemitic hate speech.

The monitoring exercises aim at:
● Analyzing the removal rate of illegal antisemitic hate speech available on diverse social media platforms, namely Facebook, Twitter, YouTube and TikTok.
● Analyzing the types of content and narratives collected by the research team.

Partner organizations focused on their country's language: French in Belgium, Italian, Romanian and Spanish. Four organizations from four different countries (Belgium, Italy, Spain and Romania) took part in the monitoring exercise: Comunitat Jueva Bet Shalom De Catalunya (Bet Shalom, Spain), CEJI - A Jewish Contribution to an Inclusive Europe (Belgium), Fondazione Centro Di Documentazione Ebraica Contemporanea (CDEC, Italy), Intercultural Institute Timișoara (IIT, Romania).

The monitoring exercise follows the definition of illegal hate speech in Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law, and national laws transposing it: “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”

The content was collected and reported to social media platforms between April 21st and 22nd, 2023. Content was checked for removal on April 26th, to give the platforms enough time to analyze and remove it. The monitoring exercises devote particular attention to the intersection of antisemitism and sexism.
Date: 2022
Abstract: The EU-funded RELATION – RESEARCH, KNOWLEDGE & EDUCATION AGAINST ANTISEMITISM project (https://www.relationproject.eu) aims to define an innovative strategy that starts from better knowledge of Jewish history and traditions as part of our common history and traditions, and puts in place a set of educational activities in Belgium, Italy, Romania and Spain, as well as online actions, in order to tackle the phenomenon.

The project activities include monitoring the phenomenon of antisemitism online in the four countries of the project (Belgium, Italy, Romania and Spain) through cross-country web monitoring of illegal antisemitic hate speech.

The monitoring exercises aim at:
• Analysing the removal rate of illegal antisemitic hate speech available on diverse social media platforms, namely Facebook, Twitter, YouTube and TikTok.
• Analysing the types of content and narratives collected by the research team.

Partner organisations focused on their country's language: French in Belgium, Italian, Romanian and Spanish. Four organisations from four different countries (Belgium, Italy, Spain and Romania) took part in the monitoring exercise: Comunitat Jueva Bet Shalom De Catalunya (Bet Shalom, Spain), CEJI - A Jewish Contribution to an Inclusive Europe (Belgium), Fondazione Centro Di Documentazione Ebraica Contemporanea (CDEC, Italy), Intercultural Institute Timișoara (IIT, Romania).

The monitoring exercise follows the definition of illegal hate speech in Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law, and national laws transposing it: “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”

The content was collected and reported to social media platforms between October 6th and 7th, 2022. Content was checked for removal on October 12th, to give the platforms enough time to analyse and remove it. The monitoring exercises devote particular attention to the intersection of antisemitism and sexism.
Date: 2020
Abstract: The spread of hate speech and anti-Semitic content has become endemic to social media. Faced with a torrent of violent and offensive content, nations in Europe have begun to take measures to remove such content from social media platforms such as Facebook and Twitter. However, these measures have failed to curtail the spread, and possible impact, of anti-Semitic content. Notably, violence breeds violence, and calls for action against Jewish minorities soon lead to calls for violence against other ethnic or racial minorities. Online anti-Semitism thus drives social tensions and harms social cohesion. The spread of online anti-Semitism also has international ramifications, as conspiracy theories and disinformation campaigns now often focus on WWII and the Holocaust.
On Nov 29, 2019, the Oxford Digital Diplomacy Research Group (DigDiploROx) held a one-day symposium at the European Commission in Brussels. The symposium brought together diplomats, EU officials, academics and civil society organizations to search for new ways to combat the rise in online anti-Semitism. This policy brief offers an overview of the day's discussions, the challenges identified and a set of solutions that may aid nations looking to stem the flow of anti-Semitic content online. Notably, these solutions, or recommendations, are not limited to the realm of anti-Semitism and can help combat all forms of discrimination, hate and bigotry online.

Chief among these recommendations is the need for a multi-stakeholder solution that brings together governments, multilateral organisations, academic institutions, tech companies and NGOs. For the EU itself, there is a need to increase collaboration between units dedicated to fighting online crime, terrorism and anti-Semitism, enabling the EU to share skills, resources and working procedures. Moreover, the EU must adopt technological solutions, such as automation, to identify, flag and remove hateful content as quickly as possible. The EU could also redefine its main activities: rather than combat incitement to violence online, it may attempt to tackle incitement to hate, given that hate metastasises online into calls for violence.

Finally, the EU should deepen its awareness of the potential harm of search engines, which offer access to content that has already been removed by social media companies and serve as a gateway to hateful content. The EU should thus deepen its collaboration with companies such as Google and Yahoo, and not just Facebook or Twitter. It should be noted that social media companies opted not to take part in the symposium, demonstrating that the solution to hate speech and rising anti-Semitism may lie in legislation and not just in collaboration.
The rest of this brief consists of five parts. The first offers an up-to-date analysis of the prevalence of anti-Semitic content online. The second discusses the national and international implications of this prevalence. The third part stresses the need for a multi-stakeholder solution, while the fourth offers an overview of the presentations made at the symposium. The final section includes a set of policy recommendations that should be adopted by the EU and its member states.