Topics: Antisemitism, Antisemitism: Attitude Surveys, Antisemitism: Christian, Antisemitism: Definitions, Antisemitism: Discourse, Antisemitism: Education against, Antisemitism: Far right, Antisemitism: Left-Wing, Antisemitism: Monitoring, Antisemitism: Muslim, Antisemitism: New Antisemitism, Antisemitism: Online, Internet, Jewish Perceptions of Antisemitism, Attitudes to Jews, Anti-Zionism, Israel Criticism, Main Topic: Antisemitism, Methodology, Social Media
Abstract: This open access book is the first comprehensive guide to identifying antisemitism online today, in both its explicit and implicit (or coded) forms. Developed through years of on-the-ground analysis of over 100,000 authentic comments posted by social media users in the UK, France, Germany and beyond, the book introduces and explains the central historical, conceptual and linguistic-semiotic elements of 46 antisemitic concepts, stereotypes and speech acts. The guide was assembled by researchers working on the Decoding Antisemitism project at the Centre for Research on Antisemitism at Technische Universität Berlin, building on existing basic definitions of antisemitism, and drawing on expertise in various fields. Using authentic examples taken from social media over the past four years, it sets out a pioneering step-by-step approach to identifying and categorising antisemitic content, providing guidance on how to recognise a statement as antisemitic or not. This book will be an invaluable tool through which researchers, students, practitioners and social media moderators can learn to recognise contemporary antisemitism online – and the structural aspects of hate speech more generally – in all its breadth and diversity.
Abstract: The proliferation of hateful and violent speech in online media underscores the need for technological support to combat such discourse, create safer and more inclusive online environments, support content moderation and study political-discourse dynamics online. Automated detection of antisemitic content has been little explored compared to other forms of hate speech. This chapter examines the automated detection of antisemitic speech in online and social media using a corpus of online comments sourced from various online and social media platforms. The corpus spans a three-year period and encompasses diverse discourse events that were deemed likely to provoke antisemitic reactions. We adopt two approaches. First, we explore the efficacy of Perspective API, a popular content-moderation tool that rates texts in terms of, e.g., toxicity or identity-related attacks, in scoring antisemitic content as toxic. We find that the tool rates a high proportion of antisemitic texts with very low toxicity scores, indicating a potential blind spot for such content. Additionally, Perspective API demonstrates a keyword bias towards words related to Jewish identities, which could result in texts being falsely flagged and removed from platforms. Second, we fine-tune deep learning models to detect antisemitic texts. We show that OpenAI’s GPT-3.5 can be fine-tuned to effectively detect antisemitic speech in our corpus and beyond, with F1 scores above 0.7. We discuss current achievements in this area and point out directions for future work, such as the utilisation of prompt-based models.
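As a concrete illustration of the first approach, the sketch below shows how a single comment can be submitted to Perspective API and its toxicity-related summary scores read out. It is a minimal sketch, not the chapter's actual pipeline: the API key, the choice of attributes and the example comment are assumptions.

```python
"""Hedged sketch: scoring one comment with Perspective API. The API key,
attribute selection and example text are illustrative assumptions."""
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # assumption: a key obtained via Google Cloud


def toxicity_scores(text: str, lang: str = "en") -> dict:
    """Request TOXICITY and IDENTITY_ATTACK summary scores for one comment."""
    payload = {
        "comment": {"text": text},
        "languages": [lang],
        "requestedAttributes": {"TOXICITY": {}, "IDENTITY_ATTACK": {}},
    }
    response = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=payload)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return {name: attr["summaryScore"]["value"] for name, attr in scores.items()}


# A comment that human coders label antisemitic may still score low on
# toxicity, which is the "blind spot" the chapter reports.
print(toxicity_scores("example comment text"))
```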
Abstract: Antisemitism often takes implicit forms on social media, which makes it difficult to detect. In many cases, context is essential to recognise and understand the antisemitic meaning of an utterance (Becker et al. 2021, Becker and Troschke 2023, Jikeli et al. 2022a). Previous quantitative work on antisemitism online has focused on independent comments obtained through keyword search (e.g. Jikeli et al. 2019, Jikeli et al. 2022b), ignoring the discussions in which they occurred. Moreover, on social media, discussions are rarely linear. Web users can comment on the original post and start a conversation, or reply to earlier web user comments. This chapter proposes to consider the structure of the comment trees constructed in the online discussion, rather than single comments in isolation, in an attempt to include context in the study of antisemitism online. This analysis is based on a corpus of 25,412 trees, consisting of 76,075 Facebook comments. The corpus is built from web comments reacting to posts published by mainstream news outlets in three countries: France, Germany, and the UK. The posts are organised into 16 discourse events, which have a high potential for triggering antisemitic comments. The analysis of the data helps verify whether (1) antisemitic comments come together (are grouped under the same trees), (2) the structure of trees (length, number of branches) is significant in the emergence of antisemitism, and (3) variations can be found as a function of the countries and the discourse events. This study presents an original way to look at social media data, which has potential for helping identify and moderate antisemitism online. In particular, it can advance research in machine learning by making it possible to examine larger segments of text, which is essential for reliable results in artificial intelligence methodology. Finally, it enriches our understanding of social interactions online in general, and of hate speech online in particular.
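To make the tree representation concrete, here is a minimal, hypothetical sketch (not the authors' code; the Comment class and toy data are assumptions) of how a Facebook discussion might be modelled as a comment tree and how the structural features discussed in the chapter (size, depth, number of branches, clustering of antisemitic comments) could be derived.

```python
"""Hypothetical sketch of a Facebook discussion modelled as a comment tree,
with the structural features discussed in the chapter. Data are invented."""
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Comment:
    text: str
    antisemitic: bool = False              # label assigned by human coders
    replies: list[Comment] = field(default_factory=list)


def size(node: Comment) -> int:
    """Total number of comments in the tree rooted at this comment."""
    return 1 + sum(size(r) for r in node.replies)


def depth(node: Comment) -> int:
    """Length of the longest reply chain, counting the root."""
    return 1 + max((depth(r) for r in node.replies), default=0)


def branches(node: Comment) -> int:
    """Number of direct reply branches under this comment."""
    return len(node.replies)


def antisemitic_count(node: Comment) -> int:
    """How many comments in the tree were coded as antisemitic."""
    return int(node.antisemitic) + sum(antisemitic_count(r) for r in node.replies)


# Toy tree: a root comment with two reply branches, one of them antisemitic.
tree = Comment("root comment", replies=[
    Comment("first reply", antisemitic=True, replies=[Comment("follow-up reply")]),
    Comment("second reply"),
])
print(size(tree), depth(tree), branches(tree), antisemitic_count(tree))  # 4 3 2 1
```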
Abstract: Social media platforms and the interactive web have had a significant impact on political socialisation, creating new pathways of community-building that shifted the focus from real-life, localised networks (such as unions or neighbourhood associations) to vast, diffuse and globalised communities (Finin et al. 2008, Rainie and Wellman 2012, Olson 2014, Miller 2017). Celebrities or influencers are often focal nodes for the spread of information and opinions across these new types of networks in the digital space (see Hutchins and Tindall 2021). Unfortunately, this means that celebrities’ endorsement of extremist discourse or narratives can potently drive the dissemination and normalisation of hate ideologies.
This paper sets out to analyse the reaction of French social media audiences to antisemitism controversies involving pop culture celebrities. I will focus on two such episodes, one with a ‘national’ celebrity at its centre and the other a ‘global’ celebrity: the social media ban of the French-Cameroonian comedian Dieudonné M’bala M’bala in June–July 2020 and the controversy following US rapper Kanye West’s spate of antisemitic statements in October–November 2022. The empirical corpus comprises over 4,000 user comments on Facebook, YouTube and Twitter (now X). My methodological approach is two-pronged: a preliminary mapping of the text through content analysis is followed by a qualitative Critical Discourse Analysis that examines linguistic strategies and discursive constructions employed by social media users to legitimise antisemitic worldviews. I lay particular emphasis on the manner in which memes, dog-whistling or coded language (such as allusions or inside jokes popular within certain communities or fandoms) are used not only to convey antisemitic meaning covertly but also to build a specific form of counter-cultural solidarity. This solidarity expresses itself in the form of “deviant communities” (see Proust et al. 2020) based on the performative and deliberate transgression of societal taboos and norms.
Abstract: Despite the benefits of the intersectional approach to antisemitism studies, it seems to have been given little attention so far. This chapter compares the online reactions to two UK news stories, both centred around the common theme of cultural boycott of Israel in support of the BDS movement, both with a well-known female figure at the centre of media coverage, only one of whom identifies as Jewish. In the case of British television presenter Rachel Riley, a person is attacked for being female as well as Jewish, with misogyny compounding the antisemitic commentary. In the case of the Irish writer Sally Rooney, misogynistic discourse is used to strengthen the message countering antisemitism. The contrastive analysis of the two datasets, with references to similar analyses of media stories centred around well-known men, illuminates the relationships between the two forms of hate, revealing that—even where the antisemitic attitudes overlap—misogynistic insults and disempowering or undermining language are being weaponised on both sides of the debate, with additional characterisation of Riley as a “grifter” and Rooney as “naive”.
More research comparing discourses around Jewish and non-Jewish women is needed to ascertain whether this pattern is consistent; meanwhile, the many analogies in the abuse suffered by both groups can perhaps serve a useful purpose: shared struggles can foster the understanding needed to then notice the particularised prejudice. By including more than one hate ideology in the research design, intersectionality offers exciting new approaches to studies of antisemitism and, more broadly, of hate speech or discrimination.
Abstract: The EU-funded RELATION – RESEARCH, KNOWLEDGE & EDUCATION AGAINST ANTISEMITISM project (https://www.relationproject.eu) aims to define an innovative strategy that starts from a better knowledge of Jewish history/traditions as part of the common history/traditions, and puts in place a set of educational activities in Belgium, Italy, Romania and Spain, as well as online actions, in order to tackle the phenomenon.
The project activities include monitoring the antisemitism phenomenon online in the four project countries (Belgium, Italy, Romania and Spain) through a cross-country web-monitoring exercise on illegal antisemitic hate speech.
The shadow monitoring exercises aim at:
● Analyzing the removal rate of illegal antisemitic hate speech available on diverse Social Media Platforms signatory to the Code of Conduct on countering illegal hate speech online, namely Facebook, Twitter, YouTube and TikTok.
● Analyzing the types of content and narratives collected by the research team.
Partner organizations focused on their country's language: French for Belgium, Italian, Romanian and Spanish. Four organizations from four different countries (Belgium, Italy, Spain and Romania) took part in the monitoring exercise: Comunitat Jueva Bet Shalom De Catalunya (Bet Shalom, Spain), CEJI - A Jewish Contribution to an Inclusive Europe (Belgium), Fondazione Centro Di Documentazione Ebraica Contemporanea (CDEC, Italy), Intercultural Institute Timișoara (IIT, Romania).
The monitoring exercise follows the definition of illegal hate speech, as defined by the Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law and national laws transposing it: “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”
The content was collected and reported to social media platforms in three rounds between October 2022 and October 2023. Content was checked for removal after a week or so, giving social media platforms enough time to analyze and remove it. The monitoring exercises devote particular attention to the intersection of antisemitism and sexism.
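The removal-rate analysis described above reduces to a simple per-platform calculation; the toy sketch below shows one way it could be computed. The field names and data are invented for illustration and are not taken from the project.

```python
"""Toy sketch of the removal-rate calculation behind the monitoring exercise:
each reported item records its platform and whether it was gone at the
follow-up check. Field names and data are invented for illustration."""
from collections import Counter

reports = [
    {"platform": "Facebook", "removed": True},
    {"platform": "Facebook", "removed": False},
    {"platform": "Twitter", "removed": False},
    {"platform": "YouTube", "removed": True},
    {"platform": "TikTok", "removed": False},
]

totals, removed = Counter(), Counter()
for report in reports:
    totals[report["platform"]] += 1
    removed[report["platform"]] += report["removed"]

for platform in totals:
    rate = removed[platform] / totals[platform]
    print(f"{platform}: {rate:.0%} of reported items removed")
```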
Abstract: The EU-funded RELATION – RESEARCH, KNOWLEDGE & EDUCATION AGAINST ANTISEMITISM project (https://www.relationproject.eu) aims to define an innovative strategy that starts from a better knowledge of Jewish history/traditions as part of the common history/traditions, and puts in place a set of educational activities in Belgium, Italy, Romania and Spain, as well as online actions, in order to tackle the phenomenon.
The project activities include monitoring the antisemitism phenomenon online in the four project countries (Belgium, Italy, Romania and Spain) through a cross-country web-monitoring exercise on illegal antisemitic hate speech.
The monitoring exercises aim at:
● Analyzing the removal rate of illegal antisemitic hate speech available on diverse Social Media Platforms, namely Facebook, Twitter, YouTube and TikTok.
● Analyzing the types of content and narratives collected by the research team.
Partner organizations focused on their country's language: French in Belgium, Italian, Romanian and Spanish. Four organizations from four different countries (Belgium, Italy, Spain and Romania) took part in the monitoring exercise: Comunitat Jueva Bet Shalom De Catalunya (Bet Shalom, Spain), CEJI - A Jewish Contribution to an Inclusive Europe (Belgium), Fondazione Centro Di Documentazione Ebraica Contemporanea (CDEC, Italy), Intercultural Institute Timișoara (IIT, Romania).
The monitoring exercise follows the definition of illegal hate speech, as defined by the Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law and national laws transposing it: “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”
The content was collected and reported to social media platforms between April 21st and 22nd, 2023. Content was checked for removal on April 26th, giving social media platforms enough time to analyze and remove it. The monitoring exercises devote particular attention to the intersection of antisemitism and sexism.
Abstract: The EU-funded RELATION – RESEARCH, KNOWLEDGE & EDUCATION AGAINST ANTISEMITISM project (https://www.relationproject.eu) aims to define an innovative strategy that starts from a better knowledge of Jewish history/traditions as part of the common history/traditions, and puts in place a set of educational activities in Belgium, Italy, Romania and Spain, as well as online actions, in order to tackle the phenomenon.
The project activities include monitoring the antisemitism phenomenon online in the four project countries (Belgium, Italy, Romania and Spain) through a cross-country web-monitoring exercise on illegal antisemitic hate speech.
The monitoring exercises aim at:
• Analysing the removal rate of illegal antisemitic hate speech available on diverse Social Media Platforms, namely Facebook, Twitter, YouTube and TikTok.
• Analysing the types of content and narratives collected by the research team.
Partner organisations focused on their country's language: French in Belgium, Italian, Romanian and Spanish. Four organisations from four different countries (Belgium, Italy, Spain and Romania) took part in the monitoring exercise: Comunitat Jueva Bet Shalom De Catalunya (Bet Shalom, Spain), CEJI - A Jewish Contribution to an Inclusive Europe (Belgium), Fondazione Centro Di Documentazione Ebraica Contemporanea (CDEC, Italy), Intercultural Institute Timișoara (IIT, Romania).
The monitoring exercise follows the definition of illegal hate speech, as defined by the Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law and national laws transposing it: “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”
The content was collected and reported to social media platforms between October 6th and 7th, 2022. Content was checked for removal on October 12th, giving social media platforms enough time to analyse and remove it. The monitoring exercises devote particular attention to the intersection of antisemitism and sexism.
Abstract: The spread of hate speech and anti-Semitic content has become endemic to social media. Faced
with a torrent of violent and offensive content, nations in Europe have begun to take measures to
remove such content from social media platforms such as Facebook and Twitter. However, these
measures have failed to curtail the spread, and possible impact, of anti-Semitic content. Notably,
violence breeds violence and calls for action against Jewish minorities soon lead to calls for
violence against other ethnic or racial minorities. Online anti-Semitism thus drives social tensions
and harms social cohesion. Yet the spread of online anti-Semitism also has international
ramifications as conspiracy theories and disinformation campaigns now often focus on WWII and
the Holocaust.
On Nov 29, 2019, the Oxford Digital Diplomacy Research Group (DigDiploROx) held a one-day
symposium at the European Commission in Brussels. The symposium brought together diplomats,
EU officials, academics and civil society organizations in order to search for new ways to combat
the rise in online anti-Semitism. This policy brief offers an overview of the day’s discussions, the
challenges identified and a set of solutions that may aid nations looking to stem the flow of anti-Semitic content online. Notably, these solutions, or recommendations, are not limited to the realm
of anti-Semitism and can help combat all forms of discrimination, hate and bigotry online.
Chief among these recommendations is the need for a multi-stakeholder solution that brings
together governments, multilateral organisations, academic institutions, tech companies and
NGOs. For the EU itself, there is a need to increase collaborations between units dedicated to
fighting online crime, terrorism and anti-Semitism. This would enable the EU to share skills,
resources and working procedures. Moreover, the EU must adopt technological solutions, such as
automation, to identify, flag and remove hateful content in the quickest way possible. The EU
could also redefine its main activities - rather than combat incitement to violence online, it may
attempt to tackle incitement to hate, given that hate metastasises online into calls for violence.
Finally, the EU should deepen its awareness of the potential harm of search engines. These offer
access to content that has already been removed by social media companies. Moreover, search
engines serve as a gateway to hateful content. The EU should thus deepen its collaborations with
companies such as Google and Yahoo, and not just Facebook or Twitter. It should be noted that
social media companies opted not to take part in the symposium, demonstrating that the solution
to hate speech and rising anti-Semitism may be in legislation and not just in collaboration.
The rest of this brief consists of five parts. The first offers an up-to-date analysis of the prevalence
of anti-Semitic content online. The second, discuss the national and international implications of
this prevalence. The third part stresses the need for a multi-stakeholder solution while the fourth
offers an overview of the presentations made at the symposium. The final section includes a set
of policy recommendations that should be adopted by the EU and its member states.
Abstract: Amidst the Covid-19 pandemic, antisemitic scapegoating has surfaced, giving ammunition to antisemites and extremists looking for someone to blame. Online, memes have been circulating espousing antisemitism, whilst offline, several public figures and others in the public eye have alluded to Jews being the cause of the pandemic. Blame and scapegoating of Jews is not new, and it didn’t take long for antisemitism to mutate. Concerning coronavirus, Jews have not been the primary target for hatred. Anti-Chinese messages are being shared online, with references to the “Chinese flu”, the “Wuhan virus” and the “kung flu”. This collective blame leading to denigration
of, and attacks on, people of Chinese descent, is reminiscent of the collective blame in antisemitic conspiracy theories. This briefing highlights several examples of scapegoating of Jews for earlier global pandemics and addresses Covid-related antisemitism in the United Kingdom and globally.
Abstract: Developments in Artificial Intelligence (AI) are prompting governments across the globe, and experts from across multiple sectors, to future-proof society. In the UK, Ministers have published a discussion paper on the capabilities, opportunities and risks presented by frontier artificial intelligence. The document outlines that whilst AI has many benefits, it can act as a simple, accessible and cheap tool for the dissemination of disinformation, and could be misused by terrorists to enhance their capabilities. The document warns that AI technology will become so advanced and realistic that it will be nearly impossible to distinguish deep fakes and other fake content from real content. AI could also be used to incite violence and reduce people’s trust in true information.
It is clear that mitigating risks from AI will become the next great challenge for governments, and for society.
Of all the possible risks, the Antisemitism Policy Trust is focused on the development of systems that facilitate
the promotion, amplification and sophistication of discriminatory and racist content, that is, material that can incite hatred of, and harm to, Jewish people.
This briefing explores how AI can be used to spread antisemitism. It also shows that AI can offer benefits
in combating antisemitism online and discusses ways to mitigate the risks of AI in relation to anti-Jewish
racism. We set out our recommendations for action, including the development of system risk assessments,
transparency and penalties for any failure to act.
Abstract: As the ethical barriers surrounding ‘digital Holocaust etiquette’ remain contested, scholars like Daniel Magilow and Lisa Silverman question whether there can be unwritten rules of behavior at sites of historical trauma. Because of
significant shifts in the digital arena, too, legacy types of memory formation, such as collective memories associated with physical spaces, are being challenged by a new type of digital archive that is both active and passive. This article seeks to interrogate the socio-psychological aspects of selfies taken at Holocaust memorial sites and of their subsequent shaming. We wish to juxtapose current research findings with the public audience’s reaction to these photos after they have been posted on social media. In many respects, commenters may offer insight into a larger phenomenon outside of what is deemed appropriate in terms of Holocaust memory. Our article may not provide solutions or easy answers, but this is not our goal. Rather, our research aims to point to the complex, often
uncomfortable, nature of this topic due to the fact that selfies encapsulate both micro and macro histories, reality and virtual reality, and a shift in traditional types of memory formation.
Abstract: Reflecting on the months since the recent October 7 attack, rarely has the theme of Holocaust Memorial Day 2024, ‘The Fragility of Freedom’, felt so poignant. Communities globally experienced the shattering of presumed security, and antisemitic incidents spiked in response.
Antisemitism rose across both mainstream and fringe social media platforms, and communities reported a resulting rise in insecurity and fear. CCOA constituent countries have recorded significant rises in antisemitic incidents, including an immediate 240% increase in Germany, a three-fold rise in France, and a marked increase in Italy.
The antisemitism landscape, including Holocaust denial and distortion, has shifted so drastically since October 7 that previous assumptions and understandings now demand re-examination. In the run-up to Holocaust Memorial Day 2024, this research compilation by members of the Coalition to Counter Online Antisemitism offers a vital contemporary examination of the current and emergent issues surrounding Holocaust denial and distortion online. As unique forms of antisemitism, denial and distortion are tools of historical revisionism which specifically target Jews, eroding Jewish experience and threatening democracy.
Across different geographies and knowledge fields, this compilation unites experts around the central and sustained proliferation of Holocaust denial and distortion on social media.
Abstract: This chapter introduces the notion of ‘enabling concepts’: concepts which may or may not themselves constitute a mode of hate speech, but which through their broad social acceptability facilitate or legitimate the articulation of concepts which can be more directly classed as hate speech. We argue that each distinct hate ideology will contain its own, partly overlapping set of ‘enabling concepts.’ In this chapter, we will focus on the enabling role of references to apartheid for the constitution of antisemitism in British online discourse around Israel. This argument does not rest on agreement as to whether the ‘apartheid analogy’—comparisons between contemporary Israel and the former Apartheid regime in South Africa—itself constitutes a form of antisemitism. The chapter draws on qualitative analysis of more than 10,000 user comments posted on social media profiles of mainstream media in the UK, undertaken by the Decoding Antisemitism project in the wake of the May 2021 escalation phase of the Arab-Israeli conflict. We will show how web commenters frequently use the apartheid analogy to trigger more extreme antisemitic stereotypes, including age-old tropes, intensifying and distorting analogies (such as Nazi comparisons) or calls for Israel’s elimination. The results will be presented in detail based on a pragmalinguistic approach taking into account the immediate context of the comment thread and broader world knowledge. Both of these aspects are relevant preconditions for examining all forms of antisemitic hate speech that can remain undetected when conducting solely statistical analysis. Based on this large dataset, we suggest that—under the cover of its widespread social acceptability—the apartheid analogy thus facilitates the articulation and legitimation of extreme antisemitic concepts that would, without this prior legitimation, be more likely to be rejected or countered.
Abstract: Over the past 3.5 years, the Decoding Antisemitism research project has been analysing antisemitism on the internet in terms of content, structure, and frequency. Over this time, there has been no shortage of flashpoints which have generated antisemitic responses. Yet the online response to the Hamas attacks of 7 October and the subsequent Israeli operations in Gaza has surpassed anything the project has witnessed before. In no preceding escalation phase of the Arab-Israeli conflict has the predominant antisemitic reaction been one of open jubilation and joy over the deaths of Israeli Jews. As demonstrated in the sixth and final Discourse Report, this explicit approval of the Hamas attacks was the primary response from web users. The response to 7 October therefore represents a turning point in antisemitic online discourse, and its repercussions will be felt long into the future.
The report contains analysis of the various stages of online reactions to events in the Middle East, from the immediate aftermath to the Israeli retaliations and subsequent accusations of genocide against Israel. As well as examining online reactions in the project’s core focus—the United Kingdom, France, and Germany—this report also, for the first time, extends its view to analyse Israel-related web discourses in six further countries, including those in Southern and Eastern Europe as well as in North Africa. Alongside reactions to the escalation phase, the report also examines online responses to billionaire Elon Musk’s explosive comments about Jewish individuals and institutions.
Additionally, the report provides a retrospective overview of the project’s development over the past 3.5 years, tracking its successes and challenges, particularly regarding the conditions for successful interdisciplinary work and the ability of machine learning to capture the versatility and complexity of authentic web communication.
To mark the publication of the report, we are also sharing our new, interactive data visualisation tool, which lets you examine any two discourse events analysed by our research team between 2021 and 2023. You can compare the frequencies and co-occurrences of antisemitic concepts and speech acts by type and by country, look at frequencies of keywords in antisemitic comments, and plot keyword networks.
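As a rough, hypothetical illustration of the kind of aggregation such a visualisation relies on, the sketch below counts how often pairs of antisemitic concept labels co-occur within the same annotated comment; the labels and data are invented and do not reflect the project's actual figures.

```python
"""Hypothetical sketch of concept co-occurrence counting behind such a
visualisation. Labels and data are invented for illustration."""
from collections import Counter
from itertools import combinations

annotated_comments = [
    {"country": "UK", "concepts": ["SOLE GUILT", "TERRORIST STATE"]},
    {"country": "FR", "concepts": ["CELEBRATION"]},
    {"country": "UK", "concepts": ["SOLE GUILT", "CONSPIRACY THEORY", "TERRORIST STATE"]},
]

pair_counts = Counter()
for comment in annotated_comments:
    # Count each unordered pair of concepts appearing together in one comment.
    for pair in combinations(sorted(set(comment["concepts"])), 2):
        pair_counts[pair] += 1

for (first, second), count in pair_counts.most_common():
    print(f"{first} + {second}: {count}")
```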
Topics: Antisemitism, Antisemitism: Discourse, Antisemitism: Monitoring, Internet, Social Media, Main Topic: Antisemitism, War, Terrorism, Attitudes to Israel, Israeli-Palestinian Conflict, Boycott Divestment and Sanctions (BDS)
Abstract: Key findings
• Since 7 October, Decoding Antisemitism has analysed more than 11,000 comments posted on YouTube and Facebook in response to mainstream media reports of the Hamas terrorist attacks in Israel.
• Our analysis reveals a significant jump in the number of antisemitic comments, even compared with other violent incidents in the Middle East.
• CELEBRATION, SUPPORT FOR and JUSTIFICATION OF THE HAMAS TERROR ATTACKS make up the largest proportion of antisemitic comments – ranging from 19% in German Facebook comment sections to 53% and 54.7% in French Facebook and UK YouTube comment sections, respectively – in contrast to previous studies, where direct affirmation of violence was negligible.
• The number of antisemitic comments CELEBRATING THE ATROCITIES rises in response to media reports of attacks on Israelis/Jews themselves, compared with reports on the conflict more generally.
• Beyond affirmation of the Hamas attacks, other frequently expressed antisemitic concepts across the corpus included DENIALS OF ISRAEL’S RIGHT TO EXIST, attributing SOLE GUILT to Israel for the entire history of the conflict, describing Israel as a TERRORIST STATE, CONSPIRACY THEORIES about Jewish POWER, and ideas of inherent Israeli EVIL.
• As with the project’s past research, this analysis reveals a diversity of antisemitic concepts and communicative strategies. The findings reaffirm that antisemitism appears as a multifaceted mosaic; it is not possible to address every element here, so only the most prominent tendencies are brought into focus.
Abstract: This article introduces the pilot project “Decoding Antisemitism: An AI-driven Study on Hate Speech and Imagery Online.” The aim of the project is to analyse the frequency, content and linguistic structure of online antisemitism, with the eventual aim of developing AI machine learning that is capable of recognizing explicit and implicit forms of antisemitic hate speech. The initial focus is on comments found on the websites and social media platforms of major media outlets in the United Kingdom, Germany, and France. The article outlines the project’s multi-step methodological design, which seeks to capture the complexity, diversity and continual development of antisemitism online. The first step is qualitative content analysis. Rather than relying on surveys, here a pre-existing “real-world” data set (namely, threads of online comments responding to media stories judged to be potential triggers for antisemitic speech) is collected and analysed for antisemitic content and linguistic structure by expert coders. The second step is supervised machine learning. Here, models are trained to mimic the decisions of human coders and learn how antisemitic stereotypes are currently reproduced in different web milieus, including implicit forms. The third step is large-scale quantitative analyses in which frequencies and combinations of words and phrases are measured, allowing the exploration of trends from millions of pieces of data.
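As a simplified, hypothetical stand-in for the second step, the sketch below trains a TF-IDF and logistic-regression baseline on coder-labelled comments. The project itself uses more sophisticated deep learning models; this baseline is shown only to make the idea of learning from expert annotations concrete, and all data here are invented.

```python
"""Simplified stand-in for supervised learning on coder-labelled comments.
Not the project's actual models; all data are invented."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for the annotated corpus: comment texts plus expert-coder
# labels (1 = coded as antisemitic, 0 = not).
comments = [
    "first example comment",
    "second example comment",
    "third example comment",
    "fourth example comment",
]
labels = [1, 0, 1, 0]

# Train a classifier that tries to reproduce the coders' decisions.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Apply it to new, unlabelled web comments; the third step would then
# aggregate such predictions at scale.
print(model.predict(["a new, unlabelled web comment"]))
```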