Search results

Your search found 85 items
Date: 2024
Author(s): Peretz, Dekel
Date: 2024
Date: 2024
Abstract: The proliferation of hateful and violent speech in online media underscores the need for technological support to combat such discourse, create safer and more inclusive online environments, support content moderation and study political-discourse dynamics online. Automated detection of antisemitic content has been little explored compared to other forms of hate speech. This chapter examines the automated detection of antisemitic speech in online and social media using a corpus of online comments sourced from various online and social media platforms. The corpus spans a three-year period and encompasses diverse discourse events that were deemed likely to provoke antisemitic reactions. We adopt two approaches. First, we explore the efficacy of Perspective API, a popular content-moderation tool that rates texts in terms of, e.g., toxicity or identity-related attacks, in scoring antisemitic content as toxic. We find that the tool rates a high proportion of antisemitic texts with very low toxicity scores, indicating a potential blind spot for such content. Additionally, Perspective API demonstrates a keyword bias towards words related to Jewish identities, which could result in texts being falsely flagged and removed from platforms. Second, we fine-tune deep learning models to detect antisemitic texts. We show that OpenAI’s GPT-3.5 can be fine-tuned to effectively detect antisemitic speech in our corpus and beyond, with F1 scores above 0.7. We discuss current achievements in this area and point out directions for future work, such as the utilisation of prompt-based models.
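For illustration only, the following is a minimal sketch (not the chapter’s code) of the first approach: sending comments to Perspective API and flagging labelled antisemitic items that receive low TOXICITY scores. The endpoint and request shape follow Perspective API’s public documentation; the API key, the toy corpus and the 0.3 threshold are placeholders, not values from the chapter.

```python
# Minimal sketch, assuming a labelled corpus of (text, is_antisemitic) pairs.
# The API key, corpus and threshold below are placeholders.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def perspective_scores(text: str, lang: str = "en") -> dict:
    """Return Perspective API summary scores (TOXICITY, IDENTITY_ATTACK) for a text."""
    payload = {
        "comment": {"text": text},
        "languages": [lang],
        "requestedAttributes": {"TOXICITY": {}, "IDENTITY_ATTACK": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    attrs = resp.json()["attributeScores"]
    return {name: attrs[name]["summaryScore"]["value"] for name in attrs}

# Hypothetical labelled corpus: (comment text, annotated as antisemitic?)
corpus = [("example comment text", True)]

# Count antisemitic comments that Perspective rates as barely toxic,
# i.e. the potential blind spot described in the abstract.
LOW_TOXICITY = 0.3  # illustrative threshold
missed = [
    text for text, is_antisemitic in corpus
    if is_antisemitic and perspective_scores(text)["TOXICITY"] < LOW_TOXICITY
]
print(f"{len(missed)} labelled antisemitic comments scored below {LOW_TOXICITY}")
```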
Author(s): Vincent, Chloé
Date: 2024
Abstract: Antisemitism often takes implicit forms on social media, therefore making it difficult to detect. In many cases, context is essential to recognise and understand the antisemitic meaning of an utterance (Becker et al. 2021, Becker and Troschke 2023, Jikeli et al. 2022a). Previous quantitative work on antisemitism online has focused on independent comments obtained through keyword search (e.g. Jikeli et al. 2019, Jikeli et al. 2022b), ignoring the discussions in which they occurred. Moreover, on social media, discussions are rarely linear. Web users have the possibility to comment on the original post and start a conversation or to reply to earlier web user comments. This chapter proposes to consider the structure of the comment trees constructed in the online discussion, instead of single comments individually, in an attempt to include context in the study of antisemitism online. This analysis is based on a corpus of 25,412 trees, consisting of 76,075 Facebook comments. The corpus is built from web comments reacting to posts published by mainstream news outlets in three countries: France, Germany, and the UK. The posts are organised into 16 discourse events, which have a high potential for triggering antisemitic comments. The analysis of the data helps verify whether (1) antisemitic comments come together (are grouped under the same trees), (2) the structure of trees (length, number of branches) is significant in the emergence of antisemitism, (3) variations can be found as a function of the countries and the discourse events. This study presents an original way to look at social media data, which has potential for helping identify and moderate antisemitism online. It specifically can advance research in machine learning by allowing researchers to look at larger segments of text, which is essential for reliable results in artificial intelligence methodology. Finally, it enriches our understanding of social interactions online in general, and hate speech online in particular.
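As an illustration of the tree-based view described above, here is a minimal sketch of how comment trees might be rebuilt from reply relations and how their size, depth and branching could be measured. The id/parent_id fields and the toy comments are assumptions made for the sketch, not the chapter’s data format.

```python
# Minimal sketch: rebuild comment trees from reply relations and measure
# the structural features discussed above (size, depth, branching points).
from collections import defaultdict

# Hypothetical flat export: each comment records which comment it replies to
# (parent_id is None for top-level comments under a post).
comments = [
    {"id": "c1", "parent_id": None},
    {"id": "c2", "parent_id": "c1"},
    {"id": "c3", "parent_id": "c1"},
    {"id": "c4", "parent_id": "c2"},
]

children = defaultdict(list)
for c in comments:
    children[c["parent_id"]].append(c["id"])

def depth(node: str) -> int:
    """Length of the longest reply chain starting at this comment."""
    if not children[node]:
        return 1
    return 1 + max(depth(child) for child in children[node])

def tree_nodes(node: str) -> list:
    """All comment ids in the tree rooted at this comment."""
    nodes = [node]
    for child in children[node]:
        nodes.extend(tree_nodes(child))
    return nodes

for root in list(children[None]):  # each top-level comment starts one tree
    nodes = tree_nodes(root)
    branching = sum(1 for n in nodes if len(children[n]) > 1)
    print(f"tree {root}: {len(nodes)} comments, depth {depth(root)}, "
          f"{branching} branching points")
```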
Author(s): Ascone, Laura
Date: 2024
Author(s): Bolton, Matthew
Date: 2024
Abstract: Accusations that Israel has committed, or is in the process of committing, genocide against the Palestinian population of the Middle East are a familiar presence within anti-Israel and anti-Zionist discourse. In the wake of the Hamas attacks of 7 October 2023 and the subsequent Israeli military invasion of Gaza, claims of an Israeli genocide reached new heights, culminating in Israel being accused of genocide by South Africa at the International Court of Justice. Such claims can be made directly or indirectly, via attempts to draw an equivalence between Auschwitz or the Warsaw Ghetto and the current situation in the Palestinian territories. This chapter examines the use of the concept of genocide in social media discussions responding to UK news reports about Israel in the years prior to the 2023 Israel-Hamas war, thereby setting out the pre-existing conditions for its rise to prominence in the response to that war. It provides a historical account of the development of the concept of genocide, showing its interrelation with antisemitism, the Holocaust and the State of Israel. It then shows how accusations of genocide started being made against Israel in the decades following the Holocaust, and argues that such use is often accompanied by analogies between Israel and Nazi Germany and forms of Holocaust distortion. The chapter then qualitatively analyses comments referencing a supposed Israeli genocide posted on the Facebook pages of major British newspapers regarding three Israel-related stories: the May 2021 escalation phase of the Arab-Israeli conflict; the July 2021 announcement that the US ice cream company Ben & Jerry’s would be boycotting Jewish settlements in the West Bank; and the rapid roll-out of the Covid-19 vaccine in Israel from December 2020 to January 2021.
Author(s): Chapelan, Alexis
Date: 2024
Abstract: Social media platforms and the interactive web have had a significant impact on political socialisation, creating new pathways of community-building that shifted the focus from real-life, localised networks (such as unions or neighbourhood associations) to vast, diffuse and globalised communities (Finin et al. 2008, Rainie and Wellman 2012, Olson 2014, Miller 2017). Celebrities or influencers are often focal nodes for the spread of information and opinions across these new types of networks in the digital space (see Hutchins and Tindall 2021). Unfortunately, this means that celebrities’ endorsement of extremist discourse or narratives can potently drive the dissemination and normalisation of hate ideologies.

This paper sets out to analyse the reaction of French social media audiences to antisemitism controversies involving pop culture celebrities. I will focus on two such episodes, one with a ‘national’ celebrity at its centre and the other a ‘global’ celebrity: the social media ban of the French-Cameroonian comedian Dieudonné M’bala M’bala in June–July 2020 and the controversy following US rapper Kanye West’s spate of antisemitic statements in October–November 2022. The empirical corpus comprises over 4,000 user comments on Facebook, YouTube and Twitter (now X). My methodological approach is two-pronged: a preliminary mapping of the text through content analysis is followed by a qualitative Critical Discourse Analysis that examines linguistic strategies and discursive constructions employed by social media users to legitimise antisemitic worldviews. We lay particular emphasis on the manner in which memes, dog-whistling or coded language (such as allusions or inside jokes popular within certain communities or fandoms) are used not only to convey antisemitic meaning covertly but also to build a specific form of counter-cultural solidarity. This solidarity expresses itself in the form of “deviant communities” (see Proust et al. 2020) based on the performative and deliberate transgression of societal taboos and norms.
Author(s): Placzynta, Karolina
Date: 2024
Abstract: Despite the benefits of the intersectional approach to antisemitism studies, it seems to have been given little attention so far. This chapter compares the online reactions to two UK news stories, both centred around the common theme of cultural boycott of Israel in support of the BDS movement, both with a well-known female figure at the centre of media coverage, only one of whom identifies as Jewish. In the case of British television presenter Rachel Riley, a person is attacked for being female as well as Jewish, with misogyny compounding the antisemitic commentary. In the case of the Irish writer Sally Rooney, misogynistic discourse is used to strengthen the message countering antisemitism. The contrastive analysis of the two datasets, with references to similar analyses of media stories centred around well-known men, illuminates the relationships between the two forms of hate, revealing that—even where the antisemitic attitudes overlap—misogynistic insults and disempowering or undermining language are being weaponised on both sides of the debate, with additional characterisation of Riley as a “grifter” and Rooney as “naive”.

More research comparing discourses around Jewish and non-Jewish women is needed to ascertain whether this pattern is consistent; meanwhile, the many analogies in the abuse suffered by both groups can perhaps serve a useful purpose: shared struggles can foster the understanding needed to notice the particularised prejudice. By including more than one hate ideology in the research design, intersectionality offers exciting new approaches to studies of antisemitism and, more broadly, of hate speech or discrimination.
Date: 2022
Author(s): Cambruzzi, Murilo
Date: 2024
Abstract: The EU-funded RELATION – RESEARCH, KNOWLEDGE & EDUCATION AGAINST ANTISEMITISM project (https://www.relationproject.eu) aims to define an innovative strategy that starts from a better knowledge of Jewish history/traditions as part of the common history/traditions, and puts in place a set of educational activities in Belgium, Italy, Romania and Spain, as well as online actions, in order to tackle the phenomenon.

The project activities include monitoring the phenomenon of antisemitism online in the four project countries (Belgium, Italy, Romania and Spain) through cross-country web monitoring of illegal antisemitic hate speech.

The shadow monitoring exercises aim at:
● Analyzing the removal rate of illegal antisemitic hate speech available on diverse Social Media Platforms signatory to the Code of Conduct on countering illegal hate speech online, namely Facebook, Twitter, YouTube and TikTok.
● Analyzing the types of content and narratives collected by the research team.

Partner organizations focused on their country’s language: French for Belgium, Italian, Romanian and Spanish. Four organizations from four different countries (Belgium, Italy, Spain and Romania) took part in the monitoring exercise: Comunitat Jueva Bet Shalom De Catalunya (Bet Shalom, Spain), CEJI - A Jewish Contribution to an Inclusive Europe (Belgium), Fondazione Centro Di Documentazione Ebraica Contemporanea (CDEC, Italy), Intercultural Institute Timișoara (IIT, Romania).

The monitoring exercise follows the definition of Illegal hate speech as defined “by the Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law and national laws transposing it, means all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”

The content was collected and reported to social media platforms in three rounds between October 2022 and October 2023. Content was checked for removal about a week after each round, to give the social media platforms enough time to analyze and remove the content. The monitoring exercises devote particular attention to the intersection of antisemitism and sexism.
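A minimal sketch of how a per-platform removal rate could be tabulated from the reported items is shown below; the record fields and example values are hypothetical placeholders, not the project’s data or code.

```python
# Minimal sketch: removal rate per platform from a list of reported items.
# Fields and values are hypothetical placeholders.
from collections import Counter

# Each reported item: the platform it was reported to and whether it had been
# removed when checked about a week later.
reports = [
    {"platform": "Facebook", "removed": True},
    {"platform": "Twitter", "removed": False},
    {"platform": "YouTube", "removed": True},
    {"platform": "TikTok", "removed": False},
]

reported = Counter(r["platform"] for r in reports)
removed = Counter(r["platform"] for r in reports if r["removed"])

for platform, total in reported.items():
    rate = removed[platform] / total
    print(f"{platform}: {removed[platform]}/{total} removed ({rate:.0%})")
```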
Author(s): Cambruzzi, Murilo
Date: 2023
Abstract: The EU-funded RELATION – RESEARCH, KNOWLEDGE & EDUCATION AGAINST ANTISEMITISM project (https://www.relationproject.eu) aims to define an innovative strategy that starts from a better knowledge of Jewish history/traditions as part of the common history/traditions, and puts in place a set of educational activities in Belgium, Italy, Romania and Spain, as well as online actions, in order to tackle the phenomenon.

The project activities include monitoring the phenomenon of antisemitism online in the four project countries (Belgium, Italy, Romania and Spain) through cross-country web monitoring of illegal antisemitic hate speech.

The monitoring exercises aim at:
● Analyzing the removal rate of illegal antisemitic hate speech available on diverse Social Media Platforms, namely Facebook, Twitter, YouTube and TikTok.
● Analyzing the types of content and narratives collected by the research team.

Partner organizations focused on their country’s language: French in Belgium, Italian, Romanian and Spanish.

Four organizations from four different countries (Belgium, Italy, Spain and Romania) took part in the monitoring exercise: Comunitat Jueva Bet Shalom De Catalunya (Bet Shalom, Spain), CEJI - A Jewish Contribution to an Inclusive Europe (Belgium), Fondazione Centro Di Documentazione Ebraica Contemporanea (CDEC, Italy), Intercultural Institute Timișoara (IIT, Romania).

The monitoring exercise follows the definition of Illegal hate speech as defined “by the Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law and national laws transposing it, means all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”

The content was collected and reported to social media platforms between April 21st and 22nd, 2023. Content was checked for removal on April 26th, to give the social media platforms enough time to analyze and remove the content. The monitoring exercises devote particular attention to the intersection of antisemitism and sexism.
Date: 2022
Abstract: The EU-funded RELATION – RESEARCH, KNOWLEDGE & EDUCATION AGAINST ANTISEMITISM project (https://www.relationproject.eu) aims to define an innovative strategy that starts from a better knowledge of Jewish history/traditions as part of the common history/traditions, and puts in place a set of educational activities in Belgium, Italy, Romania and Spain, as well as online actions, in order to tackle the phenomenon.

The project activities include monitoring the phenomenon of antisemitism online in the four project countries (Belgium, Italy, Romania and Spain) through cross-country web monitoring of illegal antisemitic hate speech.

The monitoring exercises aim at:
• Analysing the removal rate of illegal antisemitic hate speech available on diverse Social Media Platforms, namely Facebook, Twitter, YouTube and TikTok.
• Analysing the types of content and narratives collected by the research team.

Partner organisations focused on their country’s language: French in Belgium, Italian, Romanian and Spanish.

Four organisations from four different countries (Belgium, Italy, Spain and Romania) took part in the monitoring exercise: Comunitat Jueva Bet Shalom De Catalunya (Bet Shalom, Spain), CEJI - A Jewish Contribution to an Inclusive Europe (Belgium), Fondazione Centro Di Documentazione Ebraica Contemporanea (CDEC, Italy), Intercultural Institute Timișoara (IIT, Romania).

The monitoring exercise follows the definition of Illegal hate speech as defined “by the Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law and national laws transposing it, means all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”

The content was collected and reported to social media platforms between October 6th and 7th, 2022. Content was checked for removal on October 12th, to give the social media platforms enough time to analyse and remove the content. The monitoring exercises devote particular attention to the intersection of antisemitism and sexism.
Date: 2020
Abstract: The spread of hate speech and anti-Semitic content has become endemic to social media. Faced with a torrent of violent and offensive content, nations in Europe have begun to take measures to remove such content from social media platforms such as Facebook and Twitter. However, these measures have failed to curtail the spread, and possible impact, of anti-Semitic content. Notably, violence breeds violence, and calls for action against Jewish minorities soon lead to calls for violence against other ethnic or racial minorities. Online anti-Semitism thus drives social tensions and harms social cohesion. Yet the spread of online anti-Semitism also has international ramifications, as conspiracy theories and disinformation campaigns now often focus on WWII and the Holocaust.

On Nov 29, 2019, the Oxford Digital Diplomacy Research Group (DigDiploROx) held a one-day symposium at the European Commission in Brussels. The symposium brought together diplomats, EU officials, academics and civil society organizations in order to search for new ways to combat the rise in online anti-Semitism. This policy brief offers an overview of the day’s discussions, the challenges identified and a set of solutions that may aid nations looking to stem the flow of anti-Semitic content online. Notably, these solutions, or recommendations, are not limited to the realm of anti-Semitism and can help combat all forms of discrimination, hate and bigotry online.

Chief among these recommendations is the need for a multi-stakeholder solution that brings together governments, multilateral organisations, academic institutions, tech companies and NGOs. For the EU itself, there is a need to increase collaboration between units dedicated to fighting online crime, terrorism and anti-Semitism. This would enable the EU to share skills, resources and working procedures. Moreover, the EU must adopt technological solutions, such as automation, to identify, flag and remove hateful content as quickly as possible. The EU could also redefine its main activities: rather than combat incitement to violence online, it may attempt to tackle incitement to hate, given that hate online metastasises into calls for violence. Finally, the EU should deepen its awareness of the potential harm of search engines. These offer access to content that has already been removed by social media companies. Moreover, search engines serve as a gateway to hateful content. The EU should thus deepen its collaborations with companies such as Google and Yahoo, and not just Facebook or Twitter. It should be noted that social media companies opted not to take part in the symposium, demonstrating that the solution to hate speech and rising anti-Semitism may lie in legislation and not just in collaboration.

The rest of this brief consists of five parts. The first offers an up-to-date analysis of the prevalence of anti-Semitic content online. The second discusses the national and international implications of this prevalence. The third part stresses the need for a multi-stakeholder solution, while the fourth offers an overview of the presentations made at the symposium. The final section includes a set of policy recommendations that should be adopted by the EU and its member states.
Abstract: Developments in Artificial Intelligence (AI) are prompting governments across the globe, and experts from across multiple sectors, to future-proof society. In the UK, Ministers have published a discussion paper on the capabilities, opportunities and risks presented by frontier artificial intelligence. The document outlines that whilst AI has many benefits, it can act as a simple, accessible and cheap tool for the dissemination of disinformation, and could be misused by terrorists to enhance their capabilities. The document warns that AI technology will become so advanced and realistic that it will be nearly impossible to distinguish deep fakes and other fake content from real content. AI could also be used to incite violence and reduce people’s trust in true information.

It is clear that mitigating risks from AI will become the next great challenge for governments, and for society. Of all the possible risks, the Antisemitism Policy Trust is focused on the development of systems that facilitate the promotion, amplification and sophistication of discriminatory and racist content, that is, material that can incite hatred of and harm to Jewish people.

This briefing explores how AI can be used to spread antisemitism. It also shows that AI can offer benefits in combating antisemitism online and discusses ways to mitigate the risks of AI in relation to anti-Jewish racism. We set out our recommendations for action, including the development of system risk assessments, transparency and penalties for any failure to act.
Date: 2022
Date: 2023
Date: 2024
Editor(s): Rose, Hannah
Date: 2024
Editor(s): Ermida, Isabel
Date: 2023
Abstract: This chapter introduces the notion of ‘enabling concepts’: concepts which may or may not themselves constitute a mode of hate speech, but which through their broad social acceptability facilitate or legitimate the articulation of concepts which can be more directly classed as hate speech. We argue that each distinct hate ideology will contain its own, partly overlapping set of ‘enabling concepts.’ In this chapter, we will focus on the enabling role of references to apartheid for the constitution of antisemitism in British online discourse around Israel. This argument does not rest on agreement as to whether the ‘apartheid analogy’—comparisons between contemporary Israel and the former Apartheid regime in South Africa—itself constitutes a form of antisemitism. The chapter draws on qualitative analysis of more than 10,000 user comments posted on social media profiles of mainstream media in the UK, undertaken by the Decoding Antisemitism project in the wake of the May 2021 escalation phase of the Arab-Israeli conflict. We will show how web commenters frequently use the apartheid analogy to trigger more extreme antisemitic stereotypes, including age-old tropes, intensifying and distorting analogies (such as Nazi comparisons) or calls for Israel’s elimination. The results will be presented in detail based on a pragmalinguistic approach taking into account the immediate context of the comment thread and broader world knowledge. Both of these aspects are relevant preconditions for examining all forms of antisemitic hate speech that can remain undetected when conducting solely statistical analysis. Based on this large dataset, we suggest that—under the cover of its widespread social acceptability—the apartheid analogy thus facilitates the articulation and legitimation of extreme antisemitic concepts that would, without this prior legitimation, be more likely to be rejected or countered.
Date: 2024
Abstract: Over the past 3.5 years, the Decoding Antisemitism research project has been analysing antisemitism on the internet in terms of content, structure, and frequency. Over this time, there has been no shortage of flashpoints which have generated antisemitic responses. Yet the online response to the Hamas attacks of 7 October and the subsequent Israeli operations in Gaza has surpassed anything the project has witnessed before. In no preceding escalation phase of the Arab-Israeli conflict has the predominant antisemitic reaction been one of open jubilation and joy over the deaths of Israeli Jews. As demonstrated in the sixth and final Discourse Report, this explicit approval of the Hamas attacks was the primary response from web users. The response to 7 October therefore represents a turning point in antisemitic online discourse, and its repercussions will be felt long into the future.

The report contains analysis of the various stages of online reactions to events in the Middle East, from the immediate aftermath to the Israeli retaliations and subsequent accusations of genocide against Israel. As well as examining online reactions in the project’s core focus—the United Kingdom, France, and Germany—this report also, for the first time, extends its view to analyse Israel-related web discourses in six further countries, including those in Southern and Eastern Europe as well as in North Africa. Alongside reactions to the escalation phase, the report also examines online responses to billionaire Elon Musk’s explosive comments about Jewish individuals and institutions.

Additionally, the report provides a retrospective overview of the project’s development over the past 3.5 years, tracking its successes and challenges, particularly regarding the conditions for successful interdisciplinary work and the ability of machine learning to capture the versatility and complexity of authentic web communication.

To mark the publication of the report, we are also sharing our new, interactive data visualisations tool, which lets you examine any two discourse events analysed by our research team between 2021 and 2023. You can compare the frequencies and co-occurrences of antisemitic concepts and speech acts by type and by country, look at frequencies of keywords in antisemitic comments, and plot keyword networks.
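For illustration, a minimal sketch of the kind of tally that could underlie such a co-occurrence comparison is shown below; the annotation format and the concept labels used as examples are assumptions, not the project’s pipeline or data.

```python
# Minimal sketch: count how often pairs of annotated antisemitic concepts
# co-occur within the same comment. The annotation format is hypothetical.
from collections import Counter
from itertools import combinations

annotated_comments = [
    {"concepts": ["NAZI COMPARISON", "SOLE GUILT"]},
    {"concepts": ["SOLE GUILT", "TERRORIST STATE", "NAZI COMPARISON"]},
]

co_occurrence = Counter()
for comment in annotated_comments:
    for a, b in combinations(sorted(set(comment["concepts"])), 2):
        co_occurrence[(a, b)] += 1

for (a, b), count in co_occurrence.most_common():
    print(f"{a} + {b}: {count}")
```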
Date: 2023
Author(s): Becker, Matthias J
Date: 2022
Date: 2022
Date: 2023
Date: 2023
Abstract: Key findings
• Since 7 October, Decoding Antisemitism has analysed more than 11,000 comments posted on YouTube and Facebook in response to mainstream media reports of the Hamas terrorist attacks in Israel.
• Our analysis reveals a significant jump in the number of antisemitic comments, even compared with other violent incidents in the Middle East.
• CELEBRATION, SUPPORT FOR and JUSTIFICATION OF THE HAMAS TERROR ATTACKS make up the largest proportion of antisemitic comments – ranging between 19% in German Facebook comment sections and 53% and 54.7% in French Facebook and UK YouTube comment sections, respectively – in contrast to previous studies, where direct affirmation of violence was negligible.
• The number of antisemitic comments CELEBRATING THE ATROCITIES rises in response to media reports of attacks on Israelis/Jews themselves, compared with reports on the conflict more generally.
• Beyond affirmation of the Hamas attacks, other frequently expressed antisemitic concepts across the corpus included DENIALS OF ISRAEL’S RIGHT TO EXIST, attributing SOLE GUILT to Israel for the entire history of the conflict, describing Israel as a TERRORIST STATE, CONSPIRACY THEORIES about Jewish POWER, and ideas of inherent Israeli EVIL.
• As with the project’s past research, this analysis reveals a diversity of antisemitic concepts and communicative strategies. The findings reaffirm that antisemitism appears as a multifaceted mosaic, as a result of which it is not possible to address every element here. Only the most prominent tendencies are brought into focus.
Date: 2023
Date: 2022
Date: 2022