Search results

Your search found 10 items
Date: 2024
Abstract: The proliferation of hateful and violent speech in online media underscores the need for technological support to combat such discourse, create safer and more inclusive online environments, support content moderation and study political-discourse dynamics online. Automated detection of antisemitic content has been little explored compared to other forms of hate speech. This chapter examines the automated detection of antisemitic speech in online and social media using a corpus of online comments sourced from various online and social media platforms. The corpus spans a three-year period and encompasses diverse discourse events that were deemed likely to provoke antisemitic reactions. We adopt two approaches. First, we explore the efficacy of Perspective API, a popular content-moderation tool that rates texts in terms of, e.g., toxicity or identity-related attacks, in scoring antisemitic content as toxic. We find that the tool rates a high proportion of antisemitic texts with very low toxicity scores, indicating a potential blind spot for such content. Additionally, Perspective API demonstrates a keyword bias towards words related to Jewish identities, which could result in texts being falsely flagged and removed from platforms. Second, we fine-tune deep learning models to detect antisemitic texts. We show that OpenAI’s GPT-3.5 can be fine-tuned to effectively detect antisemitic speech in our corpus and beyond, with F1 scores above 0.7. We discuss current achievements in this area and point out directions for future work, such as the utilisation of prompt-based models.
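As an illustration of the first approach, the Perspective API workflow the abstract describes can be sketched as follows. The endpoint and payload shape follow the public `comments:analyze` API; `API_KEY`, the attribute choices and the mocked response are illustrative placeholders, not the chapter's actual experimental setup.

```python
# Minimal sketch of scoring a comment with Perspective API.
# API_KEY is a placeholder; a real key is required to call the endpoint.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=API_KEY")

def build_request(text: str) -> dict:
    """Build the JSON body requesting TOXICITY and IDENTITY_ATTACK scores."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}, "IDENTITY_ATTACK": {}},
    }

def extract_score(response: dict, attribute: str = "TOXICITY") -> float:
    """Pull the summary probability score (0..1) out of an API response."""
    return response["attributeScores"][attribute]["summaryScore"]["value"]

# A mocked response in the documented shape, to show the parsing step
# without a network call; the 0.12 value is invented for illustration.
mock_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.12, "type": "PROBABILITY"}}
    }
}
print(extract_score(mock_response))  # 0.12
```

A study like the one described would send each corpus comment through such a request and then compare the returned toxicity scores against human antisemitism annotations.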
Author(s): Vincent, Chloé
Date: 2024
Abstract: Antisemitism often takes implicit forms on social media, which makes it difficult to detect. In many cases, context is essential to recognise and understand the antisemitic meaning of an utterance (Becker et al. 2021, Becker and Troschke 2023, Jikeli et al. 2022a). Previous quantitative work on antisemitism online has focused on independent comments obtained through keyword search (e.g. Jikeli et al. 2019, Jikeli et al. 2022b), ignoring the discussions in which they occurred. Moreover, on social media, discussions are rarely linear: web users can comment on the original post and start a conversation, or reply to earlier web user comments. This chapter proposes to consider the structure of the comment trees constructed in the online discussion, instead of single comments individually, in an attempt to include context in the study of antisemitism online. This analysis is based on a corpus of 25,412 trees, consisting of 76,075 Facebook comments. The corpus is built from web comments reacting to posts published by mainstream news outlets in three countries: France, Germany, and the UK. The posts are organised into 16 discourse events, which have a high potential for triggering antisemitic comments. The analysis of the data helps to verify whether (1) antisemitic comments come together (are grouped under the same trees), (2) the structure of trees (lengths, number of branches) is significant in the emergence of antisemitism, and (3) variations can be found as a function of the countries and the discourse events. This study presents an original way to look at social media data, which has potential for helping identify and moderate antisemitism online. It can specifically advance research in machine learning by making it possible to examine larger segments of text, which is essential for reliable results in artificial intelligence methodology. Finally, it enriches our understanding of social interactions online in general, and of hate speech online in particular.
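The unit of analysis described here, comment trees with a measurable length and number of branches, can be sketched in a few lines. The `(comment_id, parent_id)` representation and the field names are assumptions for illustration, not the chapter's actual data schema.

```python
# Hypothetical sketch: rebuilding comment trees from (comment_id, parent_id)
# pairs and measuring tree size, depth and branching.
from collections import defaultdict

def build_trees(comments):
    """comments: list of (comment_id, parent_id); parent_id is None for
    top-level comments. Returns a children mapping and the root ids
    (one root per tree)."""
    children = defaultdict(list)
    roots = []
    for cid, parent in comments:
        if parent is None:
            roots.append(cid)
        else:
            children[parent].append(cid)
    return children, roots

def tree_stats(children, root):
    """Return (number of comments, depth, number of branch points)."""
    size, depth, branches = 0, 0, 0
    stack = [(root, 1)]
    while stack:
        node, d = stack.pop()
        size += 1
        depth = max(depth, d)
        kids = children.get(node, [])
        if len(kids) > 1:
            branches += 1  # a comment with several replies splits the tree
        stack.extend((k, d + 1) for k in kids)
    return size, depth, branches

# One root comment, two direct replies, and one reply-to-a-reply:
ch, roots = build_trees([("c1", None), ("c2", "c1"),
                         ("c3", "c1"), ("c4", "c2")])
print(tree_stats(ch, roots[0]))  # (4, 3, 1)
```

Statistics of this kind, computed per tree, are what would let a study relate tree shape to the emergence of antisemitic comments across countries and discourse events.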
Author(s): Ascone, Laura
Date: 2024
Author(s): Bolton, Matthew
Date: 2024
Abstract: Accusations that Israel has committed, or is in the process of committing, genocide against the Palestinian population of the Middle East are a familiar presence within anti-Israel and anti-Zionist discourse. In the wake of the Hamas attacks of 7 October 2023 and the subsequent Israeli military invasion of Gaza, claims of an Israeli genocide reached new heights, culminating in Israel being accused of genocide by South Africa at the International Court of Justice. Such claims can be made directly or indirectly, via attempts to draw an equivalence between Auschwitz or the Warsaw Ghetto and the current situation in the Palestinian territories. This chapter examines the use of the concept of genocide in social media discussions responding to UK news reports about Israel in the years prior to the 2023 Israel-Hamas war, thereby setting out the pre-existing conditions for its rise to prominence in the response to that war. It provides a historical account of the development of the concept of genocide, showing its interrelation with antisemitism, the Holocaust and the State of Israel. It then shows how accusations of genocide started being made against Israel in the decades following the Holocaust, and argues that such use is often accompanied by analogies between Israel and Nazi Germany and forms of Holocaust distortion. The chapter then qualitatively analyses comments referencing a supposed Israeli genocide posted on the Facebook pages of major British newspapers regarding three Israel-related stories: the May 2021 escalation phase of the Arab-Israeli conflict; the July 2021 announcement that the US ice cream company Ben & Jerry’s would be boycotting Jewish settlements in the West Bank; and the rapid roll-out of the Covid-19 vaccine in Israel from December 2020 to January 2021.
Author(s): Chapelan, Alexis
Date: 2024
Abstract: Social media platforms and the interactive web have had a significant impact on political socialisation, creating new pathways of community-building that shifted the focus from real-life, localised networks (such as unions or neighbourhood associations) to vast, diffuse and globalised communities (Finin et al. 2008, Rainie and Wellman 2012, Olson 2014, Miller 2017). Celebrities or influencers are often focal nodes for the spread of information and opinions across these new types of networks in the digital space (see Hutchins and Tindall 2021). Unfortunately, this means that celebrities’ endorsement of extremist discourse or narratives can potently drive the dissemination and normalisation of hate ideologies.

This paper sets out to analyse the reaction of French social media audiences to antisemitism controversies involving pop culture celebrities. I will focus on two such episodes, one with a ‘national’ celebrity at its centre and the other a ‘global’ celebrity: the social media ban of the French-Cameroonian comedian Dieudonné M’bala M’bala in June–July 2020 and the controversy following US rapper Kanye West’s spate of antisemitic statements in October–November 2022. The empirical corpus comprises over 4,000 user comments on Facebook, YouTube and Twitter (now X). My methodological approach is two-pronged: a preliminary mapping of the text through content analysis is followed by a qualitative Critical Discourse Analysis that examines linguistic strategies and discursive constructions employed by social media users to legitimise antisemitic worldviews. I lay particular emphasis on the manner in which memes, dog-whistling or coded language (such as allusions or inside jokes popular within certain communities or fandoms) are used not only to convey antisemitic meaning covertly but also to build a specific form of counter-cultural solidarity. This solidarity expresses itself in the form of “deviant communities” (see Proust et al. 2020) based on the performative and deliberate transgression of societal taboos and norms.
Author(s): Placzynta, Karolina
Date: 2024
Abstract: Despite the benefits of the intersectional approach to antisemitism studies, it seems to have been given little attention so far. This chapter compares the online reactions to two UK news stories, both centred around the common theme of cultural boycott of Israel in support of the BDS movement, both with a well-known female figure at the centre of media coverage, only one of whom identifies as Jewish. In the case of British television presenter Rachel Riley, a person is attacked for being female as well as Jewish, with misogyny compounding the antisemitic commentary. In the case of the Irish writer Sally Rooney, misogynistic discourse is used to strengthen the message countering antisemitism. The contrastive analysis of the two datasets, with references to similar analyses of media stories centred around well-known men, illuminates the relationships between the two forms of hate, revealing that—even where the antisemitic attitudes overlap—misogynistic insults and disempowering or undermining language are being weaponised on both sides of the debate, with additional characterisation of Riley as a “grifter” and Rooney as “naive”.

More research comparing discourses around Jewish and non-Jewish women is needed to ascertain whether this pattern is consistent; meanwhile, the many analogies in the abuse suffered by both groups can perhaps serve a useful purpose: shared struggles can foster the understanding needed to notice the particularised prejudice. By including more than one hate ideology in the research design, intersectionality offers exciting new approaches to studies of antisemitism and, more broadly, of hate speech or discrimination.
Author(s): Tiffany, Austin
Date: 2018