Abstract: The proliferation of hateful and violent speech in online media underscores the need for technological support to combat such discourse, create safer and more inclusive online environments, support content moderation and study political-discourse dynamics online. Automated detection of antisemitic content has been little explored compared to other forms of hate speech. This chapter examines the automated detection of antisemitic speech using a corpus of comments sourced from a range of online and social media platforms. The corpus spans a three-year period and encompasses diverse discourse events that were deemed likely to provoke antisemitic reactions. We adopt two approaches. First, we explore the efficacy of Perspective API, a popular content-moderation tool that rates texts in terms of, e.g., toxicity or identity-related attacks, in scoring antisemitic content as toxic. We find that the tool rates a high proportion of antisemitic texts with very low toxicity scores, indicating a potential blind spot for such content. Additionally, Perspective API demonstrates a keyword bias towards words related to Jewish identities, which could result in texts being falsely flagged and removed from platforms. Second, we fine-tune deep learning models to detect antisemitic texts. We show that OpenAI’s GPT-3.5 can be fine-tuned to effectively detect antisemitic speech in our corpus and beyond, with F1 scores above 0.7. We discuss current achievements in this area and point out directions for future work, such as the utilisation of prompt-based models.
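To illustrate the first approach, the sketch below shows how a single comment can be scored with Perspective API for its TOXICITY and IDENTITY_ATTACK attributes. It is a minimal illustration based on the publicly documented Google API client, not the chapter's own pipeline; the environment-variable name and the example comment are assumptions.

# Minimal sketch: score one comment with Perspective API (not the authors' pipeline).
# Assumes an API key is provided via the PERSPECTIVE_API_KEY environment variable.
import os
from googleapiclient import discovery

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=os.environ["PERSPECTIVE_API_KEY"],
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

request = {
    "comment": {"text": "Example comment to be scored."},
    "requestedAttributes": {"TOXICITY": {}, "IDENTITY_ATTACK": {}},
    "languages": ["en"],
}

response = client.comments().analyze(body=request).execute()

# Each requested attribute returns a summary score in [0, 1]; the analysis in the
# chapter examines how antisemitic texts are distributed across such scores.
for attribute, result in response["attributeScores"].items():
    print(attribute, result["summaryScore"]["value"])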
Abstract: Developments in Artificial Intelligence (AI) are prompting governments across the globe, and experts from across multiple sectors, to future-proof society. In the UK, Ministers have published a discussion paper on the capabilities, opportunities and risks presented by frontier artificial intelligence. The document outlines that whilst AI has many benefits, it can act as a simple, accessible and cheap tool for the dissemination of disinformation, and could be misused by terrorists to enhance their capabilities. The document warns that AI technology will become so advanced and realistic that it will be nearly impossible to distinguish deepfakes and other fake content from real content. AI could also be used to incite violence and reduce people’s trust in true information.
It is clear that mitigating risks from AI will become the next great challenge for governments, and for society.
Of all the possible risks, the Antisemitism Policy Trust is focused on the development of systems that facilitate
the promotion, amplification and sophistication of discriminatory and racist content, that is, material
that can incite hatred of, and harm to, Jewish people.
This briefing explores how AI can be used to spread antisemitism. It also shows that AI can offer benefits
in combating antisemitism online and discusses ways to mitigate the risks of AI in relation to anti-Jewish
racism. We set out our recommendations for action, including the development of system risk assessments,
transparency and penalties for any failure to act.
Abstract: Reflecting on the months since the recent October 7 attack, rarely has the theme of Holocaust Memorial Day 2024, ‘The Fragility of Freedom’, felt so poignant. Communities globally experienced the shattering of presumed security, and antisemitic incidents spiked in response.
Antisemitism rose across both mainstream and fringe social media platforms, and communities consequently reported a rise in insecurity and fear. CCOA constituent countries have recorded significant rises in antisemitic incidents, including an immediate 240% increase in Germany, a three-fold rise in France, and a marked increase in Italy.
The antisemitism landscape, including Holocaust denial and distortion, has shifted so drastically since October 7 that previous assumptions and understandings now demand re-examination. In the run-up to Holocaust Memorial Day 2024, this research compilation by members of the Coalition to Counter Online Antisemitism offers a vital contemporary examination of the current and emergent issues surrounding Holocaust denial and distortion online. As unique forms of antisemitism, denial and distortion are a tool of historical revisionism that specifically targets Jews, eroding Jewish experience and threatening democracy.
Across different geographies and knowledge fields, this compilation unites experts around the central and sustained proliferation of Holocaust denial and distortion on social media.