Hate Speech Dataset Catalogue

This page catalogues datasets annotated for hate speech, online abuse, and offensive language. They may be useful for e.g. training a natural language processing system to detect this language.

The list is maintained by Leon Derczynski, Bertie Vidgen, Hannah Rose Kirk, Pica Johansson, Yi-Ling Chung, Mads Guldborg Kjeldgaard Kongsbak, Laila Sprejer, and Philine Zeinert.

We provide a list of datasets and keywords. If you would like to contribute to our catalogue or add your dataset, please see the instructions for contributing.

If you use these resources, please cite (and read!) our paper: Directions in Abusive Language Training Data: Garbage In, Garbage Out. And if you would like to find other resources for researching online hate, visit The Alan Turing Institute's Online Hate Research Hub or read The Alan Turing Institute's Reading List on Online Hate and Abuse Research.

If you're looking for a good paper on online hate training datasets (beyond our paper, of course!) then have a look at 'Resources and benchmark corpora for hate speech detection: a systematic review' by Poletto et al. in Language Resources and Evaluation.

Please send contributions via GitHub pull request. You can do this by visiting the source code on GitHub and clicking the edit icon (a pencil, above the text, on the right) - more details below. There's a commented-out markdown template at the top of this file. Accompanying data statements are preferred for all corpora.

Datasets Table of Contents

List of datasets

Albanian

Detecting Abusive Albanian

Arabic

Let-Mi: An Arabic Levantine Twitter Dataset for Misogynistic Language

  • Link to publication: https://arxiv.org/abs/2103.10195
  • Link to data: https://drive.google.com/file/d/1mM2vnjsy7QfUmdVUpKqHRJjZyQobhTrW/view
  • Task description: Binary (misogyny/none) and Multi-class (none, discredit, derailing, dominance, stereotyping & objectification, threat of violence, sexual harassment, damning)
  • Details of task: Introducing an Arabic Levantine Twitter dataset for Misogynistic language
  • Size of dataset: 6,603 direct tweet replies
  • Percentage abusive: 48.76%
  • Language: Arabic
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: Hala Mulki and Bilal Ghanem. 2021. Let-Mi: An Arabic Levantine Twitter Dataset for Misogynistic Language. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 154–163, Kyiv, Ukraine (Virtual). Association for Computational Linguistics

Are They our Brothers? Analysis and Detection of Religious Hate Speech in the Arabic Twittersphere

Multilingual and Multi-Aspect Hate Speech Analysis (Arabic)

L-HSAB: A Levantine Twitter Dataset for Hate Speech and Abusive Language

Abusive Language Detection on Arabic Social Media (Twitter)

Abusive Language Detection on Arabic Social Media (Al Jazeera)

Dataset Construction for the Detection of Anti-Social Behaviour in Online Communication in Arabic

Bengali

Hate Speech Detection in the Bengali language: A Dataset and its Baseline Evaluation

Chinese

SWSR: A Chinese Dataset and Lexicon for Online Sexism Detection

Croatian

CoRAL: a Context-aware Croatian Abusive Language Dataset

Datasets of Slovene and Croatian Moderated News Comments

Automating News Comment Moderation with Limited Resources: Benchmarking in Croatian and Estonian

Danish

Offensive Language and Hate Speech Detection for Danish

BAJER: Misogyny in Danish

  • Link to publication: https://aclanthology.org/2021.acl-long.247/
  • Link to data: request here
  • Task description: Hierarchy of abusive content labels including subcategories of misogyny
  • Details of task: "Misogyny detection on social media in Danish"
  • Size of dataset: 27.9K comments
  • Percentage abusive: 7% misogynistic, 27% abusive (i.e. 20% abusive but not misogynistic)
  • Language: Danish
  • Level of annotation: Social media post / comment
  • Platform: Twitter, Facebook, Reddit
  • Medium: text
  • Reference: Zeinert, Inie, & Derczynski, 2021. "Annotating Online Misogyny". Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL
  • Dataset reader: 🤗 strombergnlp/bajer_danish_misogyny
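
The 🤗 dataset reader above can be loaded with the Hugging Face `datasets` library; once loaded, a simple helper gives the label distribution. A minimal sketch - the column and label names used here are assumptions, so check the dataset card for the actual schema:

```python
# Sketch: per-label fractions for an annotated split. The dataset itself
# can be loaded (network access and granted data access required) with:
#   from datasets import load_dataset
#   ds = load_dataset("strombergnlp/bajer_danish_misogyny", split="train")
# then pass a label column in, e.g. label_distribution(ds["subtask_A"])
# -- "subtask_A" is a hypothetical column name, not confirmed here.
from collections import Counter


def label_distribution(labels):
    """Return the fraction of examples carrying each label."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}
```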

Dutch

The Dutch Abusive Language Corpus v1.0 (DALC v1.0)

  • Link to publication: https://aclanthology.org/2021.woah-1.6.pdf
  • Link to data: https://github.com/tommasoc80/DALC
  • Task description: Multilayered (explicitness and target) for abusive language
  • Details of task: Abusive language detection in social media in Dutch
  • Size of dataset: 8,156 tweets
  • Percentage abusive: 15.06% explicitly abusive; 8.09% implicitly abusive
  • Language: Dutch
  • Level of annotation: tweets
  • Platform: Twitter
  • Medium: text
  • Reference: Caselli, T., Schelhaas, A., Weultjes, M., Leistra, F., van der Veen, H., Timmerman, G., and Nissim, M. 2021. "DALC: the Dutch Abusive Language Corpus". Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), ACL.

English

Not All Counterhate Tweets Elicit the Same Replies: A Fine-Grained Analysis

  • Link to publication: https://aclanthology.org/2023.starsem-1.8/
  • Link to data: https://github.com/albanyan/counterhate_reply
  • Task description: Four binary classification tasks to investigate replies to counterhate tweets (1) Binary (Agree, Not), (2) Binary (Support_Hateful-tweet, Not), (3) Binary (Attack_Author, Not), and (4) Binary (Additional_Counterhate, Not)
  • Details of task: Three levels of tweets are considered: a hateful tweet, a counterhate tweet (a reply to a hateful tweet), and all replies to the counterhate tweet. Indicate whether the reply to a counterhate tweet (a) agrees with the counterhate tweet, (b) supports the hateful tweet, (c) attacks the author of the counterhate tweet, and (d) adds additional counterhate
  • Size of dataset: 2,621 (hateful tweet, counterhate tweet, reply) triples
  • Percentage abusive: 100% (All main tweets are hateful tweets)
  • Language: English
  • Level of annotation: Tweets
  • Platform: Twitter
  • Medium: Text
  • Reference: Abdullah Albanyan, Ahmed Hassan, and Eduardo Blanco. 2023. Not All Counterhate Tweets Elicit the Same Replies: A Fine-Grained Analysis. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), pages 71–88, Toronto, Canada. Association for Computational Linguistics.

Pinpointing Fine-Grained Relationships between Hateful Tweets and Replies

  • Link to publication: https://ojs.aaai.org/index.php/AAAI/article/view/21284
  • Link to data: https://github.com/albanyan/hateful-tweets-replies
  • Task description: Four binary classification tasks (1) Binary (Counterhate, Not), (2) Binary (Counterhate_with_Justification, Not), (3) Binary (Attack_Author, Not), and (4) Binary (Additional_Hate, Not)
  • Details of task: Indicate whether the reply to a hateful tweet (a) is counter hate speech, (b) provides a justification, (c) attacks the author of the tweet, and (d) adds additional hate
  • Size of dataset: 5,652 hateful tweets and replies
  • Percentage abusive: 100% (All main tweets are hateful tweets)
  • Language: English
  • Level of annotation: Tweets
  • Platform: Twitter
  • Medium: Text
  • Reference: Abdullah Albanyan and Eduardo Blanco. 2022. Pinpointing Fine-Grained Relationships Between Hateful Tweets and Replies. Proceedings of the AAAI Conference on Artificial Intelligence 36 (10):10418-26.

Large-Scale Hate Speech Detection with Cross-Domain Transfer

  • Link to publication: https://aclanthology.org/2022.lrec-1.238/
  • Link to data: https://github.com/avaapm/hatespeech
  • Task description: Three-class (Hate speech, Offensive language, None)
  • Details of task: Hate speech detection on social media (Twitter) including 5 target groups (gender, race, religion, politics, sports)
  • Size of dataset: 100k English tweets (27,593 hate, 30,747 offensive, 41,660 none)
  • Percentage abusive: 58.3%
  • Language: English
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text and image
  • Reference: Cagri Toraman, Furkan Şahinuç, Eyup Yilmaz. 2022. Large-Scale Hate Speech Detection with Cross-Domain Transfer. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2215–2225, Marseille, France. European Language Resources Association.

Online Abusive Attacks (OAA) Dataset

  • Link to publication: https://ieeexplore.ieee.org/abstract/document/10160004
  • Link to data: https://github.com/RaneemAlharthi/Online-Abusive-Attacks-OAA-Dataset
  • Task description: Binary (abusive, not abusive); hierarchical; six-class (toxicity, severe toxicity, identity attack, insult, profanity, and threat)
  • Details of task: A benchmark dataset providing a holistic view of online abusive attacks, including social media profile data and metadata for both targets and perpetrators, in addition to context
  • Size of dataset: 2.3K Twitter accounts, 5M tweets, and 106.9K categorised conversations
  • Percentage abusive: 97% of attacks motivated by the targets' identities; 3% by the targets' behaviour
  • Language: English
  • Level of annotation: Conversation
  • Platform: Twitter
  • Medium: Text and metadata
  • Reference: Alharthi, R., Alharthi, R., Shekhar, R., and Zubiaga, A. 2023. Target-Oriented Investigation of Online Abusive Attacks: A Dataset and Analysis. IEEE Access.

ConvAbuse

  • Link to publication: https://aclanthology.org/2021.emnlp-main.587/
  • Link to data: https://github.com/amandacurry/convabuse
  • Task description: Hierarchical: (1) abuse binary and abuse severity (1, 0, -1, -2, -3); (2) directedness (explicit, implicit), target (group, individual–system, individual–3rd party), and type (general, sexist, sexual harassment, homophobic, racist, transphobic, ableist, intellectual)
  • Details of task: Abuse detection in conversational AI
  • Size of dataset: 4,185
  • Percentage abusive: c. 20%
  • Language: English
  • Level of annotation: utterance (with conversational context)
  • Platform: Carbonbot on Facebook Messenger and E.L.I.Z.A. chatbots
  • Medium: text
  • Reference: Curry, A. C., Abercrombie, G., & Rieser, V. 2021. ConvAbuse: Data, Analysis, and Benchmarks for Nuanced Detection in Conversational AI. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 7388-7403).

Measuring Hate Speech

  • Link to publication: https://arxiv.org/abs/2009.10277
  • Link to data: https://huggingface.co/datasets/ucberkeley-dlab/measuring-hate-speech
  • Task description: 10 ordinal labels (sentiment, (dis)respect, insult, humiliation, inferior status, violence, dehumanization, genocide, attack/defense, hate speech), which are debiased and aggregated into a continuous hate speech severity score (hate_speech_score) that includes a region for counterspeech & supportive speech. Includes 8 target identity groups (race/ethnicity, religion, national origin/citizenship, gender, sexual orientation, age, disability, political ideology) and 42 identity subgroups.
  • Details of task: Hate speech measurement on social media in English
  • Size of dataset: 39,565 comments annotated by 7,912 annotators on 10 ordinal labels, for 1,355,560 total labels.
  • Percentage abusive: 25%; note that this dichotomization is not in the spirit of the paper/dataset
  • Language: English
  • Level of annotation: Social media comment
  • Platform: Twitter, Reddit, YouTube
  • Medium: Text
  • Reference: Kennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application. arXiv preprint arXiv:2009.10277.
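
Because this dataset provides a continuous hate_speech_score rather than a binary label, any "percentage abusive" depends on where the cut is made. A minimal sketch of one dichotomization, using the approximate thresholds suggested on the dataset card (treat both thresholds as assumptions, not ground truth):

```python
# Sketch: dichotomizing the continuous hate_speech_score.
# Thresholds (0.5 for hate, -1.0 for counter/supportive speech) follow
# the approximate guidance on the dataset card; they are assumptions.

def categorize(score, hate_threshold=0.5, counter_threshold=-1.0):
    """Map a continuous hate speech severity score to a coarse label."""
    if score > hate_threshold:
        return "hate"
    if score < counter_threshold:
        return "counter_or_supportive"
    return "neutral_or_ambiguous"


def percentage_abusive(scores):
    """Fraction of comments falling in the 'hate' region."""
    if not scores:
        return 0.0
    hateful = sum(1 for s in scores if categorize(s) == "hate")
    return hateful / len(scores)
```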

Learning From the Worst (Dynamically generated hate speech dataset)

  • Link to publication: https://aclanthology.org/2021.acl-long.132/
  • Link to data: https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset
  • Task description: Multi-category hate speech detection
  • Details of task: Hate detection with fine-grained labels for the type and target of hate. Generated over 4 rounds of human-and-model-in-the-loop adversarial data generation. Collected through Dynabench.
  • Size of dataset: 41,255
  • Percentage abusive: 54%
  • Language: English
  • Level of annotation: posts
  • Platform: Synthetically generated by humans to mimic real-world social media posts
  • Medium: text
  • Reference: Vidgen, B., Thrush, T., Waseem, Z. and Kiela, D., 2021. Learning from the worst: dynamically generated datasets to improve online hate detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (pp. 1667-1682).

The 'Call me sexist, but' sexism dataset

  • Link to publication: https://ojs.aaai.org/index.php/ICWSM/article/view/18085/17888
  • Link to data: https://doi.org/10.7802/2251
  • Task description: Sexism detection based on content and phrasing
  • Details of task: Sexism detection on English social media data informed by survey items measuring sexist attitudes and adversarial examples
  • Size of dataset: 6325
  • Percentage abusive: 28%
  • Language: English
  • Level of annotation: tweets and survey items
  • Platform: Twitter, Social Psychology scales
  • Medium: text
  • Reference: Samory, M., Sen, I., Kohne, J., Flöck, F. and Wagner, C., 2021, May. Call me sexist, but…: Revisiting sexism detection using psychological scales and adversarial samples. In Intl AAAI Conf. Web and Social Media (pp. 573-584).

Hate Towards the Political Opponent: A Twitter Corpus Study of the 2020 US Elections on the Basis of Offensive Speech and Stance Detection

AbuseEval v1.0

  • Link to publication: http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.760.pdf
  • Link to data: https://github.com/tommasoc80/AbuseEval
  • Task description: Explicitness annotation of offensive and abusive content
  • Details of task: Enriched versions of the OffensEval/OLID dataset with the distinction of explicit/implicit offensive messages and the new dimension for abusive messages. Labels for offensive language: EXPLICIT, IMPLICIT, NOT; Labels for abusive language: EXPLICIT, IMPLICIT, NOTABU
  • Size of dataset: 14,100
  • Percentage abusive: 20.75%
  • Language: English
  • Level of annotation: tweets
  • Platform: Twitter
  • Medium: text
  • Reference: Caselli, T., Basile, V., Mitrović, J., Kartoziya, I., and Granitzer, M. 2020. "I feel offended, don't be abusive! Implicit/explicit messages in offensive and abusive language". In Proceedings of the 12th Language Resources and Evaluation Conference (pp. 6193-6202). European Language Resources Association.

Do You Really Want to Hurt Me? Predicting Abusive Swearing in Social Media

Multimodal Meme Dataset (MultiOFF) for Identifying Offensive Content in Image and Text

Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-based Hate

  • Link to publication: https://arxiv.org/abs/2108.05921
  • Link to data: https://github.com/HannahKirk/Hatemoji
  • Task description: Branching structure of tasks: Binary (Hate, Not Hate), Within Hate (Type, Target)
  • Details of task: Hate speech detection for text statements including emoji, consisting of a checklist-based test suite (HatemojiCheck) and an adversarially-generated dataset (HatemojiBuild)
  • Size of dataset: HatemojiCheck = 3,930; HatemojiBuild = 5,912.
  • Percentage abusive: HatemojiCheck = 69%, HatemojiBuild = 50%
  • Language: English
  • Level of annotation: Post
  • Platform: Synthetically-Generated
  • Medium: Text with emoji
  • Reference: Kirk, H. R., Vidgen, B., Röttger, P., Thrush, T., & Hale, S. A. 2021. Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate. arXiv preprint arXiv:2108.05921.

HateCheck: Functional Tests for Hate Speech Detection Models

  • Link to publication: https://arxiv.org/pdf/2012.15606.pdf
  • Link to data: https://github.com/paul-rottger/hatecheck-data
  • Task description: Binary (Hate, Not Hate), 7 Targets Within Hate (Women, Trans people, Black people, Gay people, Disabled people, Muslims, Immigrants)
  • Details of task: A checklist of functional tests to evaluate hate speech detection models.
  • Size of dataset: 3,728
  • Percentage abusive: 68%
  • Language: English
  • Level of annotation: Post
  • Platform: Synthetically-Generated
  • Medium: Text
  • Reference: Röttger, P., Vidgen, B., Nguyen, D., Waseem, Z., Margetts, H. and Pierrehumbert, J., 2020. Hatecheck: Functional tests for hate speech detection models. arXiv preprint arXiv:2012.15606.

Semeval-2021 Task 5: Toxic Spans Detection

ToxiSpanSE: An Explainable Toxicity Detection in Code Review Comments

  • Link to publication: https://arxiv.org/abs/2307.03386
  • Link to data and tool: https://github.com/WSU-SEAL/ToxiSpanSE
  • Task description: Binary toxic spans (Toxic, Non-toxic)
  • Details of task: Toxicity, Context
  • Size of dataset: 19,651
  • Percentage toxic at span level: 13.85%
  • Language: English
  • Level of annotation: Code Review Comments
  • Platform: Open Source Software
  • Medium: Text
  • Reference: Sarker, Jaydeb, Sultana, Sayma, Wilson, Steven R., and Amiangshu Bosu. "ToxiSpanSE: An Explainable Toxicity Detection in Code Review Comments" The 17th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), 2023.

Human-in-the-Loop for Data Collection: a Multi-Target Counter Narrative Dataset to Fight Online Hate Speech

  • Link to publication: https://aclanthology.org/2021.acl-long.250.pdf
  • Link to data: https://github.com/marcoguerini/CONAN
  • Task description: Binary (hateful, not)
  • Details of task: race, religion, country of origin, sexual orientation, disability, gender
  • Size of dataset: 5,003
  • Percentage abusive: 100%
  • Language: English
  • Level of annotation: Posts
  • Platform: Semi-synthetic text
  • Medium: Text
  • Reference: Margherita Fanton, Helena Bonaldi, Serra Sinem Tekiroğlu, and Marco Guerini. 2021. Human-in-the-Loop for Data Collection: a Multi-Target Counter Narrative Dataset to Fight Online Hate Speech. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics: Long Papers.

HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection

  • Link to publication: https://arxiv.org/abs/2012.10289
  • Link to data: https://github.com/punyajoy/HateXplain
  • Task description: Level of hate (hate, offensive or normal), on target groups (race, religion, gender, sexual orientation, miscellaneous), and rationales
  • Details of task: Hate per se
  • Size of dataset: 20,148
  • Percentage abusive: 57%
  • Language: English
  • Level of annotation: Words, phrases, posts
  • Platform: Twitter and Gab
  • Medium: Text
  • Reference: Mathew, B., Saha, P., Yimam, S. M., Biemann, C., Goyal, P., & Mukherjee, A. (2021, May). HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 17, pp. 14867-14875).

ALONE: A Dataset for Toxic Behavior among Adolescents on Twitter

  • Link to publication: https://arxiv.org/pdf/2008.06465.pdf
  • Link to data: Data made available upon request; please email Ugur Kursuncu ([email protected])
  • Task description: Binary (Toxic, Non-Toxic)
  • Details of task: Annotates interactions (Tweets and their replies), and assigns keywords describing use of emojis, URL content and images.
  • Size of dataset: 688
  • Percentage abusive: 17%
  • Language: English
  • Level of annotation: Post
  • Platform: Twitter
  • Medium: Multimodal (text, images, emojis, metadata)
  • Reference: Wijesiriwardene, T., Inan, H., Kursuncu, U., Gaur, M., Shalin, V., Thirunarayan, K., Sheth, A. and Arpinar, I., 2020. ALONE: A Dataset for Toxic Behavior among Adolescents on Twitter. arXiv preprint arXiv:2008.06465.

Towards a Comprehensive Taxonomy and Large-Scale Annotated Corpus for Online Slur Usage

Predicting the Type and Target of Offensive Posts in Social Media

  • Link to publication: https://aclanthology.org/N19-1144.pdf
  • Link to data: https://scholar.harvard.edu/malmasi/olid
  • Task description: Branching structure of tasks. A: offensive / not, B: targeted insult / untargeted, C: individual, group, other.
  • Details of task: Hate per se
  • Size of dataset: 14,100
  • Percentage abusive: 33%
  • Language: English
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., & Kumar, R. (2019, June). Predicting the Type and Target of Offensive Posts in Social Media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 1415-1420).
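
The branching structure makes the three levels dependent: level B applies only to offensive posts (A = OFF), and level C only to targeted insults (B = TIN). A small validity check for that hierarchy, using the label values from the OLID annotation scheme:

```python
# Sketch: validating OLID's hierarchical labels (A -> B -> C).
# Level B is defined only when A == "OFF"; level C only when B == "TIN".

def valid_olid_labels(a, b=None, c=None):
    """Check that an (A, B, C) label triple respects the OLID hierarchy."""
    if a not in {"OFF", "NOT"}:
        return False
    if a == "NOT":  # non-offensive posts carry no further labels
        return b is None and c is None
    if b not in {"TIN", "UNT"}:
        return False
    if b == "UNT":  # untargeted offence has no target type
        return c is None
    return c in {"IND", "GRP", "OTH"}
```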

Nuanced metrics for measuring unintended bias with real data for text classification

  • Link to publication: https://arxiv.org/pdf/1903.04561.pdf
  • Link to data: https://www.tensorflow.org/datasets/catalog/civil_comments
  • Task description: Toxicity (severe, obscene, threat, insult, identity attack, sexual explicit), and several identity attributes (e.g., gender, religion and race)
  • Details of task: Hate per se
  • Size of dataset: 1,804,875
  • Percentage abusive: 8%
  • Language: English
  • Level of annotation: Comments/posts
  • Platform: Civil Comments
  • Medium: Text
  • Reference: Borkan, D., Dixon, L., Sorensen, J., Thain, N., & Vasserman, L. (2019, May). Nuanced metrics for measuring unintended bias with real data for text classification. In Companion proceedings of the 2019 world wide web conference (pp. 491-500).

Introducing CAD: the Contextual Abuse Dataset

  • Link to publication: https://aclanthology.org/2021.naacl-main.182.pdf
  • Link to data: https://zenodo.org/record/4881008#.Ye6OwhP7R6o
  • Task description: Contextually abusive language, person-directed + group-directed
  • Details of task: Primary categories (secondary categories): Abusive + Identity-directed (derogation/animosity/threatening/glorification/dehumanization), Abusive + Person-directed (derogation/animosity/threatening/glorification/dehumanization), Abusive + Affiliation directed (abuse to them/abuse about them), Counter Speech (against identity-directed abuse/against affiliation-directed abuse/against person-directed abuse), Non-hateful Slurs and Neutral.
  • Size of dataset: 25,000
  • Percentage abusive: Affiliation-directed, 6%; Identity-directed, 13%; Person-directed, 5%
  • Language: English
  • Level of annotation: Conversation thread
  • Platform: Reddit
  • Medium: Text
  • Reference: Vidgen, B., Nguyen, D., Margetts, H., Rossini, P., and Troble, R., Introducing CAD: the Contextual Abuse Dataset, 2021, In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp.2289–2303

Automated Hate Speech Detection and the Problem of Offensive Language

Hate Speech Dataset from a White Supremacy Forum

Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter

Detecting Online Hate Speech Using Context Aware Models

The Gab Hate Corpus: A collection of 27k posts annotated for hate speech

  • Link to publication: https://psyarxiv.com/hqjxn/
  • Link to data: https://osf.io/edua3/
  • Task description: Binary (Hate vs. Offensive/Vulgarity), Binary (Assault on human Dignity/Call for Violence – sub task on message delivery, binary: explicit/implicit), Multinomial classification: Identity based hate (race/ethnicity, nationality/regionalism/xenophobia, gender, religion/belief system, sexual orientation, ideology, political identification/party, mental/physical health)
  • Details of task: Group-directed + Person-directed
  • Size of dataset: 27,665
  • Percentage abusive: 9% Hate, 6% Offensive/Vulgar
  • Language: English
  • Level of annotation: Post
  • Platform: Gab
  • Medium: Text
  • Reference: Kennedy, B., Atari, M., Mostafazadeh Davani, A., Yeh, L., Omrani, A., Kim, Y., Coombs, K., Havaldar, S., Portillo-Wightman, G., Gonzalez, E., Hoover, J., Azatian, A., Hussain, A., Lara, A., Olmos, G., Omary, A., Park, C., Wang, C., Wang, X., Zhang, Y. and Dehghani, M., 2018. The Gab Hate Corpus: A collection of 27k posts annotated for hate speech. PsyArXiv.

Are You a Racist or Am I Seeing Things? Annotator Influence on Hate Speech Detection on Twitter

When Does a Compliment Become Sexist? Analysis and Classification of Ambivalent Sexism Using Twitter Data

Overview of the Task on Automatic Misogyny Identification at IberEval 2018 (English)

CONAN - COunter NArratives through Nichesourcing: a Multilingual Dataset of Responses to Fight Online Hate Speech (English)

  • Link to publication: https://www.aclweb.org/anthology/P19-1271.pdf
  • Link to data: https://github.com/marcoguerini/CONAN
  • Task description: Binary (Islamophobic / not), multi-topic (Culture, Economics, Crimes, Rapism, Terrorism, Women Oppression, History, Other/generic)
  • Details of task: Islamophobia
  • Size of dataset: 1,288
  • Percentage abusive: 100%
  • Language: English
  • Level of annotation: Posts
  • Platform: Synthetic / Facebook
  • Medium: Text
  • Reference: Chung, Y., Kuzmenko, E., Tekiroglu, S. and Guerini, M., 2019. CONAN - COunter NArratives through Nichesourcing: a Multilingual Dataset of Responses to Fight Online Hate Speech. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy: Association for Computational Linguistics, pp.2819-2829.

Characterizing and Detecting Hateful Users on Twitter

A Benchmark Dataset for Learning to Intervene in Online Hate Speech (Gab)

A Benchmark Dataset for Learning to Intervene in Online Hate Speech (Reddit)

Multilingual and Multi-Aspect Hate Speech Analysis (English)

Exploring Hate Speech Detection in Multimodal Publications

hatEval, SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (English)

  • Link to publication: https://www.aclweb.org/anthology/S19-2007
  • Link to data: http://competitions.codalab.org/competitions/19935
  • Task description: Branching structure of tasks: Binary (Hate, Not), Within Hate (Group, Individual), Within Hate (Aggressive, Not)
  • Details of task: Group-directed + Person-directed
  • Size of dataset: 13,000
  • Percentage abusive: 40%
  • Language: English
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: Basile, V., Bosco, C., Fersini, E., Nozza, D., Patti, V., Pardo, F., Rosso, P. and Sanguinetti, M., 2019. SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter. In: Proceedings of the 13th International Workshop on Semantic Evaluation. Minneapolis, Minnesota: Association for Computational Linguistics, pp.54-63.

Peer to Peer Hate: Hate Speech Instigators and Their Targets

Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages

Detecting East Asian Prejudice on Social media

  • Link to publication: https://www.aclweb.org/anthology/2020.alw-1.19.pdf
  • Link to data: https://zenodo.org/record/3816667
  • Task description: Task 1: Thematic annotation (East Asia/Covid-19). Task 2: Primary category annotation: 1) Hostility against an East Asian (EA) entity, 2) Criticism of an East Asian entity, 3) Counter speech, 4) Discussion of East Asian prejudice, 5) Non-related. Task 3: Secondary category annotation (if (1) or (2), identifying which East Asian entity was targeted; if (1), interpersonal abuse/threatening language/dehumanization).
  • Details of task: Detecting East Asian prejudice
  • Size of dataset: 20,000
  • Percentage abusive: 27% (Hostility, 19.5%; Criticism, 7.2%)
  • Language: English
  • Level of annotation: Post
  • Platform: Twitter
  • Medium: Text
  • Reference: Vidgen, B., Botelho, A., Broniatowski, D., Guest, E., Hall, M., Margetts, H., Tromble, R., Waseem, Z. and Hale, S., Detecting East Asian Prejudice on Social media, 2020, In: Proceedings of the Fourth Workshop on Online Abuse and Harms, pp.162–172

Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior

  • Link to publication: https://arxiv.org/pdf/1802.00393.pdf
  • Link to data: https://dataverse.mpi-sws.org/dataset.xhtml?persistentId=doi:10.5072/FK2/ZDTEMN
  • Task description: Multi-thematic (Abusive, Hateful, Normal, Spam)
  • Details of task: Hate per se
  • Size of dataset: 80,000
  • Percentage abusive: 18%
  • Language: English
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Annotation process: Very detailed information is given: multiple rounds, with a smaller 300-tweet dataset used to test the schema. For the final 80k tweets, 5 judgements per tweet were collected via CrowdFlower
  • Annotation agreement: 4/5 majority on 55.9% of tweets, 3/5 on 36.6%, 2/5 on 7.5%
  • Reference: Founta, A., Djouvas, C., Chatzakou, D., Leontiadis, I., Blackburn, J., Stringhini, G., Vakali, A., Sirivianos, M. and Kourtellis, N., 2018. Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior. arXiv preprint arXiv:1802.00393.
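
The agreement figures above record, for each tweet, how many of the 5 judgements matched the majority label. A minimal sketch of recomputing that distribution from raw judgements, assuming five judgements per item as in this dataset's setup:

```python
# Sketch: computing the majority-size distribution from per-item
# crowd judgements (five per tweet, as in the annotation process above).
from collections import Counter


def majority_size_distribution(judgements_per_item):
    """For each item, find how many annotators chose the modal label,
    then return the fraction of items at each majority size."""
    sizes = Counter()
    for judgements in judgements_per_item:
        modal_count = Counter(judgements).most_common(1)[0][1]
        sizes[modal_count] += 1
    total = len(judgements_per_item)
    return {size: count / total for size, count in sizes.items()}
```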

A Large Labeled Corpus for Online Harassment Research

  • Link to publication: http://www.cs.umd.edu/~golbeck/papers/trolling.pdf
  • Link to data: [email protected]
  • Task description: Binary (Harassment, Not)
  • Details of task: Person-directed
  • Size of dataset: 35,000
  • Percentage abusive: 16%
  • Language: English
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: Golbeck, J., Ashktorab, Z., Banjo, R., Berlinger, A., Bhagwan, S., Buntain, C., Cheakalos, P., Geller, A., Gergory, Q., Gnanasekaran, R., Gnanasekaran, R., Hoffman, K., Hottle, J., Jienjitlert, V., Khare, S., Lau, R., Martindale, M., Naik, S., Nixon, H., Ramachandran, P., Rogers, K., Rogers, L., Sarin, M., Shahane, G., Thanki, J., Vengataraman, P., Wan, Z. and Wu, D., 2017. A Large Labeled Corpus for Online Harassment Research. In: Proceedings of the 2017 ACM on Web Science Conference. New York: Association for Computing Machinery, pp.229-233.

Ex Machina: Personal Attacks Seen at Scale, Personal attacks

Ex Machina: Personal Attacks Seen at Scale, Toxicity

Detecting cyberbullying in online communities (World of Warcraft)

Detecting cyberbullying in online communities (League of Legends)

A Quality Type-aware Annotated Corpus and Lexicon for Harassment Research

Ex Machina: Personal Attacks Seen at Scale, Aggression and Friendliness

Are Chess Discussions Racist? An Adversarial Hate Speech Data Set

ETHOS: an Online Hate Speech Detection Dataset (Binary)

ETHOS: an Online Hate Speech Detection Dataset (Multi label)

Twitter Sentiment Analysis

Toxicity Detection in Software Engineering: Automated Identification of Toxic Code Reviews Using ToxiCR

Toxicity Detection: Does Context Really Matter? CAT-LARGE (No Context)

Toxicity Detection: Does Context Really Matter? CAT-LARGE (With Context)

Anatomy of Online Hate: Developing a Taxonomy and Machine Learning Models for Identifying and Classifying Hate in Online News Media

Estonian

Automating News Comment Moderation with Limited Resources: Benchmarking in Croatian and Estonian

HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection

  • Link to publication: https://arxiv.org/pdf/2012.10289.pdf
  • Link to data: https://github.com/punyajoy/HateXplain
  • Task description: Binary (Hate, Not) and Three-class (Hate speech, Offensive language, None)
  • Details of task: Hatespeech detection on social media in English, including 10 categories: African, Islam, Jewish, LGBTQ, Women, Refugee, Arab, Caucasian, Hispanic, Asian
  • Size of dataset: 20,148
  • Percentage abusive: 57%
  • Language: English
  • Level of annotation: Posts
  • Platform: Twitter and Gab
  • Medium: Text
  • Reference: Mathew, B., Saha, P., Yimam, S. M., Biemann, C., Goyal, P., & Mukherjee, A. (2020). Hatexplain: A benchmark dataset for explainable hate speech detection. arXiv preprint arXiv:2012.10289.

French

CONAN - COunter NArratives through Nichesourcing: a Multilingual Dataset of Responses to Fight Online Hate Speech (French)

  • Link to publication: https://www.aclweb.org/anthology/P19-1271.pdf
  • Link to data: https://github.com/marcoguerini/CONAN
  • Task description: Binary (Islamophobic / not), Multi-topic (Culture, Economics, Crimes, Rapism, Terrorism, Women Oppression, History, Other/generic)
  • Details of task: Islamophobia
  • Size of dataset: 1,719
  • Percentage abusive: 100%
  • Language: French
  • Level of annotation: Posts
  • Platform: Synthetic / Facebook
  • Medium: Text
  • Reference: Chung, Y., Kuzmenko, E., Tekiroglu, S. and Guerini, M., 2019. CONAN - COunter NArratives through Nichesourcing: a Multilingual Dataset of Responses to Fight Online Hate Speech. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy: Association for Computational Linguistics, pp.2819-2829.

Multilingual and Multi-Aspect Hate Speech Analysis (French)

CyberAgressionAdo-v1

  • Link to publication: (url) - link to the documentation and/or a data statement about the data
  • Link to data: (url) - direct download is preferred, e.g. a link straight to a .zip file
  • Task description: The collected conversations are annotated with a fine-grained tagset covering participant roles, the presence of hate speech, the type of verbal abuse in each message, and whether utterances use humorous figurative devices (e.g., sarcasm or irony)
  • Details of task: The dataset supports several subtasks related to online hate detection in a conversational setting (hate speech detection, bullying participant role detection, verbal abuse detection, etc.)
  • Size of dataset: 19 conversations
  • Language: French
  • Level of annotation: exchanged messages
  • Platform: Collected from role-playing games mimicking cyberaggression situations occurring on private instant messaging platforms
  • Medium: text (csv)
  • Reference: Anaïs Ollagnier, Elena Cabrio, Serena Villata, Catherine Blaya. CyberAgressionAdo-v1: a Dataset of Annotated Online Aggressions in French Collected through a Role-playing Game. Language Resources and Evaluation Conference, Jun 2022, Marseille, France. ⟨hal-03765860⟩

German

DeTox: A Comprehensive Dataset for German Offensive Language and Conversation Analysis

  • Link to publication: https://aclanthology.org/2022.woah-1.14.pdf
  • Link to data: https://github.com/hdaSprachtechnologie/detox
  • Task description: Comprehensive annotation schema (including sentiment, hate speech, type of discrimination, criminal relevance, expression, toxicity, extremism, target, threat)
  • Details of task: About half of the comments are from coherent comment threads which allows simple conversation analyses. Every comment was annotated by three annotators.
  • Size of dataset: 10,278
  • Percentage abusive: 10.85%
  • Language: German
  • Level of annotation: Comments
  • Platform: Twitter
  • Medium: Text
  • Reference: Demus, C., Pitz, P., Schütz, M., Probol, N., Siegel, M., and Labudde, L. 2022. DeTox: A Comprehensive Dataset for German Offensive Language and Conversation Analysis. In Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH), pages 143–153, Seattle, Washington (Hybrid). Association for Computational Linguistics.

RP-Mod & RP-Crowd: Moderator- and Crowd-Annotated German News Comment Datasets

Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis

Detecting Offensive Statements Towards Foreigners in Social Media

GermEval 2018

Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages

GAHD: A German Adversarial Hate Speech Dataset

  • Link to publication: https://arxiv.org/abs/2403.19559
  • Link to data: https://github.com/jagol/gahd
  • Task description: Binary hate speech detection ("hate speech", "not-hate speech")
  • Details of task: Consists of adversarial and contrastive examples
  • Size of dataset: 10,996 texts
  • Percentage abusive: 42.4%
  • Language: German
  • Level of annotation: Post/Sentence
  • Platform: Synthetic data and news sentences
  • Medium: Text
  • Reference: Goldzycher, J., Röttger, P., and Schneider, G., 2024. Improving Adversarial Data Collection by Supporting Annotators: Lessons from GAHD, a German Hate Speech Dataset. To appear in the Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL 2024), Mexico, Mexico City, June 17–19.

Greek

Deep Learning for User Comment Moderation, Flagged Comments

Deep Learning for User Comment Moderation, Moderated Comments

Offensive Language Identification in Greek

Hindi / Hindi-English

Hostility Detection Dataset in Hindi

Aggression-annotated Corpus of Hindi-English Code-mixed Data

Aggression-annotated Corpus of Hindi-English Code-mixed Data

Did You Offend Me? Classification of Offensive Tweets in Hinglish Language

A Dataset of Hindi-English Code-Mixed Social Media Text for Hate Speech Detection

Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages

Indonesian

Hate Speech Detection in the Indonesian Language: A Dataset and Preliminary Study

Multi-Label Hate Speech and Abusive Language Detection in Indonesian Twitter

  • Link to publication: https://www.aclweb.org/anthology/W19-3506
  • Link to data: https://github.com/okkyibrohim/id-multi-label-hate-speech-and-abusive-language-detection
  • Task description: (No hate speech, No hate speech but abusive, Hate speech but no abuse, Hate speech and abuse), within hate, category (Religion/creed, Race/ethnicity, Physical/disability, Gender/sexual orientation, Other invective/slander), within hate, strength (Weak, Moderate and Strong)
  • Details of task: Religion, Race, Disability, Gender
  • Size of dataset: 13,169
  • Percentage abusive: 42%
  • Language: Indonesian
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: Okky Ibrohim, M. and Budi, I., 2019. Multi-label Hate Speech and Abusive Language Detection in Indonesian Twitter. In: Proceedings of the Third Workshop on Abusive Language Online. Florence, Italy: Association for Computational Linguistics, pp.46-57.

A Dataset and Preliminaries Study for Abusive Language Detection in Indonesian Social Media

Korean

BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection

  • Link to publication: https://www.aclweb.org/anthology/2020.socialnlp-1.4
  • Link to data: https://github.com/kocohub/korean-hate-speech
  • Task description: Binary (Gender bias, No gender bias), Ternary (Gender bias, Other biases, None), Ternary (Hate, Offensive, None)
  • Details of task: Person/Group-directed, Gender/Sexual orientation, Sexism, Harmfulness/Toxicity
  • Size of dataset: 9,381
  • Percentage abusive: 33.87% (Bias), 57.77% (Toxicity)
  • Language: Korean
  • Level of annotation: Comments
  • Platform: NAVER entertainment news
  • Medium: Text
  • Reference: Moon, J., Cho, W. I., and Lee, J., 2020. BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection. In: Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media Month: July. Online: Association for Computational Linguistics, pp.25-31.

Latvian

Latvian newspaper user comment dataset

  • Link to publication: https://aclanthology.org/2021.hackashop-1.14.pdf
  • Link to data: https://www.clarin.si/repository/xmlui/handle/11356/1407
  • Task description: Binary (Deleted, Not)
  • Details of task: Content flagged by the newspaper's real moderators
  • Size of dataset: 12M
  • Percentage abusive: ~10%
  • Language: Latvian
  • Level of annotation: Posts
  • Platform: Newspaper comments
  • Medium: Text
  • Reference: Senja Pollak, Marko Robnik-Šikonja, Matthew Purver, Michele Boggia, Ravi Shekhar, Marko Pranjić, Salla Salmela, Ivar Krustok, Tarmo Paju, Carl-Gustav Linden, Leo Leppänen, Elaine Zosa, Matej Ulčar, Linda Freiental, Silver Traat, Luis Adrián Cabrera-Diego, Matej Martinc, Nada Lavrač, Blaž Škrlj, Martin Žnidaršič, Andraž Pelicon, Boshko Koloski, Vid Podečan, Janez Kranjc, Shane Sheehan, Emanuela Boros, Jose Moreno, Antoine Doucet, Hannu Toivonen (2021). EMBEDDIA Tools, Datasets and Challenges: Resources and Hackathon Contributions. Proceedings of the Hackashop on News Media Content Analysis and Automated Report Generation (EACL).

Italian

An Italian Twitter Corpus of Hate Speech against Immigrants

  • Link to publication: https://www.aclweb.org/anthology/L18-1443
  • Link to data: https://github.com/msang/hate-speech-corpus
  • Task description: Binary (Immigrants/Roma/Muslims, Not), additional categories. Within Hate, Intensity measurement (Aggressiveness: No, Weak, Strong, Offensiveness: No, Weak, Strong, Irony: No, Yes, Stereotype: No, Yes, Incitement degree: 0-4)
  • Details of task: Immigrants, Roma and Muslims + numerous sub-categorizations
  • Size of dataset: 1,827
  • Percentage abusive: 13%
  • Language: Italian
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: Sanguinetti, M., Poletto, F., Bosco, C., Patti, V. and Stranisci, M., 2018. An Italian Twitter Corpus of Hate Speech against Immigrants. In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Miyazaki, Japan: European Language Resources Association (ELRA).

Overview of the EVALITA 2018 Hate Speech Detection Task (Facebook)

  • Link to publication: http://ceur-ws.org/Vol-2263/paper010.pdf
  • Link to data: http://www.di.unito.it/~tutreeb/haspeede-evalita18/data.html
  • Task description: Binary (Hate, Not), Within hate for Facebook only, strength (No hate, Weak hate, Strong hate) and theme ((1) religion, (2) physical and/or mental handicap, (3) socio-economic status, (4) politics, (5) race, (6) sex and gender, (7) Other)
  • Details of task: Religion, physical and/or mental handicap, socio-economic status, politics, race, sex and gender
  • Size of dataset: 4,000
  • Percentage abusive: 51%
  • Language: Italian
  • Level of annotation: Posts
  • Platform: Facebook
  • Medium: Text
  • Reference: Bosco, C., Dell'Orletta, F. and Poletto, F., 2018. Overview of the EVALITA 2018 Hate Speech Detection Task. In: EVALITA 2018-Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. CEUR, pp.1-9.

Overview of the EVALITA 2018 Hate Speech Detection Task (Twitter)

Automatic Misogyny Identification (AMI) at Evalita 2020

  • Link to publication: http://ceur-ws.org/Vol-2765/paper161.pdf
  • Link to data: https://github.com/dnozza/ami2020
  • Task description: Binary (misogyny / not), Binary (aggressive / not), Binary on synthetic fairness test (misogyny / not)
  • Details of task: Sexism
  • Size of dataset: 6,000 and 1,961 (synthetic fairness test)
  • Percentage abusive: 47% and 50% (synthetic fairness test)
  • Language: Italian
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: Fersini, E., Nozza, D., and Rosso, P., 2020. AMI @ EVALITA2020: Automatic Misogyny Identification. In: Proceedings of the 7th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA 2020).

CONAN - COunter NArratives through Nichesourcing: a Multilingual Dataset of Responses to Fight Online Hate Speech (Italian)

  • Link to publication: https://www.aclweb.org/anthology/P19-1271.pdf
  • Link to data: https://github.com/marcoguerini/CONAN
  • Task description: Binary (Islamophobic, Not), Multi-topic (Culture, Economics, Crimes, Rapism, Terrorism, Women Oppression, History, Other/generic)
  • Details of task: Islamophobia
  • Size of dataset: 1,071
  • Percentage abusive: 100%
  • Language: Italian
  • Level of annotation: Posts
  • Platform: Synthetic / Facebook
  • Medium: Text
  • Reference: Chung, Y., Kuzmenko, E., Tekiroglu, S. and Guerini, M., 2019. CONAN - COunter NArratives through Nichesourcing: a Multilingual Dataset of Responses to Fight Online Hate Speech. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy: Association for Computational Linguistics, pp.2819-2829.

Creating a WhatsApp Dataset to Study Pre-teen Cyberbullying

  • Link to publication: https://www.aclweb.org/anthology/W18-5107
  • Link to data: https://github.com/dhfbk/WhatsApp-Dataset
  • Task description: Binary (Cyberbullying, Not)
  • Details of task: Person-directed
  • Size of dataset: 14,600
  • Percentage abusive: 8%
  • Language: Italian
  • Level of annotation: Posts, structured into 10 chats, with token level information
  • Platform: Synthetic / Whatsapp
  • Medium: Text
  • Reference: Sprugnoli, R., Menini, S., Tonelli, S., Oncini, F. and Piras, E., 2018. Creating a WhatsApp Dataset to Study Pre-teen Cyberbullying. In: Proceedings of the 2nd Workshop on Abusive Language Online (ALW2) Month: October. Brussels, Belgium: Association for Computational Linguistics, pp.51-59.

Polish

Results of the PolEval 2019 Shared Task 6: First Dataset and Open Shared Task for Automatic Cyberbullying Detection in Polish Twitter

  • Link to publication: http://poleval.pl/files/poleval2019.pdf
  • Link to data: http://poleval.pl/tasks/task6
  • Task description: Harmfulness score (three values), Multilabel from seven phenomena
  • Details of task: Person-directed
  • Size of dataset: 10,041
  • Percentage abusive: 9%
  • Language: Polish
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: Ogrodniczuk, M. and Kobyliński, L., 2019. Results of the PolEval 2019 Shared Task 6: First Dataset and Open Shared Task for Automatic Cyberbullying Detection in Polish Twitter. In: Proceedings of the PolEval 2019 Workshop. Warszawa: Institute of Computer Science, Polish Academy of Sciences.

Portuguese

Toxic Language Dataset for Brazilian Portuguese (ToLD-Br)

  • Link to publication: https://arxiv.org/abs/2010.04543
  • Link to data: https://github.com/JAugusto97/ToLD-Br
  • Task description: Multiclass (LGBTQ+phobia, Insult, Xenophobia, Misogyny, Obscene, Racism)
  • Details of task: Three annotators per example, demographically diverse selected annotators.
  • Size of dataset: 21,000
  • Percentage abusive: 44%
  • Language: Portuguese
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: João A. Leite, Diego F. Silva, Kalina Bontcheva, Carolina Scarton (2020): Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis. AACL-IJCNLP 2020

A Hierarchically-Labeled Portuguese Hate Speech Dataset

  • Link to publication: https://www.aclweb.org/anthology/W19-3510
  • Link to data: https://b2share.eudat.eu/records/9005efe2d6be4293b63c3cffd4cf193e
  • Task description: Binary (Hate, Not), Multi-level (81 categories, identified inductively; categories have different granularities and content can be assigned to multiple categories at once)
  • Details of task: Multiple identities inductively categorized
  • Size of dataset: 3,059
  • Percentage abusive: 32%
  • Language: Portuguese
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: Fortuna, P., Rocha da Silva, J., Soler-Company, J., Warner, L. and Nunes, S., 2019. A Hierarchically-Labeled Portuguese Hate Speech Dataset. In: Proceedings of the Third Workshop on Abusive Language Online. Florence, Italy: Association for Computational Linguistics, pp.94-104.

Offensive Comments in the Brazilian Web: A Dataset and Baseline Results

Offensive Language Identification Dataset for Brazilian Portuguese (OLID-BR)

Russian

Automatic Toxic Comment Detection in Social Media for Russian

Reducing Unintended Identity Bias in Russian Hate Speech Detection

  • Link to publication: https://aclanthology.org/2020.alw-1.8.pdf
  • Link to data: License Required (Last checked 17/01/2022)
  • Task description: Binary (Hate, Not)
  • Details of task: Toxicity, Harassment, Sexism, Homophobia, Nationalism
  • Size of dataset: 100,000
  • Percentage abusive: NA
  • Language: Russian
  • Level of annotation: Posts
  • Platform: YouTube
  • Medium: Text
  • Reference: Zueva, N., et al., 2020. Reducing Unintended Identity Bias in Russian Hate Speech Detection. In: Proceedings of the Fourth Workshop on Online Abuse and Harms, pp.65-69.

Detection of Abusive Speech for Mixed Sociolects of Russian and Ukrainian Languages

Russian South Park

Slovene

Datasets of Slovene and Croatian Moderated News Comments

Spanish

Overview of MEX-A3T at IberEval 2018: Authorship and Aggressiveness Analysis in Mexican Spanish Tweets

Overview of the Task on Automatic Misogyny Identification at IberEval 2018 (Spanish)

hatEval, SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (Spanish)

  • Link to publication: https://www.aclweb.org/anthology/S19-2007
  • Link to data: competitions.codalab.org/competitions/19935
  • Task description: Branching structure of tasks: Binary (Hate, Not), Within Hate (Group, Individual), Within Hate (Aggressive, Not)
  • Details of task: Group-directed + Person-directed
  • Size of dataset: 6,600
  • Percentage abusive: 40%
  • Language: Spanish
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: Basile, V., Bosco, C., Fersini, E., Nozza, D., Patti, V., Pardo, F., Rosso, P. and Sanguinetti, M., 2019. SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter. In: Proceedings of the 13th International Workshop on Semantic Evaluation. Minneapolis, Minnesota: Association for Computational Linguistics, pp.54-63.

Turkish

Large-Scale Hate Speech Detection with Cross-Domain Transfer

  • Link to publication: https://aclanthology.org/2022.lrec-1.238/
  • Link to data: https://github.com/avaapm/hatespeech
  • Task description: Three-class (Hate speech, Offensive language, None)
  • Details of task: Hate speech detection on social media (Twitter) including 5 target groups (gender, race, religion, politics, sports)
  • Size of dataset: 100k (7,325 hate, 27,140 offensive, 65,535 none)
  • Percentage abusive: 34.5%
  • Language: Turkish
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text and image
  • Reference: Cagri Toraman, Furkan Şahinuç, Eyup Yilmaz. 2022. Large-Scale Hate Speech Detection with Cross-Domain Transfer. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2215–2225, Marseille, France. European Language Resources Association.

A Corpus of Turkish Offensive Language on Social Media

Ukrainian

Detection of Abusive Speech for Mixed Sociolects of Russian and Ukrainian Languages

Urdu

Hate-Speech and Offensive Language Detection in Roman Urdu

  • Link to publication: https://www.aclweb.org/anthology/2020.emnlp-main.197/
  • Link to data: https://github.com/haroonshakeel/roman_urdu_hate_speech
  • Task description: Two subtasks: coarse-grained classification (Hate-Offensive vs Normal) and fine-grained classification (Abusive/Offensive, Sexism, Religious Hate, Profane, Normal)
  • Details of task: Binary classification; the Hate-Offensive label is further broken down into 4 fine-grained labels
  • Size of dataset: 10,041
  • Percentage abusive: 0.24%
  • Language: Urdu-English
  • Level of annotation: Posts
  • Platform: Twitter
  • Medium: Text
  • Reference: Hammad Rizwan, Muhammad Haroon Shakeel, and Asim Karim. 2020. Hate-speech and offensive language detection in Roman Urdu. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2512–2522, Online. Association for Computational Linguistics.

Lists of abusive keywords

  1. The Weaponized Word

    • "The Weaponized Word offers several thousand discriminatory, derogatory and threatening terms across 125+ languages, available through a RESTful API. Access is free for most academic researchers and registered humanitarian nonprofits."
    • Data link: weaponizedword.org
  2. Hurtlex

  3. Gorrell et al.

  4. Wiegand et al.

  5. Chandrasekharan et al.

  6. Jiang et al.

How to Contribute

We accept entries to our catalogue based on pull requests to the README.md file. The dataset must be available for download to be included in the list. If you want to add an entry, follow these steps!

  • Please send just one dataset addition/edit at a time - edit it in, then save. This will make everyone's life easier (including yours!)
  • Go to the README.md file and click the edit button in the top right corner of the file.


  • Edit the markdown file. Please first go to the correct language section. The items are then sorted by their publication date (newest first). Add your item by copying and pasting the following template and filling in all the details:
#### Title
* Link to publication: [url](url) - link to the documentation and/or a data statement about the data
* Link to data: [url](url) - direct download is preferred, e.g. a link straight to a .zip file
* Task description: How the task is framed in this data, e.g. "Binary (Hate, Not)", "Hierarchical", "Three-class (Hate speech, Offensive language, None)"
* Details of task: Free-text description of the task this data models, e.g. "Misogyny detection on social media in Danish"
* Size of dataset: Give the number of instances of abusive/non-abusive/other items
* Percentage abusive: e.g. 1.2%
* Language: e.g. Arabic
* Level of annotation: What is an "instance", in this dataset? e.g. Posts, User, Conversation, ... 
* Platform: e.g. twitter, snapchat, ..
* Medium: text / image / audio / ...
* Reference: Give a bibliographic reference for the data (if there is one), with title, author, year, venue etc
  • Check the “Preview Changes” tab to confirm everything is good to go!
  • If you’re ready to submit, propose the changes. Make sure you give some brief detail on the proposed change.


  • Submit the pull request on the next page when prompted.
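Before opening the pull request, it is worth checking that a new entry carries every field from the template. A hypothetical helper sketch (the field list mirrors the template above; the entry text and everything else here are illustrative):

```python
# Template fields every catalogue entry should carry (from the README template).
REQUIRED_FIELDS = [
    "Link to publication", "Link to data", "Task description",
    "Details of task", "Size of dataset", "Percentage abusive",
    "Language", "Level of annotation", "Platform", "Medium", "Reference",
]

def missing_fields(entry_markdown):
    """Return the template fields absent from a draft catalogue entry."""
    return [f for f in REQUIRED_FIELDS if f"* {f}:" not in entry_markdown]

# A deliberately incomplete draft entry.
entry = """#### My Dataset
* Link to publication: https://example.org/paper
* Link to data: https://example.org/data.zip
* Language: Danish
"""
print(missing_fields(entry))  # fields still to fill in
```

A simple substring check like this is enough for the flat bullet format used here; it does not validate the field values themselves.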

This page is http://hatespeechdata.com/.

About

Catalog of abusive language data (PLoS 2020)
