Datasets:

Modalities: Text
Formats: csv
Libraries: Datasets, pandas
mgrace31 committed
Commit 468c80b (1 parent: e18818e)

Update README.md

Files changed (1): README.md (+21 -18)
README.md CHANGED
@@ -11,41 +11,44 @@ Alexis Palmer, Christine Carr, Melissa Robinson, and Jordan Sanders. 2020 (to ap
 
  The COLD data set is intended for researchers to diagnose and assess their automatic hate speech detection systems. The corpus highlights four types of complex offensive language: slurs, reclaimed slurs, adjective nominalization, and distancing, as well as non-offensive texts. The corpus contains tweets collected from three data sets: Davidson et al. (2017), Waseem and Hovy (2016), and Robinson (2017). The data were annotated by six annotators, with each instance annotated by at least three different annotators.
 
- Please note: there are two versions of the data set, COLD-2016 and COLD-all.
-
- 1. **COLD-2016** is the data set used for the analyses and experimental results described in the JLCL paper. This version of the data set contains 2016 instances, selected using filters aiming to capture the complex offensive language types listed above.
-
- 2. **COLD-all**: The full COLD data set contains 2500 instances. Instances COLDID#1 through COLDID#2152 are the original tweets, selected using filters aiming to capture the complex offensive language types listed above. Some instances originally had fewer than three annotations; we acquired the missing annotations after finishing the work described in the paper. In addition, we randomly selected another 348 tweets from Davidson et al. (2017) to bring the total number of instances up to 2500. These instances were annotated by three of the original annotators. The randomly selected data begin at COLDID#2153.
 
  ## Format and annotations
 
  The data are made available here as .tsv files. The format consists of eight columns: four informational and four annotation-related.
 
  ### Informational columns:
 
- 1. **COLDID** - a unique number (assigned by us) for each textual instance. Since each instance has been labeled by three different annotators, each COLDID occurs three or more times in the data set.
 
- 2. **Annotator** - annotator ID. The six annotators are indicated by the letters A-F.
 
- 3. **OriginalID** - the source data set and the instance's ID within it: a letter indicating the data set of origin, followed by a hyphen and the instance's ID in that data set. For example, D-63 means the instance comes from the Davidson et al. (2017) data set, where it had ID number 63.
 
- 4. **Text** - the text of the instance.
 
- ### Annotation columns:
 
- For each instance, annotators were asked to answer Yes or No to each of four questions. (See the paper for a much more detailed discussion, as well as distributions, etc.)
 
- 1. **Q1:** Is this text offensive?
 
- 2. **Q2:** Is there a slur in the text?
 
- 3. **Q3:** Is there an adjectival nominalization in the text?
 
- 4. **Q4:** Is there (linguistic) distancing in the text?
 
- ## Getting majority votes and fine-grained labels
 
- TODO! (August 3, 2020)
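The removed section above left majority-vote computation as a TODO. In the old long format (one row per annotator, three rows per COLDID), a minimal pandas sketch might look like the following; the inline sample rows and the "Y"/"N" value coding are hypothetical, since the README only says annotators answered Yes or No:

```python
import io

import pandas as pd

# Hypothetical sample in the long format described above: one row per
# annotator, three annotators per COLDID. Real data would be loaded with
# pd.read_csv(path, sep="\t"). The "Y"/"N" coding is an assumption.
sample = io.StringIO(
    "COLDID\tAnnotator\tOriginalID\tText\tQ1\tQ2\n"
    "1\tA\tD-63\tsome text\tY\tN\n"
    "1\tB\tD-63\tsome text\tY\tN\n"
    "1\tC\tD-63\tsome text\tN\tN\n"
)
df = pd.read_csv(sample, sep="\t")

# Majority vote per instance and question: "Y" when at least 2 of the
# 3 annotations for that COLDID are Yes.
majority = (
    df.groupby("COLDID")[["Q1", "Q2"]]
    .apply(lambda g: g.eq("Y").sum().ge(2).map({True: "Y", False: "N"}))
)
print(majority)
```

Grouping by COLDID keeps the recipe independent of row order, and the same lambda extends unchanged to all four question columns.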
 
  ## Contact
  If you have any questions please contact [email protected], [email protected], or [email protected].
@@ -76,4 +79,4 @@ nominalization in pejorative meaning. Master's thesis, Department of Linguistics
 
  Waseem, Z., & Hovy, D. (2016). Hateful Symbols or Hateful People? Predictive Features for
  Hate Speech Detection on Twitter. In Proceedings of the NAACL Student Research Workshop. San Diego, California.
- <a href="https://www.aclweb.org/anthology/N16-2013/">[the paper]</a>
 
 
  The COLD data set is intended for researchers to diagnose and assess their automatic hate speech detection systems. The corpus highlights four types of complex offensive language: slurs, reclaimed slurs, adjective nominalization, and distancing, as well as non-offensive texts. The corpus contains tweets collected from three data sets: Davidson et al. (2017), Waseem and Hovy (2016), and Robinson (2017). The data were annotated by six annotators, with each instance annotated by at least three different annotators.
 
+ **COLD-2016** is the data set used for the analyses and experimental results described in the JLCL paper. This version of the data set contains 2016 instances, selected using filters aiming to capture the complex offensive language types listed above.
 
  ## Format and annotations
 
  The data are made available here as .tsv files. The format consists of twenty columns: three informational, four majority-vote, twelve individual-annotator, and one category column.
 
  ### Informational columns:
+ 1. **ID** - the source data set and the instance's ID within it: a letter indicating the data set of origin, followed by a hyphen and the instance's ID in that data set. For example, D-63 means the instance comes from the Davidson et al. (2017) data set, where it had ID number 63.
+
+ 2. **Dataset** - a letter indicating from which data set the instance originates.
+ 3. **Text** - the text of the instance.
+
+ ### Majority Vote Columns:
 
+ For each instance, annotators were asked to answer Yes or No to each of four questions. These columns give the majority vote across the three annotators. (See the paper for a much more detailed discussion, as well as distributions, etc.)
 
+ 1. **Off** - Is this text offensive?
 
+ 2. **Slur** - Is there a slur in the text?
 
+ 3. **Nom** - Is there an adjectival nominalization in the text?
 
+ 4. **Dist** - Is there (linguistic) distancing in the text?
 
+ ### Individual Annotator Columns:
+ These columns give each annotator's individual answer to the same four questions. (See the paper for a much more detailed discussion, as well as distributions, etc.)
 
+ 1. **Off1/2/3** - Is this text offensive?
 
+ 2. **Slur1/2/3** - Is there a slur in the text?
 
+ 3. **Nom1/2/3** - Is there an adjectival nominalization in the text?
 
+ 4. **Dist1/2/3** - Is there (linguistic) distancing in the text?
 
+ ### Category
 
+ 1. **Cat** - deduced from the majority votes for Off/Slur/Nom/Dist. (See the paper for a detailed explanation of the categories, as well as distributions, etc.)
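As a sanity check of the wide layout described above, the stored majority-vote column can be recomputed from the three individual annotator columns. A minimal pandas sketch, with a hypothetical two-row sample and an assumed "Y"/"N" coding (the README only specifies Yes/No answers):

```python
import io

import pandas as pd

# Hypothetical sample mimicking the wide layout above. Column names
# follow the README; the sample rows and "Y"/"N" coding are assumptions.
# Real data would be loaded with pd.read_csv(path, sep="\t").
sample = io.StringIO(
    "ID\tDataset\tText\tOff\tOff1\tOff2\tOff3\n"
    "D-63\tD\tfirst example\tY\tY\tY\tN\n"
    "W-12\tW\tsecond example\tN\tN\tN\tY\n"
)
df = pd.read_csv(sample, sep="\t")

# Recompute the majority vote: "Y" when at least 2 of the 3 annotators said Yes.
recomputed = (
    df[["Off1", "Off2", "Off3"]]
    .eq("Y")
    .sum(axis=1)
    .ge(2)
    .map({True: "Y", False: "N"})
)
print((recomputed == df["Off"]).all())  # prints True
```

The same three-line pipeline works for the Slur, Nom, and Dist columns by swapping in their 1/2/3 counterparts.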
 
  ## Contact
  If you have any questions please contact [email protected], [email protected], or [email protected].
 
  Waseem, Z., & Hovy, D. (2016). Hateful Symbols or Hateful People? Predictive Features for
  Hate Speech Detection on Twitter. In Proceedings of the NAACL Student Research Workshop. San Diego, California.
+ <a href="https://www.aclweb.org/anthology/N16-2013/">[the paper]</a>