Commit e18818e by mgrace31 (parent: 267be1b): Update README.md

README.md (+36 -26), updated version below.

---
license: cc-by-sa-4.0
---

## COLD: Complex Offensive Language Dataset

If you use this dataset, please cite the following paper (BibTeX below):

Alexis Palmer, Christine Carr, Melissa Robinson, and Jordan Sanders. 2020 (to appear). COLD: Annotation scheme and evaluation data set for complex offensive language in English. *Journal of Linguistics and Computational Linguistics*.

## Overview of data

The COLD data set is intended for researchers to diagnose and assess their automatic hate speech detection systems. The corpus highlights four types of complex offensive language: slurs, reclaimed slurs, adjectival nominalizations, and distancing, as well as non-offensive texts. It contains tweets collected from three existing data sets: Davidson et al. (2017), Waseem and Hovy (2016), and Robinson (2018). The data were annotated by six annotators, with each instance labeled by at least three of them.

Please note: there are two versions of the data set, COLD-2016 and COLD-all.

1. **COLD-2016** is the data set used for the analyses and experimental results described in the JLCL paper. This version contains 2016 instances, selected using filters aimed at capturing the complex offensive language types listed above.

2. **COLD-all**: the full COLD data set contains 2500 instances. Instances COLDID#1 through COLDID#2152 are the original tweets, selected using the same filters. Some instances originally had fewer than three annotations; we acquired the missing annotations after the work described in the paper was finished. In addition, we randomly selected another 348 tweets from Davidson et al. (2017) to bring the total number of instances up to 2500; these were annotated by three of the original annotators. The randomly selected data start at COLDID#2153 (see the sketch below).

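To make the two COLDID ranges concrete, here is a minimal sketch of how COLD-all could be partitioned with pandas. The filename is hypothetical, and the COLDID column name is taken from the format description below:

```
import pandas as pd

# Hypothetical filename; substitute the actual COLD-all .tsv file.
cold_all = pd.read_csv("cold_all.tsv", sep="\t")

# COLDID#1 - COLDID#2152: the original, filter-selected tweets.
filtered = cold_all[cold_all["COLDID"] <= 2152]
# COLDID#2153 onward: the 348 randomly selected Davidson et al. (2017) tweets.
random_extra = cold_all[cold_all["COLDID"] >= 2153]

print(random_extra["COLDID"].nunique())  # expected: 348
```
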
## Format and annotations

The data are made available here as .tsv files. The format consists of eight columns: four informational and four annotation-related. (A loading sketch follows the column descriptions below.)

### Informational columns:

1. **COLDID** - a unique number (assigned by us) for each textual instance. Since each instance has been labeled by at least three different annotators, there are three or more occurrences of each COLDID in the data set.

2. **Annotator** - annotator ID. The six annotators are indicated by the letters A-F.

3. **OriginalID** - the source data set and the textual instance's ID in the data set it was extracted from: a letter indicating the source data set, followed by a hyphen and the corresponding ID of the instance in that data set. For example, D-63 means the instance is from the Davidson et al. (2017) data set, originally with ID number 63. (See the parsing sketch after this list.)

4. **Text** - the text of the instance.

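Since each OriginalID packs the source letter and the original ID into one string, a small helper can split them. The function below is a hypothetical illustration, not part of the released data:

```
def parse_original_id(original_id: str) -> tuple[str, int]:
    """Split an OriginalID such as 'D-63' into its source letter and the
    instance's ID in the original data set ('D' = Davidson et al. 2017)."""
    source, _, number = original_id.partition("-")
    return source, int(number)

print(parse_original_id("D-63"))  # -> ('D', 63)
```
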
### Annotation columns:

For each instance, annotators were asked to answer Yes or No to each of four questions. (See the paper for a much more detailed discussion, as well as distributions, etc.)

1. **Q1:** Is this text offensive?
2. **Q2:** Is there a slur in the text?
3. **Q3:** Is there an adjectival nominalization in the text?
4. **Q4:** Is there (linguistic) distancing in the text?

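The loading sketch promised above: reading one of the .tsv files with pandas and checking the eight-column layout. The filename is hypothetical, and the column names are assumptions read off the descriptions above; substitute the names used in the released files:

```
import pandas as pd

# Hypothetical filename; substitute the actual .tsv file from this repository.
df = pd.read_csv("cold_all.tsv", sep="\t")

# Eight columns: four informational, four annotation-related
# (names assumed from the descriptions above).
print(df.columns.tolist())
# e.g. ['COLDID', 'Annotator', 'OriginalID', 'Text', 'Q1', 'Q2', 'Q3', 'Q4']

# Each COLDID appears once per annotator, i.e. at least three times.
print(df.groupby("COLDID").size().min())  # expected: >= 3
```
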
## Getting majority votes and fine-grained labels

TODO! (August 3, 2020)

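Until this section is filled in, one plausible way to derive per-instance majority labels is to group the annotations by COLDID and take the most frequent answer to each question. This is a sketch under the assumptions above (hypothetical filename, Yes/No string answers), not the authors' documented procedure:

```
import pandas as pd

df = pd.read_csv("cold_all.tsv", sep="\t")  # hypothetical filename

# Majority answer per instance and question: the most frequent Yes/No answer
# among the (at least three) annotators who labeled each COLDID. With an odd
# number of annotators there is always a strict majority; mode().iloc[0]
# breaks any ties arbitrarily.
majority = (
    df.groupby("COLDID")[["Q1", "Q2", "Q3", "Q4"]]
      .agg(lambda answers: answers.mode().iloc[0])
)
print(majority.head())
```
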
 
## Contact

If you have any questions please contact [email protected], [email protected], or [email protected].

## BibTeX

```
@article{cold:2020,
  title = {COLD: Annotation scheme and evaluation data set for complex offensive language in English},
  author = {Palmer, Alexis and Carr, Christine and Robinson, Melissa and Sanders, Jordan},
  ...
  number = {to appear},
  pages = {tbd}
}
```

## References

Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017). Automated hate speech detection and the problem of offensive language. In Proceedings of the Eleventh International AAAI Conference on Web and Social Media. <a href="https://aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15665">[the paper]</a>, <a href="https://github.com/t-davidson/hate-speech-and-offensive-language">[the repository]</a>

Robinson, M. (2018). A man needs a female like a fish needs a lobotomy: The role of adjectival nominalization in pejorative meaning. Master's thesis, Department of Linguistics, University of North Texas. <a href="https://digital.library.unt.edu/ark:/67531/metadc1157617/m2/1/high_res_d/ROBINSON-THESIS-2018.pdf">[the thesis]</a>

Waseem, Z., & Hovy, D. (2016). Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, San Diego, California. <a href="https://www.aclweb.org/anthology/N16-2013/">[the paper]</a>