ibm / Argument-Quality-Ranking-30k
Modalities: Tabular, Text · Formats: csv · Languages: English · Libraries: Datasets, pandas
avishaig committed
Commit 78d9d36 · 1 parent: a050f79

Update README.md

Files changed (1):
README.md +12 -3
README.md CHANGED
@@ -18,11 +18,20 @@ dataset_info:
 ---
 # Dataset Card for Argument-Quality-Ranking-30k Dataset
 
+## Table of Contents
+
+- [Dataset Summary](#dataset-summary)
+- [Dataset Structure](#dataset-structure)
+- [Quality Labels](#quality-labels)
+- [Stance Labels](#stance-labels)
+- [Licensing Information](#licensing-information)
+- [Citation Information](#citation-information)
+
 ## Dataset Summary
 
 The dataset contains 30,497 crowd-sourced arguments for 71 debatable topics labeled for quality and stance, split into train, dev and test sets.
 
-The dataset was originally published as part of our paper [A Large-scale Dataset for Argument Quality Ranking: Construction and Analysis](https://arxiv.org/abs/1911.11408).
+The dataset was originally published as part of our paper: [A Large-scale Dataset for Argument Quality Ranking: Construction and Analysis](https://arxiv.org/abs/1911.11408).
 
 ## Dataset Structure
 
@@ -32,11 +41,11 @@ Each instance contains a string argument, a string topic, and quality and stance
 * stance_WA - the stance label according to the weighted-average scoring function
 * stance_WA_conf - the confidence in the stance label according to the weighted-average scoring function
 
-Quality labels
+Quality Labels
 --------------
 For an explanation of the quality labels presented in columns WA and MACE-P, please see section 4 in the paper.
 
-Stance labels
+Stance Labels
 -------------
 There were three possible annotations for the stance task: 1 (pro), -1 (con) and 0 (neutral). The stance_WA_conf column refers to the weighted-average score of the winning label. The stance_WA column refers to the winning stance label itself.
 
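For reference, a minimal sketch of loading the dataset described in this card with the Hugging Face `datasets` library and filtering on the documented columns. The repository id, split names, and the confidence threshold below are assumptions for illustration, not taken from this card.

```python
# Minimal sketch. Assumptions: the hub repo id "ibm/argument_quality_ranking_30k"
# and the split name "train" may differ from the actual hub entry.
from datasets import load_dataset

dataset = load_dataset("ibm/argument_quality_ranking_30k")
train = dataset["train"]

# Columns described in the card: argument, topic, WA, MACE-P, stance_WA, stance_WA_conf.
print(train.column_names)

# Example: keep arguments labeled pro (stance_WA == 1) with high stance confidence
# (the 0.9 cutoff is an arbitrary illustrative choice).
confident_pro = train.filter(
    lambda row: row["stance_WA"] == 1 and row["stance_WA_conf"] >= 0.9
)
print(f"{len(confident_pro)} confident pro arguments")
```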