Niansuh and lbourdois committed
Commit 8d82468 (1 parent: b276583)

Add language tag (#5)

- Add language tag (1993c7d57b0028c5afbe5a9280315da11d2a6c2f)


Co-authored-by: Loïck BOURDOIS <[email protected]>

Files changed (1)
  1. README.md +45 -4
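The diff below replaces the `und` placeholder with 44 concrete ISO 639 codes in the card's YAML frontmatter. As a minimal sketch of how such tags can be read back out of a card — assuming PyYAML is installed and a copy of the card is saved locally; the path and helper name are illustrative, not part of this commit:

    # Sketch: parse the `language:` list out of a dataset card's YAML
    # frontmatter. Assumes PyYAML; "README.md" is a hypothetical local copy.
    import yaml

    def read_language_tags(card_path="README.md"):
        with open(card_path, encoding="utf-8") as f:
            text = f.read()
        # The frontmatter sits between the first two `---` fence lines.
        _, frontmatter, _ = text.split("---", 2)
        metadata = yaml.safe_load(frontmatter)
        return metadata.get("language", [])

    print(read_language_tags())  # e.g. ['am', 'ar', 'az', ..., 'yo']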
README.md CHANGED
@@ -4,7 +4,50 @@ annotations_creators:
 language_creators:
 - unknown
 language:
-- und
+- am
+- ar
+- az
+- bn
+- my
+- zh
+- en
+- fr
+- gu
+- ha
+- hi
+- ig
+- id
+- ja
+- rn
+- ko
+- ky
+- mr
+- ne
+- om
+- ps
+- fa
+- gpe
+- pt
+- pa
+- ru
+- gd
+- sr
+- rsb
+- si
+- so
+- es
+- sw
+- ta
+- te
+- th
+- ti
+- tr
+- uk
+- ur
+- uz
+- vi
+- cy
+- yo
 license:
 - cc-by-nc-sa-4.0
 multilinguality:
@@ -649,6 +692,4 @@ The dataset is limited to news domain only. Hence it wouldn't be advisable to us
 
 <!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric my still have strong negative consequences for user or members of the public. -->
 <!-- scope: microscope -->
-ROUGE evaluates the quality of the summary as a whole by considering up to 4-gram overlaps. Therefore, in an article about India if the word "India" in the generated summary gets replaced by "Pakistan" due to model hallucination, the overall score wouldn't be reduced significantly, but the entire meaning could get changed.
-
-
+ROUGE evaluates the quality of the summary as a whole by considering up to 4-gram overlaps. Therefore, in an article about India if the word "India" in the generated summary gets replaced by "Pakistan" due to model hallucination, the overall score wouldn't be reduced significantly, but the entire meaning could get changed.
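To make that failure mode concrete, here is a minimal sketch using the `rouge_score` package (`pip install rouge-score`); the reference and candidate sentences are invented for illustration:

    # Sketch: a one-token entity hallucination barely moves ROUGE.
    from rouge_score import rouge_scorer

    reference = "India recorded heavy monsoon rainfall across the north this week."
    faithful = "India saw heavy monsoon rain in the north this week."
    hallucinated = "Pakistan saw heavy monsoon rain in the north this week."

    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    for label, candidate in [("faithful", faithful), ("hallucinated", hallucinated)]:
        scores = scorer.score(reference, candidate)
        print(label, {k: round(v.fmeasure, 3) for k, v in scores.items()})
    # Swapping "India" for "Pakistan" changes a single unigram, so the score
    # drop is small even though the summary is now factually wrong.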