cartesinus committed
Commit: 524f5dc
Parent: 271f8f4

Update README.md

Files changed (1): README.md (+8 -5)
README.md CHANGED
@@ -3,7 +3,7 @@ license: mit
 tags:
 - generated_from_trainer
 datasets:
-- iva_mt_wslot2
+- cartesinus/iva_mt_wslot
 metrics:
 - bleu
 model-index:
@@ -13,8 +13,8 @@ model-index:
       name: Sequence-to-sequence Language Modeling
       type: text2text-generation
     dataset:
-      name: iva_mt_wslot2
-      type: iva_mt_wslot2
+      name: iva_mt_wslot
+      type: iva_mt_wslot
       config: en-es
       split: validation
       args: en-es
@@ -22,6 +22,9 @@ model-index:
     - name: Bleu
       type: bleu
       value: 69.2836
+language:
+- en
+- es
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,7 +32,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # iva_mt_wslot-m2m100_418M-en-es
 
-This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the iva_mt_wslot2 dataset.
+This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the iva_mt_wslot dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.0115
 - Bleu: 69.2836
@@ -79,4 +82,4 @@ The following hyperparameters were used during training:
 - Transformers 4.28.1
 - Pytorch 2.0.0+cu118
 - Datasets 2.11.0
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
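
The card edited above describes an en→es translation model fine-tuned from M2M100. A minimal usage sketch, assuming the Hub repo id `cartesinus/iva_mt_wslot-m2m100_418M-en-es` (inferred from the card title, not stated in this diff) and the standard `transformers` M2M100 API:

```python
# Hypothetical usage sketch for the model this card documents.
# Assumes the Hub repo id below; downloads the weights on first run.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "cartesinus/iva_mt_wslot-m2m100_418M-en-es"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"  # source language from the card's en-es config
encoded = tokenizer("set an alarm for nine am", return_tensors="pt")
# M2M100 is many-to-many, so the target language must be forced
# via the first generated token.
generated = model.generate(
    **encoded, forced_bos_token_id=tokenizer.get_lang_id("es")
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

The `forced_bos_token_id` step is required for any M2M100 checkpoint; without it the model may decode into the wrong language.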