tarob0ba committed
Commit 0a16d94
Parent: d471bca

update model card

Files changed (1):
  1. README.md (+9 −31)
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license: apache-2.0
+license: mit
 tags:
 - generated_from_trainer
 metrics:
@@ -7,14 +7,15 @@ metrics:
 model-index:
 - name: opus-mt-en-mul-finetuned-lfn-to-en
   results: []
+language:
+- en
+- lfn
+pipeline_tag: translation
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # opus-mt-en-mul-finetuned-lfn-to-en

-This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on an unknown dataset.
+This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on the Tatoeba English-Elefen sentence pair dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.6208
 - Bleu: 62.9717
@@ -22,15 +23,8 @@ It achieves the following results on the evaluation set:

 ## Model description

-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
+Elefen (or Lingua Franca Nova, abbreviated to "LFN") is a simple language designed for international communication.
+Its vocabulary is based on Catalan, Spanish, French, Italian and Portuguese. The grammar is very reduced, similar to Romance creoles.

 ## Training procedure

@@ -46,25 +40,9 @@ The following hyperparameters were used during training:
 - num_epochs: 10
 - mixed_precision_training: Native AMP

-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
-|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
-| No log        | 1.0   | 290  | 0.9193          | 51.7454 | 11.5088 |
-| 1.2009        | 2.0   | 580  | 0.7614          | 56.6817 | 11.4971 |
-| 1.2009        | 3.0   | 870  | 0.6865          | 59.4575 | 11.4815 |
-| 0.6524        | 4.0   | 1160 | 0.6545          | 60.9631 | 11.5088 |
-| 0.6524        | 5.0   | 1450 | 0.6360          | 61.8171 | 11.5039 |
-| 0.4903        | 6.0   | 1740 | 0.6337          | 61.9929 | 11.5049 |
-| 0.4064        | 7.0   | 2030 | 0.6269          | 62.8025 | 11.5146 |
-| 0.4064        | 8.0   | 2320 | 0.6234          | 62.5979 | 11.5292 |
-| 0.3434        | 9.0   | 2610 | 0.6197          | 63.0131 | 11.5428 |
-| 0.3434        | 10.0  | 2900 | 0.6208          | 62.9717 | 11.5165 |
-
-
 ### Framework versions

 - Transformers 4.27.3
 - Pytorch 1.13.1+cu116
 - Datasets 2.10.1
-- Tokenizers 0.13.2
+- Tokenizers 0.13.2
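
For readers who want to try the model described by the updated card, here is a minimal usage sketch with the transformers pipeline API. It is not part of the commit: the repository id is an assumption pieced together from the committer name and the model name in this diff, and the translation direction (Elefen to English) is inferred from the "lfn-to-en" suffix in the model name.

```python
from transformers import pipeline

# Load the fine-tuned Marian checkpoint as a translation pipeline.
# The repo id below is assumed from the committer and model name; adjust if it differs.
translator = pipeline(
    "translation",
    model="tarob0ba/opus-mt-en-mul-finetuned-lfn-to-en",
)

# Input is assumed to be Elefen (Lingua Franca Nova), output English,
# per the model name in the card.
result = translator("Bon dia!")
print(result[0]["translation_text"])
```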