sarch7040 committed
Commit c877df2
1 Parent(s): e457890

Update README.md

Files changed (1)
  1. README.md +9 -6
README.md CHANGED
@@ -16,22 +16,25 @@ pipeline_tag: translation
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# M2M101
+# praTran
 
-This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on an unknown dataset.
+This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the Deshika dataset, a parallel corpus of Prakrit sentences with their corresponding English translations.
 It achieves the following results on the evaluation set:
 - Loss: 1.3269
 - Bleu: 8.4241
 - Meteor: 0.3851
 - Gen Len: 30.7356
 
-## Model description
+## Model Description
 
-More information needed
+praTran is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) trained on this downstream translation task.
 
-## Intended uses & limitations
+## Intended uses
 
-More information needed
+This model is intended for academic use.
+
+## Limitations
+The model's translations are not yet reliable at this stage, as the language is extremely low-resource.
 
 ## Training and evaluation data
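The updated card reports a corpus BLEU score (8.4241) on the evaluation set. As a reminder of what that number measures, here is a minimal stdlib-only sketch of corpus-level BLEU-4 with a brevity penalty, scaled to 0-100. This is an illustration of the metric in general, not the exact tokenization or smoothing configuration the Trainer's evaluation used.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all contiguous n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(references, hypotheses, max_n=4):
    """Corpus BLEU with uniform weights and brevity penalty.

    references / hypotheses: parallel lists of whitespace-tokenized
    sentences (one reference per hypothesis, for simplicity).
    Returns a score in [0, 100]; 0.0 if any n-gram order has no match.
    """
    clipped = [0] * max_n  # clipped n-gram matches per order
    totals = [0] * max_n   # hypothesis n-gram counts per order
    ref_len = hyp_len = 0
    for ref, hyp in zip(references, hypotheses):
        r, h = ref.split(), hyp.split()
        ref_len += len(r)
        hyp_len += len(h)
        for n in range(1, max_n + 1):
            rc, hc = ngrams(r, n), ngrams(h, n)
            # clip each hypothesis n-gram count by its reference count
            clipped[n - 1] += sum(min(c, rc[g]) for g, c in hc.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(clipped) == 0:
        return 0.0
    # geometric mean of the modified n-gram precisions
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    # brevity penalty: punish hypotheses shorter than the references
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)
```

A perfect match scores 100; single-word substitutions reduce the higher-order precisions, which is why scores on low-resource language pairs such as Prakrit-English tend to stay in the single digits.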