kavyamanohar committed
Commit a6de0ae
Parent: 4bc8147

Update README.md

Files changed (1):
  README.md (+5 −3)
README.md CHANGED

@@ -7,6 +7,7 @@ datasets:
 - thennal/IMaSC
 - vrclc/openslr63
 - thennal/indic_tts_ml
+- kavyamanohar/ml-sentences
 model-index:
 - name: XLSR-WithLM-Malayalam
   results:
@@ -49,15 +50,16 @@ model-index:
     - type: wer
       value: 52.9
       name: WER
 ---
-# XLSR-LM-NewData
+# XLSR-WithLM-Malayalam
 
 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [IMaSC](https://huggingface.co/datasets/thennal/IMaSC), [Indic TTS Malayalam](https://huggingface.co/datasets/thennal/indic_tts_ml), and [OpenSLR Malayalam train split](https://huggingface.co/datasets/vrclc/openslr63) datasets.
 It achieves the following results on the evaluation set:
 - Loss: 0.1395
 - Wer: 0.2952
 
+A trigram language model, trained with the KenLM library on the [kavyamanohar/ml-sentences](https://huggingface.co/datasets/kavyamanohar/ml-sentences) dataset, is used for decoding.
+
 ### Training hyperparameters
 
@@ -93,4 +95,4 @@ The following hyperparameters were used during training:
 - Transformers 4.42.4
 - Pytorch 2.3.1+cu121
 - Datasets 2.20.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
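The card's evaluation metric is WER (word error rate). For reference, a minimal pure-Python sketch of the standard computation, word-level Levenshtein distance divided by reference word count; the card's own evaluation presumably used a library such as `jiwer` or `evaluate` (an assumption, not stated in the diff):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

A reported Wer of 0.2952 thus means roughly 3 word errors per 10 reference words.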
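The commit's main change is pairing the acoustic model with a trigram language model built with KenLM. To illustrate what a trigram model scores, here is a toy maximum-likelihood version in pure Python; it is only a sketch of the idea, not KenLM, which adds modified Kneser-Ney smoothing, backoff, and a compact binary format:

```python
from collections import defaultdict


class TrigramLM:
    """Toy maximum-likelihood trigram model (no smoothing or backoff)."""

    def __init__(self):
        self.tri = defaultdict(int)  # counts of (w1, w2, w3)
        self.bi = defaultdict(int)   # counts of (w1, w2) contexts

    def train(self, sentences):
        for sentence in sentences:
            # Pad with start/end markers so every word has a trigram context.
            words = ["<s>", "<s>"] + sentence.split() + ["</s>"]
            for i in range(len(words) - 2):
                self.bi[(words[i], words[i + 1])] += 1
                self.tri[(words[i], words[i + 1], words[i + 2])] += 1

    def prob(self, w1, w2, w3):
        # P(w3 | w1, w2) = count(w1 w2 w3) / count(w1 w2)
        if self.bi[(w1, w2)] == 0:
            return 0.0
        return self.tri[(w1, w2, w3)] / self.bi[(w1, w2)]
```

In practice the model is built with KenLM's `lmplz` tool (e.g. `lmplz -o 3 < corpus.txt > trigram.arpa`) and plugged into a CTC beam-search decoder, where its scores rerank candidate transcriptions.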