---
language:
- he
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- ivrit-ai/whisper-training
metrics:
- wer
model-index:
- name: Whisper Small Hebrew 4
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: ivrit-ai/whisper-training
      type: ivrit-ai/whisper-training
      args: 'config: he, split: train'
    metrics:
    - name: Wer
---
# Whisper Small Hebrew 4
This model is a fine-tuned version of openai/whisper-small on the ivrit-ai/whisper-training dataset. It improves on the previous mike249/whisper-small-he-3 by roughly 1.5% in terms of WER.
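WER (word error rate) is the word-level edit distance between a reference transcript and the model's hypothesis, divided by the number of reference words. A minimal sketch of the metric (in practice, libraries such as `jiwer` or `evaluate` are used; this is only an illustration of what the reported number measures):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for edit distance over word sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)


# One substitution out of three reference words -> WER of 1/3
print(wer("the cat sat", "the cat sit"))
```

A "1.5% better" WER here means the error rate on the evaluation split dropped by about that amount relative to the earlier checkpoint.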
## Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.19.1