Update README.md
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
## Description
This model is a distilled version of the Whisper large-v2 model, obtained by pruning the decoder. It is trained to match the output distribution of the teacher (large-v2) model using a distillation loss (KL divergence) combined with a cross-entropy (CE) loss. The original model contains 32 decoder layers, whereas the distilled model keeps only 8 and achieves 4.2% WER on the LibriSpeech dataset after fine-tuning for just one epoch. Decoding is about 2x faster than vanilla large-v2, and the model is roughly 40% smaller.
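For intuition, the objective can be sketched as a weighted sum of a KL term against the teacher's predicted distribution and a CE term against the ground-truth tokens. The snippet below is a minimal illustrative PyTorch sketch, not the actual training code of this checkpoint; the `alpha` and `temperature` values are placeholder assumptions.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, temperature=2.0):
    """Illustrative KL + CE objective; alpha and temperature are placeholder values."""
    # KL term: push the student's (temperature-softened) distribution towards the teacher's.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # CE term: standard cross-entropy against the ground-truth transcript tokens.
    ce = F.cross_entropy(
        student_logits.reshape(-1, student_logits.size(-1)),
        labels.reshape(-1),
        ignore_index=-100,  # Hugging Face convention for padded label positions
    )
    return alpha * kl + (1.0 - alpha) * ce
```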
## Train on your data
```shell
accelerate launch student-teacher-distillation-streaming.py --freeze_encoder --keep_punctuation \
  --keep_case --teacher_model_name_or_path openai/whisper-large-v2 --student_model_name_or_path large-v2-2 \
  --student_cache_dir large-v2-2 --output_dir whisper-large-v2-2-en-cv --data_cache_dir commonvoice \
  --teacher_cache_dir cache --student_cache_dir large-v2-2-en-cv --text_column sentence \
  --train_dataset_name mozilla-foundation/common_voice_13_0 --train_dataset_config_name en --train_split_name train \
  --validation_dataset_name mozilla-foundation/common_voice_13_0 --validation_dataset_config_name en \
  --validation_split_name test --max_val_samples 2000
```
## Inference
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset

>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("rsonavane/distil-whisper-large-v2-8-ls")
>>> model = WhisperForConditionalGeneration.from_pretrained("rsonavane/distil-whisper-large-v2-8-ls")
>>> model.config.forced_decoder_ids = None

>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features

>>> # generate token ids
>>> predicted_ids = model.generate(input_features)

>>> # decode token ids to text, keeping the special tokens
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']

>>> # decode again, stripping the special tokens
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
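To get a quick read on transcription quality, the predictions can be scored with the `evaluate` library's WER metric. The sketch below reuses the small dummy LibriSpeech split from the example above, so the number it prints is only a smoke test, not the 4.2% figure reported in the description (that would require the full LibriSpeech test set).

```python
import evaluate
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("rsonavane/distil-whisper-large-v2-8-ls")
model = WhisperForConditionalGeneration.from_pretrained("rsonavane/distil-whisper-large-v2-8-ls")
wer_metric = evaluate.load("wer")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

predictions, references = [], []
for sample in ds:
    audio = sample["audio"]
    features = processor(
        audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt"
    ).input_features
    ids = model.generate(features)
    # lower-case both sides so casing differences do not inflate the error rate
    predictions.append(processor.batch_decode(ids, skip_special_tokens=True)[0].strip().lower())
    references.append(sample["text"].strip().lower())

print(f"WER: {100 * wer_metric.compute(predictions=predictions, references=references):.2f}%")
```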
## Limitations
This experiment aimed to explore the effectiveness of decoder pruning and distillation in enhancing performance after training. The model acquires an internal representation of the English language similar to its teacher's, but with better inference speed and efficiency for downstream tasks. It can also be fine-tuned for multiple languages while maintaining the original model's performance and reducing inference latency. Frameworks such as JAX may improve inference speed further.
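As a pointer for readers who want to reproduce the setup, an 8-layer student decoder can be initialised from the 32-layer teacher by keeping a subset of its decoder layers before distillation. The snippet below is only an illustrative sketch; the layer-selection scheme (every fourth layer) and the output path are assumptions, not necessarily what was used for this checkpoint.

```python
import copy
import torch.nn as nn
from transformers import WhisperForConditionalGeneration

teacher = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
student = copy.deepcopy(teacher)

# Keep every fourth decoder layer: 32 layers -> 8 layers (selection scheme is an assumption).
keep = list(range(0, teacher.config.decoder_layers, 4))
student.model.decoder.layers = nn.ModuleList([student.model.decoder.layers[i] for i in keep])
student.config.decoder_layers = len(keep)

# Save the pruned student as the starting point for distillation (hypothetical path).
student.save_pretrained("whisper-large-v2-pruned-8")
```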