---
language:
- ms
- en
---

# Malaysian Finetune Whisper Base

Finetuned Whisper Base on Malaysian datasets:
1. IMDA STT, https://huggingface.co/datasets/mesolitica/IMDA-STT
2. Pseudolabeled Malaysian YouTube videos, https://huggingface.co/datasets/mesolitica/pseudolabel-malaysian-youtube-whisper-large-v3
3. Malay Conversational Speech Corpus, https://huggingface.co/datasets/malaysia-ai/malay-conversational-speech-corpus
4. Haqkiem TTS Dataset, this is private, but you can request access from https://www.linkedin.com/in/haqkiem-daim/
5. Pseudolabeled Nusantara audiobooks, https://huggingface.co/datasets/mesolitica/nusantara-audiobook

Script at https://github.com/mesolitica/malaya-speech/tree/malaysian-speech/session/whisper

Wandb at https://wandb.ai/huseinzol05/malaysian-whisper-base?workspace=user-huseinzol05

Wandb report at https://wandb.ai/huseinzol05/malaysian-whisper-base/reports/Finetune-Whisper--Vmlldzo2Mzg2NDgx

## What languages did we finetune on?

1. `ms`, Malay: both standard Malay and local Malay dialects.
2. `en`, English: both standard English and Manglish.

## how-to

```python
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
from datasets import Audio
import requests

sr = 16000
audio = Audio(sampling_rate=sr)

processor = AutoProcessor.from_pretrained("mesolitica/malaysian-whisper-base")
model = AutoModelForSpeechSeq2Seq.from_pretrained("mesolitica/malaysian-whisper-base")

# Download a test audio file and decode it into a 16 kHz waveform
r = requests.get('https://huggingface.co/datasets/huseinzol05/malaya-speech-stt-test-set/resolve/main/test.mp3')
y = audio.decode_example(audio.encode_example(r.content))['array']

inputs = processor([y], sampling_rate=sr, return_tensors='pt')
r = model.generate(inputs['input_features'], language='ms', return_timestamps=True)
processor.tokenizer.decode(r[0])
```

```text
'<|startoftranscript|><|ms|><|transcribe|> Zamily On Aging di Vener Australia, Australia yang telah diadakan pada tahun 1982 dan berasaskan unjuran tersebut maka jabatan perangkaan Malaysia menganggarkan menjelang tahun 2005 sejumlah 15% penduduk kita adalah daripada kalangan warga emas. Untuk makluman Tuan Yang Pertua dan juga Alian Bohon, pembangunan sistem pendafiran warga emas ataupun kita sebutkan event adalah usaha kerajaan ke arah merealisasikan objektif yang telah digangkatkan<|endoftext|>'
```

The same audio can be decoded in English by switching the `language` argument:

```python
r = model.generate(inputs['input_features'], language='en', return_timestamps=True)
processor.tokenizer.decode(r[0])
```

```text
<|startoftranscript|><|en|><|transcribe|> Assembly on Aging, Divina Australia, Australia, which has been provided in 1982 and the operation of the transportation of Malaysia's implementation to prevent the tourism of the 25th, 15% of our population is from the market. For the information of the President and also the respected, the development of the market system or we have made an event.<|endoftext|>
```

## how to predict longer audio?

You need to chunk the audio into 30-second segments and predict each chunk separately.
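A minimal sketch of that chunking step, assuming the same `model`, `processor`, and waveform `y` from the example above (the `chunk_audio` helper is hypothetical, not part of this repository):

```python
import numpy as np

def chunk_audio(y, sr=16000, chunk_seconds=30):
    """Split a 1-D waveform into consecutive chunks of at most `chunk_seconds` each."""
    chunk_size = sr * chunk_seconds
    return [y[i:i + chunk_size] for i in range(0, len(y), chunk_size)]

# Hypothetical usage with the model loaded above (not run here):
# texts = []
# for chunk in chunk_audio(y, sr=sr):
#     inputs = processor([chunk], sampling_rate=sr, return_tensors='pt')
#     out = model.generate(inputs['input_features'], language='ms', return_timestamps=True)
#     texts.append(processor.tokenizer.decode(out[0], skip_special_tokens=True))
# transcript = ' '.join(texts)
```

Each chunk fits within Whisper's 30-second input window; shorter final chunks are padded automatically by the processor.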