---
language: ru
datasets:
  - Common Voice
metrics:
  - wer
tags:
  - audio
  - speech
  - wav2vec2
  - Russian-speech-corpus
  - automatic-speech-recognition
  - speech
  - PyTorch
license: apache-2.0
model-index:
  - name: >-
      Edresson Casanova Wav2vec2 Large 100k Voxpopuli fine-tuned with a
      single-speaker dataset in Russian
    results:
      - task:
          name: Speech Recognition
          type: automatic-speech-recognition
        metrics:
          - name: Test Common Voice 7.0 WER
            type: wer
            value: 74.02
---

# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset in Russian

Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using a single-speaker dataset.

# Use this model


```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC

tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-russian")

model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-russian")
```
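
A quick way to sanity-check the loaded model is to transcribe a single 16 kHz audio clip. This is only a sketch, not taken from the original card: the file path is a placeholder, and passing raw audio to `tokenizer` assumes the older Wav2Vec2Tokenizer-style interface used by wav2vec2 cards of this era (recent transformers releases typically use `Wav2Vec2Processor` instead).

```python
import torch
import torchaudio

# Placeholder path; any mono Russian speech clip will do
speech, sample_rate = torchaudio.load("example.wav")
if sample_rate != 16_000:
    # The model expects 16 kHz input
    speech = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16_000)(speech)

inputs = tokenizer(speech.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax over the vocabulary at each frame, then decode
pred_ids = torch.argmax(logits, dim=-1)
print(tokenizer.batch_decode(pred_ids)[0])
```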

# Results

For the results, check the paper.

# Example test with Common Voice Dataset

```python
import re

import torchaudio
from datasets import load_dataset, load_metric

wer = load_metric("wer")
# Punctuation stripped from the reference sentences (the exact set is not specified in this card)
chars_to_ignore_regex = '[,?.!;:"]'

dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-7.0-2021-07-21")

# Common Voice audio is 48 kHz; the model expects 16 kHz input
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch

ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
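
The snippet above calls a `map_to_pred` helper that is not defined in this section of the card. Below is a minimal sketch, assuming the standard wav2vec2 greedy CTC decoding loop with the `tokenizer` and `model` loaded above; the exact helper used to produce the reported 74.02 WER may differ.

```python
import torch

def map_to_pred(batch):
    # Encode the raw 16 kHz waveforms; this mirrors the older Wav2Vec2Tokenizer-style call
    # (newer transformers versions use Wav2Vec2Processor for feature extraction).
    features = tokenizer(batch["speech"], sampling_rate=16_000, padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(features.input_values).logits
    # Greedy CTC decoding: take the argmax token at each frame and let the tokenizer collapse repeats
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = tokenizer.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch
```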