
Wav2vec 2.0 large VoxRex Swedish (C)

Finetuned version of KB's VoxRex large model using Swedish radio broadcasts, NST and Common Voice data. Evaluation without a language model gives the following WER: 2.5% on the NST + Common Voice test set (2% of total sentences) and 8.49% on the Common Voice test set. With a 4-gram language model, the Common Voice WER drops to 7.37%.
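WER here is the standard word error rate (word-level edit distance divided by reference length). A minimal sketch of computing it with the jiwer package, using invented example sentences:

import jiwer
reference = "det var en gång en katt"
hypothesis = "det var en gång en hatt"  # one substituted word
print(jiwer.wer(reference, hypothesis))  # 1 error / 6 words ≈ 0.167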

When using this model, make sure that your speech input is sampled at 16 kHz.
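If your audio has a different sampling rate, resample it before inference. A minimal sketch with torchaudio; the file name is a made-up example:

import torchaudio
# Load at the file's native rate, then resample to the 16 kHz the model expects.
speech, sample_rate = torchaudio.load("my_recording.wav")  # hypothetical file
if sample_rate != 16_000:
    speech = torchaudio.transforms.Resample(sample_rate, 16_000)(speech)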

Update 2022-01-10: Updated to VoxRex-C version.

Update 2022-05-16: The paper is available at https://arxiv.org/abs/2205.03026.

Performance*

[Chart: Comparison]

*Chart shows performance without the additional 20k steps of Common Voice fine-tuning

Training

This model has been fine-tuned for 120,000 updates on NST + Common Voice and then for an additional 20,000 updates on Common Voice only. The additional fine-tuning on Common Voice hurts performance on the NST + Common Voice test set somewhat and, unsurprisingly, improves it on the Common Voice test set. It seems to perform generally better, though [citation needed].

[Chart: WER during training]

Usage

The model can be used directly (without a language model) as follows:

import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])

Citation

https://arxiv.org/abs/2205.03026

@misc{malmsten2022hearing,
      title={Hearing voices at the National Library -- a speech corpus and acoustic model for the Swedish language}, 
      author={Martin Malmsten and Chris Haffenden and Love Börjeson},
      year={2022},
      eprint={2205.03026},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}