
Whisper Telugu Large-v2

This model is a fine-tuned version of openai/whisper-large-v2 on Telugu data drawn from multiple publicly available ASR corpora. It was fine-tuned as part of the Whisper fine-tuning sprint.

NOTE: The code used to train this model is available for re-use in the whisper-finetune repository.

Usage

To evaluate this model on an entire dataset, the evaluation scripts available in the whisper-finetune repository can be used.
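If a quick dataset-level check is preferred over running those scripts, a minimal sketch along the following lines can be used. It assumes the google/fleurs Telugu ("te_in") test split and the jiwer package for computing WER; neither is prescribed by this card, so substitute your own evaluation set as needed:

>>> import torch
>>> from datasets import load_dataset, Audio
>>> from jiwer import wer
>>> from transformers import pipeline

>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> asr = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-telugu-large-v2", chunk_length_s=30, device=device)
>>> asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(language="te", task="transcribe")

>>> # assumed evaluation set: FLEURS Telugu test split, resampled to 16 kHz
>>> dataset = load_dataset("google/fleurs", "te_in", split="test")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))

>>> predictions = [asr(sample["audio"]["array"])["text"] for sample in dataset]
>>> references = [sample["transcription"] for sample in dataset]
>>> print("WER: ", wer(references, predictions))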

The same repository also provides the scripts for faster inference using whisper-jax.

To transcribe a single audio file with this model, the following code snippet can be used:

>>> import torch
>>> from transformers import pipeline

>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"

>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-telugu-large-v2", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="te", task="transcribe")

>>> print('Transcription: ', transcribe(audio)["text"])
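On recent transformers releases, the language and task can instead be passed per call through generate_kwargs rather than by setting forced_decoder_ids; a variant of the last line under that assumption would be:

>>> # alternative on newer transformers versions: pass language/task at call time
>>> print('Transcription: ', transcribe(audio, generate_kwargs={"language": "te", "task": "transcribe"})["text"])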

For faster inference of Whisper models, the whisper-jax library can be used. Please follow the necessary installation steps mentioned here before using the following code snippet:

>>> import jax.numpy as jnp
>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline

>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"

>>> transcribe = FlaxWhisperPipline("vasista22/whisper-telugu-large-v2", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="te", task="transcribe")

>>> print('Transcription: ', transcribe(audio)["text"])
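The whisper-jax documentation also describes running the pipeline in half precision for additional speed; a sketch of that variant (the dtype argument comes from the whisper-jax documentation, not from this card) is:

>>> # optional: instantiate the JAX pipeline in bfloat16 for faster inference
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-telugu-large-v2", dtype=jnp.bfloat16, batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="te", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])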

Training and evaluation data

Training Data:

Evaluation Data:

Training hyperparameters

The following hyperparameters were used during training (an illustrative Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 0.75e-05
  • train_batch_size: 8
  • eval_batch_size: 32
  • seed: 22
  • optimizer: adamw_bnb_8bit
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 22000
  • training_steps: 75000
  • mixed_precision_training: True
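For readers who want to reproduce a comparable setup with the Hugging Face Trainer rather than the whisper-finetune scripts, the values above map onto Seq2SeqTrainingArguments roughly as follows. This is an illustrative sketch (the output_dir is assumed), not the exact configuration used to train this model:

>>> from transformers import Seq2SeqTrainingArguments

>>> # illustrative mapping of the listed hyperparameters onto Trainer arguments
>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="./whisper-telugu-large-v2",   # assumed output path
...     learning_rate=0.75e-05,
...     per_device_train_batch_size=8,
...     per_device_eval_batch_size=32,
...     seed=22,
...     optim="adamw_bnb_8bit",
...     lr_scheduler_type="linear",
...     warmup_steps=22000,
...     max_steps=75000,
...     fp16=True,                                # mixed_precision_training: True
... )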

Acknowledgement

This work was done at Speech Lab, IIT Madras.

The compute resources for this work were funded by the "Bhashini: National Language Translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
