---
language:
- en
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
# Model Card: LEVI Whisper Medium Fine-Tuned Model
## Model Information
- **Model Name:** levicu/LEVI_whisper_medium
- **Description:** This model is a fine-tuned version of the OpenAI Whisper Medium model, adapted for speech recognition on the LEVI v2 dataset of classroom audiovisual recordings.
- **Model Architecture:** openai/whisper-medium
- **Dataset:** LEVI_LoFi_v2/TRAIN (per-utterance transcripts and 16 kHz WAV audio)
  - Both student and tutor speech were used.
  - Manifest: LEVI_LoFi_v2_TRAIN_punc+cased.csv (a minimal loading sketch follows this list)
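For illustration only, here is a minimal sketch of iterating over the training manifest. The column names `audio_path` and `transcript` are assumptions; the actual CSV schema is not documented in this card.

```python
import pandas as pd
import soundfile as sf

# Hypothetical column names -- the actual manifest schema is not documented here.
manifest = pd.read_csv("LEVI_LoFi_v2_TRAIN_punc+cased.csv")
for row in manifest.itertuples():
    audio, sr = sf.read(row.audio_path)  # assumed column: audio_path
    assert sr == 16000                   # the card states 16 kHz WAV audio
    text = row.transcript                # assumed column: transcript
```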
## Training Details
- **Training Procedure:**
  - LoRA parameter-efficient fine-tuning (see the configuration sketch after this list) with:
    - r=32
    - lora_alpha=64
    - target_modules=["q_proj", "v_proj"]
    - lora_dropout=0.05
    - bias="none"
  - INT8 quantization of the base model
  - Trained for 6 epochs with a learning rate of 1e-4, 100 warmup steps, and no gradient accumulation.
- **Evaluation Metrics:** Word Error Rate (WER)
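The exact training script is not included in this card. The sketch below shows how the stated LoRA and INT8 settings map onto the Hugging Face `peft` and `transformers` APIs; only the hyperparameters listed above come from this card, and everything else is illustrative.

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 8-bit, matching the INT8 quantization noted above.
model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-medium", load_in_8bit=True
)
model = prepare_model_for_kbit_training(model)

# LoRA configuration using the hyperparameters stated in this card.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```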
## Evaluation
- **Metric:** Word Error Rate (WER)
- **Results on held-out test sets** (a generic WER computation sketch follows the table):

| Test set | Manifest | WER |
|---|---|---|
| LoFi Students | LEVI_LoFi_v2_TEST_punc+cased_student | 44.1% |
| LoFi Tutors | LEVI_LoFi_v2_TEST_punc+cased_tutor | 15.1% |
| HiFi Students | LEVI_orig11_HiFi_punc+cased_student | 44.2% |
| HiFi Tutors | LEVI_orig11_HiFi_punc+cased_tutor | 15.9% |
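WER can be computed with the Hugging Face `evaluate` library. This is a generic sketch, not the exact evaluation script used to produce the numbers above.

```python
import evaluate

# Generic WER computation; placeholder strings stand in for real transcripts.
wer_metric = evaluate.load("wer")
predictions = ["the quick brown fox"]        # model transcriptions
references = ["the quick brown fox jumps"]   # ground-truth transcripts
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.1%}")
```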
## Usage
- **Usage:** The model can be used for automatic speech recognition. Input is audio (the model was trained on 16 kHz WAV); output is a text transcription. A minimal inference sketch follows.
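The sketch below loads the model with `transformers`, assuming this repository contains full (merged) model weights; if it holds only LoRA adapter weights, load the base model and attach the adapter with `peft.PeftModel.from_pretrained` instead. The file name `utterance.wav` is a placeholder.

```python
import soundfile as sf
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
model = WhisperForConditionalGeneration.from_pretrained("levicu/LEVI_whisper_medium")

# Placeholder file; the model expects 16 kHz mono audio.
audio, sr = sf.read("utterance.wav")
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```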
## Limitations and Ethical Considerations
- **Limitations:** No specific limitations have been documented for this model.
- **Ethical Considerations:** Consider the ethical implications of using this model, particularly in scenarios involving sensitive or private information.
## License
- **License:** Not specified.
## Contact Information
- **Contact:** For questions, feedback, or support regarding the model, please contact [email protected] or [email protected].