|
--- |
|
language: fr |
|
license: mit |
|
library_name: transformers |
|
tags: |
|
- audio |
|
- audio-to-audio |
|
- speech |
|
datasets: |
|
- Cnam-LMSSC/vibravox |
|
model-index: |
|
- name: EBEN(M=4,P=2,Q=4) |
|
results: |
|
- task: |
|
name: Bandwidth Extension |
|
type: speech-enhancement |
|
dataset: |
|
name: Vibravox["rigid_in_ear_microphone"] |
|
type: Cnam-LMSSC/vibravox |
|
args: fr |
|
metrics: |
|
- name: Test STOI, in-domain training |
|
type: stoi |
|
value: 0.8773 |
|
- name: Test Noresqa-MOS, in-domain training |
|
type: n-mos |
|
value: 4.285 |
|
--- |
|
|
|
<p align="center"> |
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/65302a613ecbe51d6a6ddcec/zhB1fh-c0pjlj-Tr4Vpmr.png" style="object-fit:contain; width:280px; height:280px;" > |
|
</p> |
|
|
|
# Model Card |
|
|
|
- **Developed by:** [Cnam-LMSSC](https://huggingface.co/Cnam-LMSSC) |
|
- **Model type:** [EBEN](https://github.com/jhauret/vibravox/blob/main/vibravox/torch_modules/dnn/eben_generator.py) (see [publication](https://ieeexplore.ieee.org/document/10244161)) |
|
- **Language:** French |
|
- **License:** MIT |
|
- **Finetuned dataset:** `speech_clean` subset of [Cnam-LMSSC/vibravox](https://huggingface.co/datasets/Cnam-LMSSC/vibravox) |
|
- **Samplerate for usage:** 16kHz |
|
|
|
## Overview |
|
|
|
This bandwidth extension model is trained on data from one specific body-conduction sensor of the [Vibravox dataset](https://huggingface.co/datasets/Cnam-LMSSC/vibravox).
|
The model is designed to enhance the audio quality of body-conducted speech by denoising it and regenerating mid and high frequencies from the low-frequency content alone.
|
|
|
## Disclaimer |
|
This model has been trained for **specific non-conventional speech sensors** and is intended to be used with **in-domain data**. |
|
Please be advised that using this model on data from other sensors may result in suboptimal performance.
|
|
|
|
|
## Training procedure |
|
|
|
Detailed instructions for reproducing the experiments are available in the [jhauret/vibravox](https://github.com/jhauret/vibravox) GitHub repository.
|
|
|
## Inference script
|
|
|
```python |
|
import torch, torchaudio
from vibravox import EBENGenerator
from datasets import load_dataset

# Load the EBEN generator trained on the rigid in-ear microphone
model = EBENGenerator.from_pretrained("Cnam-LMSSC/EBEN_rigid_in_ear_microphone")
test_dataset = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test", streaming=True)

# Fetch one body-conducted recording (48 kHz) and resample it to the model's 16 kHz samplerate
audio_48kHz = torch.Tensor(next(iter(test_dataset))["audio.rigid_in_ear_microphone"]["array"])
audio_16kHz = torchaudio.functional.resample(audio_48kHz, orig_freq=48_000, new_freq=16_000)

# Trim the waveform to a length compatible with the model, then enhance it
cut_audio_16kHz = model.cut_to_valid_length(audio_16kHz)
enhanced_audio_16kHz = model(cut_audio_16kHz)
|
``` |
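The `cut_to_valid_length` call above trims the waveform so its length is compatible with the strided convolutions of the EBEN generator. A minimal sketch of the underlying idea, assuming a hypothetical stride product of 1024 samples (the actual multiple depends on the model's M, P, Q configuration):

```python
def cut_to_multiple(num_samples: int, multiple: int) -> int:
    """Largest length <= num_samples that is an exact multiple of `multiple`."""
    return num_samples - (num_samples % multiple)

# Hypothetical example: a 50_000-sample waveform trimmed for a stride product of 1024
print(cut_to_multiple(50_000, 1024))  # 49152
```

Only the trailing samples are dropped, so the enhanced output stays time-aligned with the input.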
|
|
|
## Links to other BWE models trained on other body-conducted sensors
|
|
|
An entry point to all **audio bandwidth extension** (BWE) models trained on different sensor data from the [Vibravox dataset](https://huggingface.co/datasets/Cnam-LMSSC/vibravox) is available at [https://huggingface.co/Cnam-LMSSC/vibravox_EBEN_bwe_models](https://huggingface.co/Cnam-LMSSC/vibravox_EBEN_bwe_models).
|
|