---
language: fr
license: mit
library_name: transformers
tags:
  - audio
  - audio-to-audio
  - speech
datasets:
  - Cnam-LMSSC/vibravox
model-index:
  - name: EBEN(M=4,P=2,Q=4)
    results:
      - task:
          name: Bandwidth Extension
          type: speech-enhancement
        dataset:
          name: Vibravox["rigid_in_ear_microphone"]
          type: Cnam-LMSSC/vibravox
          args: fr
        metrics:
          - name: Test STOI, in-domain training
            type: stoi
            value: 0.8773
          - name: Test Noresqa-MOS, in-domain training
            type: n-mos
            value: 4.285
---

# Model Card

## Overview

This bandwidth extension (BWE) model, trained on Vibravox body-conduction sensor data, enhances body-conducted speech by denoising it and regenerating the mid and high frequencies from the low-frequency content.

## Disclaimer

This model was trained for a specific non-conventional speech sensor and is intended to be used with in-domain data. Applying it to audio captured by other sensors may lead to suboptimal performance.

## Link to BWE models trained on other body-conducted sensors

The entry point to all EBEN models for Bandwidth Extension (BWE) is available at https://huggingface.co/Cnam-LMSSC/vibravox_EBEN_models.

## Training procedure

Detailed instructions for reproducing the experiments are available in the [jhauret/vibravox](https://github.com/jhauret/vibravox) GitHub repository.

## Inference script

```python
import torch, torchaudio
from vibravox.torch_modules.dnn.eben_generator import EBENGenerator
from datasets import load_dataset

# Load the pretrained EBEN generator and stream the Vibravox test split
model = EBENGenerator.from_pretrained("Cnam-LMSSC/EBEN_rigid_in_ear_microphone")
test_dataset = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test", streaming=True)

# Fetch one rigid in-ear microphone recording (48 kHz) and resample it to the model's 16 kHz rate
audio_48kHz = torch.Tensor(next(iter(test_dataset))["audio.rigid_in_ear_microphone"]["array"])
audio_16kHz = torchaudio.functional.resample(audio_48kHz, orig_freq=48_000, new_freq=16_000)

# Add batch and channel dimensions, crop to a length the model accepts, then enhance
cut_audio_16kHz = model.cut_to_valid_length(audio_16kHz[None, None, :])
enhanced_audio_16kHz, enhanced_speech_decomposed = model(cut_audio_16kHz)
```
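To listen to the result, the enhanced tensor can be written to disk, e.g. with `torchaudio.save`. As a dependency-free alternative, the sketch below uses only the Python standard library to write 16 kHz float samples as a 16-bit PCM mono WAV file; the sine wave is a placeholder standing in for `enhanced_audio_16kHz.squeeze().tolist()`, and the file name is arbitrary.

```python
import math
import struct
import wave

SAMPLE_RATE = 16_000  # EBEN operates on 16 kHz audio

# Placeholder waveform standing in for the model's enhanced output:
# one second of a 440 Hz tone with float samples in [-1, 1].
samples = [0.5 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]

# Clip each float sample to [-1, 1], scale to int16, and write a mono WAV file.
with wave.open("enhanced_audio.wav", "wb") as f:
    f.setnchannels(1)           # mono
    f.setsampwidth(2)           # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    pcm = struct.pack(
        f"<{len(samples)}h",
        *(int(max(-1.0, min(1.0, s)) * 32767) for s in samples),
    )
    f.writeframes(pcm)
```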