tags:
- pyannote
- pyannote-audio
- pyannote-audio-model
- audio
- voice
- speech
- speaker
- speaker-diarization
- speaker-change-detection
- speaker-segmentation
- voice-activity-detection
- overlapped-speech-detection
- resegmentation
license: mit
inference: false
extra_gated_prompt: >-
The collected information will help acquire a better knowledge of
pyannote.audio userbase and help its maintainers apply for grants to improve
it further. If you are an academic researcher, please cite the relevant papers
in your own publications using the model. If you work for a company, please
consider contributing back to pyannote.audio development (e.g. through
unrestricted gifts). We also provide scientific consulting services around
speaker diarization and machine listening.
extra_gated_fields:
Company/university: text
Website: text
I plan to use this model for (task, type of audio data, etc): text
---

We propose (paid) scientific consulting services to companies willing to make the most of their data and open-source speech processing toolkits (and pyannote in particular).
# 🎹 Speaker segmentation with powerset encoding
The various concepts behind this model are described in detail in the Interspeech 2023 paper cited below (Plaquet and Bredin, 2023).

It ingests mono audio sampled at 16kHz (ideally in 10-second chunks) and outputs speaker diarization as a (num_frames, num_classes) matrix, where the 7 classes are non-speech, speaker #1, speaker #2, speaker #3, speakers #1 and #2, speakers #1 and #3, and speakers #2 and #3.
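To make the powerset encoding concrete, here is a minimal sketch (plain NumPy, with a random `logits` array standing in for actual model output) of how the 7-class argmax can be decoded into a binary (num_frames, 3) per-speaker activity matrix:

```python
import numpy as np

# hypothetical scores for one 10-second chunk: (num_frames, 7);
# random values stand in for actual model output here
num_frames = 589  # assumption: the actual frame count is model-dependent
logits = np.random.randn(num_frames, 7)

# the 7 powerset classes, in the order listed above
POWERSET = [
    set(),     # non-speech
    {0},       # speaker #1
    {1},       # speaker #2
    {2},       # speaker #3
    {0, 1},    # speakers #1 and #2
    {0, 2},    # speakers #1 and #3
    {1, 2},    # speakers #2 and #3
]

# pick the most likely class per frame, then expand it into a
# binary (num_frames, 3) per-speaker activity matrix
best_class = logits.argmax(axis=1)
activity = np.zeros((num_frames, 3), dtype=int)
for frame, klass in enumerate(best_class):
    for speaker in POWERSET[klass]:
        activity[frame, speaker] = 1
```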
It was trained by Séverin Baroudi with pyannote.audio 3.0.0, using a combination of the training sets of AISHELL, AliMeeting, AMI, AVA-AVD, DIHARD, Ego4D, MSDWild, REPERE, and VoxConverse.
## Usage
```python
# 1. visit hf.co/pyannote/segmentation-3.0 and accept user conditions
# 2. visit hf.co/settings/tokens to create an access token
# 3. instantiate pretrained model
from pyannote.audio import Model

model = Model.from_pretrained(
    "pyannote/segmentation-3.0",
    use_auth_token="ACCESS_TOKEN_GOES_HERE")
```
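Once instantiated, the model can be applied to a chunk of audio with pyannote.audio's `Inference` helper. A minimal sketch, assuming a hypothetical 10-second, 16kHz, mono file named `chunk.wav`:

```python
from pyannote.audio import Inference

# apply the model to the whole (10-second) file in one pass
inference = Inference(model, window="whole")
scores = inference("chunk.wav")
# `scores` is a (num_frames, num_classes) array of raw powerset activations
```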
## Speaker diarization
This model cannot perform speaker diarization of full recordings on its own (it only processes 10-second chunks).

See the pyannote/speaker-diarization-3.0 pipeline, which uses an additional speaker embedding model to perform speaker diarization of full recordings.
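For reference, a minimal sketch of loading and applying that pipeline (it is gated the same way, so the same access token applies):

```python
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.0",
    use_auth_token="ACCESS_TOKEN_GOES_HERE")
diarization = pipeline("audio.wav")

# `diarization` is a pyannote.core.Annotation with one track per speaker turn
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```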
## Voice activity detection
```python
from pyannote.audio.pipelines import VoiceActivityDetection

pipeline = VoiceActivityDetection(segmentation=model)
HYPER_PARAMETERS = {
    # remove speech regions shorter than that many seconds.
    "min_duration_on": 0.0,
    # fill non-speech regions shorter than that many seconds.
    "min_duration_off": 0.0,
}
pipeline.instantiate(HYPER_PARAMETERS)
vad = pipeline("audio.wav")
# `vad` is a pyannote.core.Annotation instance containing speech regions
```
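The resulting annotation can then be turned into plain speech regions; a small sketch:

```python
# iterate over contiguous speech regions
for segment in vad.get_timeline().support():
    print(f"speech from {segment.start:.1f}s to {segment.end:.1f}s")
```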
## Overlapped speech detection
```python
from pyannote.audio.pipelines import OverlappedSpeechDetection

pipeline = OverlappedSpeechDetection(segmentation=model)
HYPER_PARAMETERS = {
    # remove overlapped speech regions shorter than that many seconds.
    "min_duration_on": 0.0,
    # fill non-overlapped speech regions shorter than that many seconds.
    "min_duration_off": 0.0,
}
pipeline.instantiate(HYPER_PARAMETERS)
osd = pipeline("audio.wav")
# `osd` is a pyannote.core.Annotation instance containing overlapped speech regions
```
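For instance, a small sketch computing the total amount of overlapped speech:

```python
# total duration of overlapped speech, in seconds (summed over regions)
total_overlap = sum(segment.duration for segment in osd.get_timeline().support())
print(f"{total_overlap:.1f}s of overlapped speech detected")
```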
## Citation
```bibtex
@inproceedings{Plaquet23,
  author={Alexis Plaquet and Hervé Bredin},
  title={{Powerset multi-class cross entropy loss for neural speaker diarization}},
  year=2023,
  booktitle={Proc. INTERSPEECH 2023},
}
```

```bibtex
@inproceedings{Bredin23,
  author={Hervé Bredin},
  title={{pyannote.audio 2.1 speaker diarization pipeline: principle, benchmark, and recipe}},
  year=2023,
  booktitle={Proc. INTERSPEECH 2023},
}
```