---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-pipeline
- audio
- voice
- speech
- speaker
- speaker-diarization
- speaker-change-detection
- voice-activity-detection
- overlapped-speech-detection
datasets:
- ami
- dihard
- voxconverse
- aishell
- repere
- voxceleb
license: mit
---
# 🎹 Speaker diarization
Relies on pyannote.audio 2.0: see [installation instructions](https://github.com/pyannote/pyannote-audio/tree/develop#installation).
## TL;DR
```python
# load the pipeline from the Hugging Face Hub
from pyannote.audio import Pipeline
pipeline = Pipeline.from_pretrained("pyannote/[email protected]")
# apply the pipeline to an audio file
diarization = pipeline("audio.wav")
# dump the diarization output to disk using RTTM format
with open("audio.rttm", "w") as rttm:
    diarization.write_rttm(rttm)
```
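The returned `diarization` object is a `pyannote.core.Annotation`, so you can also iterate over it directly instead of writing an RTTM file. A minimal sketch:
```python
# iterate over speech turns, printing one line per (speaker, segment)
for segment, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{speaker}: {segment.start:.1f}s -> {segment.end:.1f}s")
```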
## Advanced usage
If the number of speakers is known in advance, you can pass it via the `num_speakers` key of the `parameters` dictionary (using the `EndpointHandler` defined further below):
```python
handler = EndpointHandler()
diarization = handler({"inputs": base64_audio, "parameters": {"num_speakers": 2}})
```
You can also provide lower and/or upper bounds on the number of speakers using the `min_speakers` and `max_speakers` parameters:
```python
handler = EndpointHandler()
diarization = handler({"inputs": base64_audio, "parameters": {"min_speakers": 2, "max_speakers": 5}})
```
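Both snippets above assume that `base64_audio` holds the audio as a base64-encoded string of 16 kHz, 16-bit mono PCM samples, which is what the handler below expects. A minimal sketch of how it could be prepared, assuming `soundfile` is available and `audio.wav` is just an example file name:
```python
import base64
import soundfile as sf  # assumption: any audio library returning int16 samples works

# load the audio and make sure it is 16 kHz, 16-bit mono PCM
samples, sample_rate = sf.read("audio.wav", dtype="int16")
assert sample_rate == 16000, "resample the file to 16 kHz first"
assert samples.ndim == 1, "the handler expects mono audio"

# base64-encode the raw PCM bytes
base64_audio = base64.b64encode(samples.tobytes()).decode("utf-8")
```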
If you're feeling adventurous, you can experiment with various pipeline hyperparameters.
For instance, you can use a more aggressive voice activity detection by increasing the value of the `segmentation_onset` threshold:
```python
hparams = handler.pipeline.parameters(instantiated=True)
hparams["segmentation_onset"] += 0.1
handler.pipeline.instantiate(hparams)
```
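Note that `instantiate` overwrites the pipeline's hyper-parameters in place, so it can be useful to keep a copy of the defaults in order to roll back after experimenting. A minimal sketch:
```python
import copy

# keep a copy of the default hyper-parameters before experimenting
default_hparams = copy.deepcopy(handler.pipeline.parameters(instantiated=True))

# ... experiment with more aggressive settings as shown above ...

# restore the defaults afterwards
handler.pipeline.instantiate(default_hparams)
```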
The `EndpointHandler` used above, which wraps the pipeline for inference API deployment and forwards speaker-count parameters to it, is defined as follows:
```python
from typing import Any, Dict

import base64
import numpy as np
import torch
from pyannote.audio import Pipeline

SAMPLE_RATE = 16000


class EndpointHandler:
    def __init__(self, path: str = ""):
        # load the pretrained diarization pipeline
        self.pipeline = Pipeline.from_pretrained("KIFF/pyannote-speaker-diarization-endpoint")

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        """
        Args:
            data (:obj:`dict`):
                "inputs": base64-encoded 16 kHz, 16-bit mono PCM audio
                "parameters" (optional): pipeline keyword arguments such as
                    num_speakers, min_speakers or max_speakers
        Return:
            A :obj:`dict` with a "diarization" key listing speaker segments.
        """
        # process input
        inputs = data.pop("inputs", data)
        parameters = data.pop("parameters", None)  # e.g. {"min_speakers": 2, "max_speakers": 5}

        # decode the base64 audio data into a 16-bit PCM array
        audio_data = base64.b64decode(inputs)
        audio_nparray = np.frombuffer(audio_data, dtype=np.int16)

        # prepare pyannote input: a (channel, time) float tensor plus the sample rate
        audio_tensor = torch.from_numpy(audio_nparray.copy()).float().unsqueeze(0)
        pyannote_input = {"waveform": audio_tensor, "sample_rate": SAMPLE_RATE}

        # apply the pretrained pipeline, forwarding any extra keyword arguments
        if parameters is not None:
            diarization = self.pipeline(pyannote_input, **parameters)
        else:
            diarization = self.pipeline(pyannote_input)

        # postprocess the prediction into a JSON-serializable list of segments
        processed_diarization = [
            {"label": str(label), "start": str(segment.start), "stop": str(segment.end)}
            for segment, _, label in diarization.itertracks(yield_label=True)
        ]

        return {"diarization": processed_diarization}
```
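Once deployed (or instantiated locally), the handler can be exercised as in the sketch below, which reuses the same 16 kHz PCM and `audio.wav` assumptions as above:
```python
import base64
import soundfile as sf  # assumption: same audio-loading helper as above

# prepare the request payload
samples, _ = sf.read("audio.wav", dtype="int16")
payload = {
    "inputs": base64.b64encode(samples.tobytes()).decode("utf-8"),
    "parameters": {"min_speakers": 2, "max_speakers": 5},
}

# run the handler locally and print each speaker segment
handler = EndpointHandler()
for turn in handler(payload)["diarization"]:
    print(turn["label"], turn["start"], turn["stop"])
```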
## Benchmark
### Real-time factor
Real-time factor is around 5% using one Nvidia Tesla V100 SXM2 GPU (for the neural inference part) and one Intel Cascade Lake 6248 CPU (for the clustering part).
In other words, it takes approximately 3 minutes to process a one-hour conversation.
### Accuracy
This pipeline is benchmarked on a growing collection of datasets.
Processing is fully automatic:
* no manual voice activity detection (as is sometimes the case in the literature)
* no manual number of speakers (though it is possible to provide it to the pipeline)
* no fine-tuning of the internal models nor tuning of the pipeline hyper-parameters to each dataset
... with the least forgiving diarization error rate (DER) setup (named *"Full"* in [this paper](https://doi.org/10.1016/j.csl.2021.101254)):
* no forgiveness collar
* evaluation of overlapped speech
| Benchmark | [DER%](. "Diarization error rate") | [FA%](. "False alarm rate") | [Miss%](. "Missed detection rate") | [Conf%](. "Speaker confusion rate") | Expected output | File-level evaluation |
| ---------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------- | --------------------------- | ---------------------------------- | ----------------------------------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------ |
| [AISHELL-4](http://www.openslr.org/111/) | 14.61 | 3.31 | 4.35 | 6.95 | [RTTM](reproducible_research/AISHELL.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/AISHELL.SpeakerDiarization.Full.test.eval) |
| [AMI *Mix-Headset*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 18.21 | 3.28 | 11.07 | 3.87 | [RTTM](reproducible_research/2022.07/AMI.SpeakerDiarization.only_words.test.rttm) | [eval](reproducible_research/2022.07/AMI.SpeakerDiarization.only_words.test.eval) |
| [AMI *Array1-01*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 29.00 | 2.71 | 21.61 | 4.68 | [RTTM](reproducible_research/2022.07/AMI-SDM.SpeakerDiarization.only_words.test.rttm) | [eval](reproducible_research/2022.07/AMI-SDM.SpeakerDiarization.only_words.test.eval) |
| [CALLHOME](https://catalog.ldc.upenn.edu/LDC2001S97) [*Part2*](https://github.com/BUTSpeechFIT/CALLHOME_sublists/issues/1) | 30.24 | 3.71 | 16.86 | 9.66 | [RTTM](reproducible_research/2022.07/CALLHOME.SpeakerDiarization.CALLHOME.test.rttm) | [eval](reproducible_research/2022.07/CALLHOME.SpeakerDiarization.CALLHOME.test.eval) |
| [DIHARD 3 *Full*](https://arxiv.org/abs/2012.01477) | 20.99 | 4.25 | 10.74 | 6.00 | [RTTM](reproducible_research/2022.07/DIHARD.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/2022.07/DIHARD.SpeakerDiarization.Full.test.eval) |
| [REPERE *Phase 2*](https://islrn.org/resources/360-758-359-485-0/) | 12.62 | 1.55 | 3.30 | 7.76 | [RTTM](reproducible_research/2022.07/REPERE.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/2022.07/REPERE.SpeakerDiarization.Full.test.eval) |
| [VoxConverse *v0.0.2*](https://github.com/joonson/voxconverse) | 12.76 | 3.45 | 3.85 | 5.46 | [RTTM](reproducible_research/2022.07/VoxConverse.SpeakerDiarization.VoxConverse.test.rttm) | [eval](reproducible_research/2022.07/VoxConverse.SpeakerDiarization.VoxConverse.test.eval) |
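The *Full* setup above (no forgiveness collar, overlapped speech evaluated) can be reproduced with [pyannote.metrics](https://github.com/pyannote/pyannote-metrics). A minimal sketch, assuming you have a reference RTTM file and the hypothesis RTTM produced by the pipeline:
```python
from pyannote.database.util import load_rttm
from pyannote.metrics.diarization import DiarizationErrorRate

# load reference and hypothesis annotations (one entry per file URI)
reference = load_rttm("reference.rttm")
hypothesis = load_rttm("audio.rttm")

# "Full" setup: no forgiveness collar, overlapped speech is evaluated
metric = DiarizationErrorRate(collar=0.0, skip_overlap=False)

# accumulate the metric over all files
for uri, ref in reference.items():
    metric(ref, hypothesis[uri])

print(f"DER = {abs(metric) * 100:.2f}%")
```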
## Support
For commercial enquiries and scientific consulting, please contact [me](mailto:[email protected]).
For [technical questions](https://github.com/pyannote/pyannote-audio/discussions) and [bug reports](https://github.com/pyannote/pyannote-audio/issues), please check the [pyannote.audio](https://github.com/pyannote/pyannote-audio) GitHub repository.
## Citations
```bibtex
@inproceedings{Bredin2021,
Title = {{End-to-end speaker segmentation for overlap-aware resegmentation}},
Author = {{Bredin}, Herv{\'e} and {Laurent}, Antoine},
Booktitle = {Proc. Interspeech 2021},
Address = {Brno, Czech Republic},
Month = {August},
Year = {2021},
}
```
```bibtex
@inproceedings{Bredin2020,
Title = {{pyannote.audio: neural building blocks for speaker diarization}},
Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
Address = {Barcelona, Spain},
Month = {May},
Year = {2020},
}
```