---
language:
- en
license: cc-by-4.0
library_name: nemo
tags:
- speaker-recognition
- speech
- audio
- speaker-verification
- titanet
- speaker-diarization
- NeMo
- pytorch
datasets:
- librispeech_asr
- VOXCELEB-1
- VOXCELEB-2
- FISHER
- Switchboard
- SRE(2004-2010)
model-index:
- name: speakerverification_en
  results:
  - task:
      type: speaker-verification
    dataset:
      name: VoxCeleb1 (cleaned trial file)
      type: voxceleb1
      args:
        language: en
    metrics:
    - type: eer
      value: 0.66
      name: EER
---


## Model Overview

This model extracts speaker embeddings from given speech, which is the backbone for speaker verification and diarization tasks.
It is the "large" version of the TitaNet model (around 23M parameters).
See the [model architecture](#model-architecture) section and the [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/models.html) for complete architecture details.

## How to Use this Model

The model is available for use in the NeMo toolkit [2] and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
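NeMo must be installed first. Assuming a recent PyTorch is already available, the `asr` extras pull in the speaker-task dependencies:

```shell
pip install nemo_toolkit['asr']
```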

### Automatically instantiate the model

```python
import nemo.collections.asr as nemo_asr

# Downloads the pretrained checkpoint from the Hugging Face Hub (cached locally after the first call)
speaker_model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained("nvidia/speakerverification_en_titanet_large")
```

### Embedding Extraction

To extract a speaker embedding from a single audio file:

```python
emb = speaker_model.get_embedding("an255-fash-b.wav")
```
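The returned embedding is a torch tensor, and cosine similarity between two such embeddings is the usual verification score. A minimal sketch, reusing the file names from this card:

```python
import torch.nn.functional as F

emb1 = speaker_model.get_embedding("an255-fash-b.wav")
emb2 = speaker_model.get_embedding("cen7-fash-b.wav")
# Cosine similarity near 1.0 suggests the same speaker; near 0, different speakers
print(F.cosine_similarity(emb1, emb2, dim=-1).item())
```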

### Verifying two utterances (Speaker Verification)

To check whether two audio files are from the same speaker:

```python
# Returns True if the two utterances are judged to come from the same speaker
speaker_model.verify_speakers("an255-fash-b.wav", "cen7-fash-b.wav")
```
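The boolean decision can be captured and used directly; a small usage sketch:

```python
same_speaker = speaker_model.verify_speakers("an255-fash-b.wav", "cen7-fash-b.wav")
print("same speaker" if same_speaker else "different speakers")
```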

### Extracting Embeddings for more audio files

To extract embeddings from a bunch of audio files:

Write the audio files to a `manifest.json` file, one JSON entry per line, in the following format:

```json
{"audio_filepath": "<absolute path to dataset>/audio_file.wav", "duration": "duration of file in sec", "label": "speaker_id"}
```
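As a convenience, here is a minimal sketch for generating such a manifest from your own WAV files (the file paths and speaker IDs below are hypothetical placeholders):

```python
import json
import wave

# Hypothetical file-to-speaker mapping; replace with your own paths and IDs
files = {
    "/data/audio_file.wav": "speaker_0001",
    "/data/another_file.wav": "speaker_0002",
}

with open("manifest.json", "w") as manifest:
    for path, speaker_id in files.items():
        # Duration in seconds, read from the WAV header
        with wave.open(path) as wav:
            duration = wav.getnframes() / wav.getframerate()
        entry = {"audio_filepath": path, "duration": duration, "label": speaker_id}
        manifest.write(json.dumps(entry) + "\n")
```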

Then run the following script to extract embeddings and write them to the current working directory:
```shell
python <NeMo_root>/examples/speaker_tasks/recognition/extract_speaker_embeddings.py --manifest=manifest.json
```

### Input

This model accepts 16,000 Hz (16 kHz) mono-channel audio (WAV files) as input.
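If your recordings use a different sample rate or have multiple channels, one way to convert them is with ffmpeg (assuming ffmpeg is installed; the file names are placeholders):

```shell
ffmpeg -i input.wav -ac 1 -ar 16000 output_16k_mono.wav
```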

### Output

This model provides speaker embeddings for an audio file. 

## Model Architecture

TitaNet is a depth-wise separable 1D convolutional model [1] for speaker verification and diarization tasks. You can find more detail on this model here: [TitaNet model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/models.html).
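To illustrate the building block named above, here is a minimal PyTorch sketch of a depth-wise separable 1D convolution (illustrative only, not the actual NeMo module):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Illustrative depth-wise separable 1D convolution."""
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        # Depth-wise step: one filter per input channel (groups=in_channels)
        self.depthwise = nn.Conv1d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels)
        # Point-wise step: 1x1 convolution mixes information across channels
        self.pointwise = nn.Conv1d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):  # x: (batch, channels, time)
        return self.pointwise(self.depthwise(x))

block = DepthwiseSeparableConv1d(80, 256, kernel_size=3)
features = torch.randn(1, 80, 100)  # e.g., a batch of 80-dim mel features
print(block(features).shape)        # torch.Size([1, 256, 100])
```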

## Training

The NeMo toolkit [2] was used to train the models for several hundred epochs. These models were trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/speaker_tasks/recognition/speaker_reco.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/speaker_tasks/recognition/conf/titanet-large.yaml).
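A hypothetical training invocation is sketched below. The script reads the config via Hydra, so the exact override keys should be checked against the config file; the manifest paths are placeholders:

```shell
python <NeMo_root>/examples/speaker_tasks/recognition/speaker_reco.py \
    --config-path=conf --config-name=titanet-large \
    model.train_ds.manifest_filepath=train_manifest.json \
    model.validation_ds.manifest_filepath=dev_manifest.json
```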

### Datasets

All the models in this collection are trained on a composite dataset comprising several thousand hours of English speech:

- Voxceleb-1
- Voxceleb-2
- Fisher
- Switchboard
- Librispeech
- SRE (2004-2010) 


## Performance

Performance of these models is reported in terms of Equal Error Rate (EER%) on speaker verification evaluation trial files, and as Diarization Error Rate (DER%) on diarization test sessions.

* Speaker Verification (EER%)

| Version | Model         | Model Size | VoxCeleb1 (cleaned trial file) |
|---------|---------------|------------|--------------------------------|
| 1.10.0  | TitaNet-Large | 23M        | 0.66                           |

* Speaker Diarization (DER%)

| Version | Model         | Model Size | Evaluation Condition              | NIST SRE 2000 | AMI (Lapel) | AMI (MixHeadset) | CH109 |
|---------|---------------|------------|-----------------------------------|---------------|-------------|------------------|-------|
| 1.10.0  | TitaNet-Large | 23M        | Oracle VAD, known # of speakers   | 6.73          | 2.03        | 1.73             | 1.19  |
| 1.10.0  | TitaNet-Large | 23M        | Oracle VAD, unknown # of speakers | 5.38          | 2.03        | 1.89             | 1.63  |

## Limitations
This model is trained on both telephonic and non-telephonic speech from the VoxCeleb, Fisher, and Switchboard datasets. If your data domain differs from the training data, or the model does not perform well on your data, consider fine-tuning it for that speech domain.

## NVIDIA Riva: Deployment

[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides: 

* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours 
* Best-in-class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support 

Although this model is not yet supported by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out the [Riva live demo](https://developer.nvidia.com/riva#demos).

## References
[1] [TitaNet: Neural Model for Speaker Representation with 1D Depth-wise Separable Convolutions and Global Context](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9746806)
[2] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

## License

License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. By downloading the public and release version of the model, you accept the terms and conditions of the CC-BY-4.0 license.