update title of README.md
README.md
CHANGED
@@ -13,6 +13,8 @@ base_model:
 pipeline_tag: audio-classification
 ---
 
+# End of Speech Detection with Wav2Vec 2.0
+
 The End-of-Speech model is based on the open-source Wav2Vec 2.0 model from Meta AI. It uses a convolutional feature encoder, which translates chunks of raw audio input into latent speech representations, and a transformer to capture information across this sequence of representations. This helps the model recognize falling pitch, final lengthening, and the pause that follows in the intonation, and therefore detect when an end-of-speech event occurs, much the way we humans do.
 
 # Training Data
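The decision logic the paragraph above describes — confirming an end-of-speech event once the classifier sees a sustained pause after final lengthening — can be sketched in plain Python. This is a minimal illustration, not code from the model card: the per-chunk probabilities, the threshold, and the confirmation window are all hypothetical assumptions.

```python
# Hypothetical sketch: turning per-chunk end-of-speech probabilities from an
# audio classifier into a single end-of-speech decision. The threshold and
# window values below are illustrative assumptions, not model-card values.

def detect_end_of_speech(probs, threshold=0.8, window=3):
    """Return the index of the first chunk in a run of `window` consecutive
    chunks whose end-of-speech probability exceeds `threshold`, or None if
    no such run occurs (i.e., speech has not ended)."""
    run = 0
    for i, p in enumerate(probs):
        run = run + 1 if p >= threshold else 0
        if run >= window:
            return i - window + 1  # first chunk of the confirming run
    return None

# Simulated output: speech (low probabilities), then final lengthening and
# the following pause (probabilities climb and stay high).
probs = [0.05, 0.10, 0.20, 0.15, 0.60, 0.85, 0.90, 0.95]
print(detect_end_of_speech(probs))  # → 5
```

Requiring several consecutive high-probability chunks, rather than a single one, keeps a brief mid-utterance hesitation from being mistaken for the end of the turn.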