Wav2vec 2.0 XLS-R For Spontaneous Speech Emotion Recognition

This model achieved first place in the SER track of the Automatic Speech Recognition for spontaneous and prepared speech & Speech Emotion Recognition in Portuguese (SE&R 2022) Workshop.

The following datasets were used in the training:

  • CORAA SER v1.0: a dataset of spontaneous Portuguese speech, comprising approximately 40 minutes of audio segments labeled in three classes: neutral, non-neutral female, and non-neutral male.

  • EMOVO Corpus: a database of emotional speech for the Italian language, built from the voices of 6 actors who each performed 14 sentences simulating 6 emotional states (disgust, fear, anger, joy, surprise, sadness) plus the neutral state.

  • RAVDESS: a dataset of 1,440 recordings of actors performing 8 different emotions in English: angry, calm, disgust, fearful, happy, neutral, sad, and surprised.

  • BAVED: a collection of audio recordings of Arabic words spoken with varying degrees of emotion. The dataset contains seven words (like, unlike, this, file, good, neutral, and bad), each spoken at three emotion levels: low (the speaker is tired or feeling down), neutral (the speaker's everyday manner of speaking), and high (strong positive or negative emotion such as happiness, joy, sadness, or anger).

The test set is a held-out portion of CORAA SER v1.0 reserved for this purpose.

It achieves the following results on the test set:

  • Accuracy: 0.9090
  • Macro Precision: 0.8171
  • Macro Recall: 0.8397
  • Macro F1-Score: 0.8187
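
For reference, macro-averaged metrics of this kind can be computed with scikit-learn; the snippet below is a minimal, hypothetical sketch (the label lists are made up for illustration) and is not the evaluation script used to produce the numbers above.

```python
# Illustrative sketch only: how macro-averaged metrics of this kind can be
# computed with scikit-learn. The label lists below are hypothetical and are
# not the actual test-set predictions behind the reported results.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = ["neutral", "non-neutral-female", "non-neutral-male", "neutral"]
y_pred = ["neutral", "non-neutral-female", "neutral", "neutral"]

accuracy = accuracy_score(y_true, y_pred)
macro_p, macro_r, macro_f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"Accuracy: {accuracy:.4f} | Macro Precision: {macro_p:.4f} | "
      f"Macro Recall: {macro_r:.4f} | Macro F1-Score: {macro_f1:.4f}")
```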

Dataset Details

The following image shows the overall distribution of the datasets:

[Figure: overall dataset distribution]

The following image shows the number of instances by label:

[Figure: number of instances per label]

Repository

The repository with the code used to train and evaluate the model is available here.
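
Since the checkpoint is published on the Hugging Face Hub as alefiury/wav2vec2-xls-r-300m-pt-br-spontaneous-speech-emotion-recognition, a minimal inference sketch could look like the following. This assumes the checkpoint is compatible with transformers' audio-classification pipeline; the audio file name and example output are placeholders.

```python
# Minimal inference sketch, assuming the checkpoint loads through
# transformers' audio-classification pipeline (wav2vec 2.0 with a
# sequence-classification head). File name and output are illustrative.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="alefiury/wav2vec2-xls-r-300m-pt-br-spontaneous-speech-emotion-recognition",
)

# wav2vec 2.0 XLS-R models expect 16 kHz mono audio.
predictions = classifier("example.wav")  # hypothetical local file
print(predictions)
# e.g. [{'label': 'neutral', 'score': ...}, {'label': 'non-neutral-female', 'score': ...}, ...]
```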
