|
--- |
|
license: apache-2.0 |
|
language: |
|
- nl |
|
pipeline_tag: text-classification |
|
inference: false |
|
--- |
|
# Affective Norms Extrapolation Model for Dutch Language |
|
|
|
## Model Description |
|
|
|
This transformer-based model extrapolates affective norms for Dutch words, including valence, arousal, dominance, and age of acquisition. It was fine-tuned from the Dutch BERT model ('models/gronlp_ducth_base.pth'), extended with additional layers that predict the affective dimensions. The model was first released as part of the publication: "Extrapolation of affective norms using transformer-based neural networks and its application to experimental stimuli selection." (Plisiecki & Sobieszek, 2023) [ https://doi.org/10.3758/s13428-023-02212-3 ]
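As a rough illustration, a multi-head regression setup of this kind might look as follows. This is a minimal sketch: the class name, layer sizes, and head structure are assumptions for illustration, not the repository's actual `CustomModel`.

```python
import torch
import torch.nn as nn

class AffectiveHead(nn.Module):
    # Hypothetical regression head: maps a BERT sentence embedding to one norm.
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.regressor(embedding)

# One head per predicted affective dimension.
heads = {name: AffectiveHead() for name in
         ["valence", "arousal", "dominance", "age_of_acquisition"]}

# Dummy batch of two 768-dimensional embeddings standing in for BERT output.
embeddings = torch.randn(2, 768)
scores = {name: head(embeddings) for name, head in heads.items()}
print({name: tuple(t.shape) for name, t in scores.items()})
```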
|
|
|
## Training Data |
|
|
|
The model was trained on the Dutch affective norms dataset by Moors et al. (2013) [ https://doi.org/10.3758/s13428-012-0243-8 ], which includes 4299 words rated by participants on various emotional and semantic dimensions. The dataset was split into training, validation, and test sets in an 8:1:1 ratio. |
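The 8:1:1 split described above can be sketched as follows. This is hypothetical code: the word list is placeholder strings standing in for the 4299 rated words, and the seed and truncation behavior are illustrative choices, not the paper's procedure.

```python
import random

# Placeholder data standing in for the 4299 rated Dutch words.
words = [f"word_{i}" for i in range(4299)]

random.seed(42)        # illustrative seed for reproducibility
random.shuffle(words)  # shuffle before splitting

n = len(words)
n_train = int(0.8 * n)  # 80% training
n_val = int(0.1 * n)    # 10% validation

train = words[:n_train]
val = words[n_train:n_train + n_val]
test = words[n_train + n_val:]  # remainder is the test set

print(len(train), len(val), len(test))
```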
|
|
|
## Performance |
|
|
|
The model achieved the following Pearson correlations with human judgments on the test set: |
|
|
|
- Valence: 0.87 |
|
- Arousal: 0.80 |
|
- Dominance: 0.75 |
|
- Age of Acquisition: 0.82 |
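For reference, a Pearson correlation between model predictions and human ratings can be computed as below. The numbers are toy values for illustration, not the paper's data.

```python
import numpy as np

# Toy example: human valence ratings vs. model predictions for five words.
human = np.array([2.1, 4.5, 6.2, 3.3, 5.8])
model = np.array([2.4, 4.1, 6.0, 3.6, 5.5])

# Pearson r is the off-diagonal entry of the 2x2 correlation matrix.
r = np.corrcoef(human, model)[0, 1]
print(f"Pearson r: {r:.2f}")
```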
|
|
|
|
|
## Usage |
|
|
|
You can use the model and tokenizer as follows: |
|
|
|
First, run the bash command below to clone the repository (this may take some time). Because the model uses a custom model class, it cannot be loaded through the standard Hugging Face `AutoModel` setup.
|
|
|
```bash |
|
git clone https://huggingface.co/hplisiecki/word2affect_dutch |
|
``` |
|
|
|
Proceed as follows: |
|
|
|
```python |
|
from word2affect_dutch.model_script import CustomModel # importing the custom model class |
|
from transformers import AutoTokenizer |
|
|
|
model_directory = "word2affect_dutch" # path to the cloned repository |
|
|
|
model = CustomModel.from_pretrained(model_directory) |
|
tokenizer = AutoTokenizer.from_pretrained(model_directory) |
|
|
|
# Score a single word.
inputs = tokenizer("test", return_tensors="pt")
outputs = model(inputs['input_ids'], inputs['attention_mask'])

# Print the predicted affective ratings.
for emotion, rating in zip(['Valence', 'Arousal', 'Dominance', 'Age of Acquisition'], outputs):
    print(f"{emotion}: {rating.item()}")
|
``` |
|
|
|
## Citation |
|
|
|
If you use this model, please cite the following paper:
|
|
|
```bibtex
@article{Plisiecki_Sobieszek_2023,
  title={Extrapolation of affective norms using transformer-based neural networks and its application to experimental stimuli selection},
  author={Plisiecki, Hubert and Sobieszek, Adam},
  journal={Behavior Research Methods},
  year={2023},
  pages={1--16},
  doi={10.3758/s13428-023-02212-3}
}
```