---
language: da
tags:
  - danish
  - bert
  - sentiment
  - text-classification
  - Maltehb/danish-bert-botxo
  - Helsinki-NLP/opus-mt-en-da
  - go-emotion
  - Certainly
license: cc-by-4.0
datasets:
  - go_emotions
metrics:
  - Accuracy
widget:
  - text: Det er så sødt af dig at tænke på andre på den måde, ved du det?
  - text: Jeg vil gerne have en playstation.
  - text: Jeg elsker dig
---

# Danish-Bert-GoÆmotion

A Danish Go-Emotion classifier: [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) (uncased) fine-tuned on a translation of the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset, produced with [Helsinki-NLP/opus-mt-en-da](https://huggingface.co/Helsinki-NLP/opus-mt-en-da). Performance is therefore only as good as the translation model.
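The translated training set itself is not part of this card. As a rough illustration, a translation pass over go_emotions might look like the sketch below; this is an assumption about the preprocessing, not the author's actual script, and the `translate_batch` helper is hypothetical:

```python
from datasets import load_dataset
from transformers import pipeline

# English source data.
go_emotions = load_dataset("go_emotions", "simplified")

# The same MT model named in this card.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-da")

def translate_batch(batch):
    # Hypothetical helper: translate the texts, keep the labels unchanged.
    results = translator(batch["text"], truncation=True)
    batch["text"] = [r["translation_text"] for r in results]
    return batch

go_emotions_da = go_emotions.map(translate_batch, batched=True, batch_size=32)
```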

## Training Parameters

- Num examples = 189900
- Num epochs = 3
- Train batch size = 8
- Eval batch size = 8
- Learning rate = 3e-5
- Warmup steps = 4273
- Total optimization steps = 71125
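For reference, these hyperparameters map naturally onto `TrainingArguments` from `transformers`; the sketch below assumes the standard `Trainer` setup, which the card does not confirm was used:

```python
from transformers import TrainingArguments

# Assumed mapping of the listed hyperparameters; the argument names are the
# standard TrainingArguments fields, not taken from the original card.
training_args = TrainingArguments(
    output_dir="danish-bert-go-aemotion",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=3e-5,
    warmup_steps=4273,
)
```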

## Loss

| Training loss | Eval. loss               |
|---------------|--------------------------|
|               | 0.1178 (21100 examples)  |

## Using the model with `transformers`

The easiest way to use the model is with `transformers` and the `pipeline` API:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model = AutoModelForSequenceClassification.from_pretrained('RJuro/danish-bert-go-aemotion')
tokenizer = AutoTokenizer.from_pretrained('RJuro/danish-bert-go-aemotion')

# Wrap them in a text-classification pipeline.
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

classifier('jeg elsker dig')
```

```
[{'label': 'kærlighed', 'score': 0.9634820818901062}]
```
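To get a score for every emotion label rather than only the top one, recent `transformers` versions accept a `top_k` argument on the text-classification pipeline; a minimal sketch, assuming a version where `top_k` is supported:

```python
# top_k=None returns a score for every label instead of only the best one.
classifier_all = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer, top_k=None)
classifier_all('jeg elsker dig')
```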

## Using the model with `simpletransformers`

```python
from simpletransformers.classification import MultiLabelClassificationModel

# go_emotions is multi-label, so use the multi-label classification model.
model = MultiLabelClassificationModel('bert', 'RJuro/danish-bert-go-aemotion')

# predict expects a list of strings, so convert the pandas column first.
predictions, raw_outputs = model.predict(df['text'].tolist())
```
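Here `df` is assumed to be a pandas DataFrame with a `text` column. A minimal end-to-end sketch, where the 0.5 cut-off for turning the per-label probabilities into labels is an illustrative choice rather than anything the card prescribes:

```python
import pandas as pd

df = pd.DataFrame({'text': ['Jeg elsker dig',
                            'Jeg vil gerne have en playstation.']})

predictions, raw_outputs = model.predict(df['text'].tolist())

# predictions holds one binary vector per text; raw_outputs holds the
# per-label probabilities. Keep label indices scoring above 0.5.
for text, probs in zip(df['text'], raw_outputs):
    active = [i for i, p in enumerate(probs) if p > 0.5]
    print(text, active)
```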