# Stance-Tw

This model is a fine-tuned version of j-hartmann/sentiment-roberta-large-english-3-classes that predicts three categories of author stance (attack, support, neutral) towards an entity mentioned in the text.

# Model usage
```python
from transformers import pipeline

model_path = "eevvgg/Stance-Tw"
cls_task = pipeline(task="text-classification", model=model_path, tokenizer=model_path)  # add device=0 to run on GPU

sequence = ['his rambling has no clear ideas behind it',
            'That has nothing to do with medical care',
            "Turns around and shows how qualified she is because of her political career.",
            'She has very little to gain by speaking too much']

result = cls_task(sequence)

labels = [i['label'] for i in result]

labels  # ['attack', 'neutral', 'support', 'attack']
```

# Intended uses & limitations

The model is suited for classification of stance in short texts. It was fine-tuned on a manually annotated corpus of 3.2k examples.
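
Beyond the pipeline call shown above, the checkpoint can also be used directly with `AutoModelForSequenceClassification`. The sketch below is illustrative only; the maximum sequence length and batching choices are assumptions, not part of the card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_path = "eevvgg/Stance-Tw"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
model.eval()

texts = ["his rambling has no clear ideas behind it",
         "That has nothing to do with medical care"]

# Pad/truncate so the short texts form a rectangular batch (max_length=128 is an assumption).
inputs = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)
labels = [model.config.id2label[i] for i in probs.argmax(dim=-1).tolist()]
print(labels)  # e.g. ['attack', 'neutral']
```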

# Training procedure

## Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'Adam', 'learning_rate': 4e-5, 'decay': 0.01}
  • epochs: 3
  • mini-batch size: 8
  • loss: 0.719
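
The card does not state which training framework was used. As an illustration only, a minimal Hugging Face `Trainer` run with the listed values might look as follows; the placeholder dataset and the reading of `decay` as weight decay are assumptions:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "j-hartmann/sentiment-roberta-large-english-3-classes"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base)  # 3-class head already present

# Placeholder data: the 3.2k manually annotated corpus is not released with the card.
data = Dataset.from_dict({
    "text": ["his rambling has no clear ideas behind it",
             "Turns around and shows how qualified she is because of her political career."],
    "label": [0, 2],  # hypothetical integer ids for attack / support
})
data = data.map(lambda x: tokenizer(x["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="stance-tw",
    num_train_epochs=3,             # from the card
    per_device_train_batch_size=8,  # from the card
    learning_rate=4e-5,             # from the card
    weight_decay=0.01,              # assumption: 'decay' read as weight decay
)

Trainer(model=model, args=args, train_dataset=data, tokenizer=tokenizer).train()
```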

# Evaluation data

It achieves the following results on the evaluation set:

  • macro f1-score: 0.758
  • weighted f1-score: 0.762
  • accuracy: 0.762
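
A sketch of how such metrics can be computed, assuming a held-out test set with gold labels (the evaluation split itself is not included in the card; the texts and labels below are placeholders):

```python
from sklearn.metrics import accuracy_score, f1_score
from transformers import pipeline

cls_task = pipeline("text-classification", model="eevvgg/Stance-Tw")

# Placeholder evaluation data: replace with the actual held-out split and its gold labels.
texts = ["his rambling has no clear ideas behind it",
         "That has nothing to do with medical care"]
gold = ["attack", "neutral"]

pred = [out["label"] for out in cls_task(texts)]

print("macro f1:   ", f1_score(gold, pred, average="macro"))
print("weighted f1:", f1_score(gold, pred, average="weighted"))
print("accuracy:   ", accuracy_score(gold, pred))
```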

# Citation

BibTeX: tba
