
PaReS-sentimenTw-political-PL

This model is a fine-tuned version of dkleczek/bert-base-polish-cased-v1 for three-class sentiment prediction. It was fine-tuned on a sample of 1,000 manually annotated Polish tweets.

The model was developed as part of the ComPathos project: https://www.ncn.gov.pl/sites/default/files/listy-rankingowe/2020-09-30apsv2/streszczenia/497124-en.pdf

from transformers import pipeline

# Load the fine-tuned model and its tokenizer as a text-classification pipeline.
model_path = "eevvgg/PaReS-sentimenTw-political-PL"
sentiment_task = pipeline(task="sentiment-analysis", model=model_path, tokenizer=model_path)

sequence = ["Cała ta śmieszna debata była próbą ukrycia problemów gospodarczych jakie są i nadejdą, pytania w większości o mało istotnych sprawach",
            "Brawo panie ministrze!"]

result = sentiment_task(sequence)
labels = [i['label'] for i in result]  # ['Negative', 'Positive']
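
Each element of result is a dict with a predicted label and a confidence score; a minimal way to inspect both (the printed values are illustrative, not part of the original card):

# Print each text with its predicted label and softmax score.
for text, pred in zip(sequence, result):
    print(f"{pred['label']} ({pred['score']:.3f}): {text}")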

Model Sources

  • BibTeX citation:
@misc{SentimenTwPLGK2023,
  author={Gajewska, Ewelina and Konat, Barbara},
  title={PaReSTw: BERT for Sentiment Detection in Polish Language},
  year={2023},
  howpublished = {\url{https://huggingface.co/eevvgg/PaReS-sentimenTw-political-PL}},
}

Intended uses & limitations

Sentiment detection in Polish text (fine-tuned on tweets from the political domain).

Training and evaluation data

  • Trained for 3 epochs with a mini-batch size of 8 (see the fine-tuning sketch below).
  • Final training loss: 0.1359
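
A minimal sketch of a comparable fine-tuning setup with the Hugging Face Trainer. The tiny in-line dataset, label ids, and output path below are illustrative placeholders, not the original training data or script:

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

base_model = "dkleczek/bert-base-polish-cased-v1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

# Placeholder data: the 1k manually annotated tweets used for training are not reproduced here.
data = Dataset.from_dict({
    "text": ["Brawo panie ministrze!",
             "Cała ta śmieszna debata była próbą ukrycia problemów gospodarczych jakie są i nadejdą"],
    "label": [2, 0],   # illustrative label ids only; the card does not document the id-to-label mapping
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="pares-sentimentw",    # illustrative output path
    num_train_epochs=3,               # as reported above
    per_device_train_batch_size=8,    # mini-batch size of 8
)

Trainer(model=model, args=args, train_dataset=data).train()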

It achieves the following results on the held-out test set (10% of the data):

  • No. of examples = 100
  • mini-batch size = 8
  • accuracy = 0.950
  • macro F1 = 0.944

            precision    recall  f1-score   support
    
         0      0.960     0.980     0.970        49
         1      0.958     0.885     0.920        26
         2      0.923     0.960     0.941        25
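
The per-class table above has the layout of scikit-learn's classification_report. A minimal sketch of producing such a report from model predictions on one's own labelled test set; the gold and predicted label ids below are placeholders:

from sklearn.metrics import classification_report

# y_true: gold label ids for the test tweets; y_pred: ids predicted by the model.
# Both lists are placeholders for an actual labelled test set of 100 examples.
y_true = [0, 1, 2, 0]
y_pred = [0, 1, 2, 0]

print(classification_report(y_true, y_pred, digits=3))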
    
