---
license: apache-2.0
language: en
datasets:
- sst2
metrics:
- precision
- recall
- f1
tags:
- text-classification
---
# GPT-2-medium fine-tuned for Sentiment Analysis
[OpenAI's GPT-2](https://openai.com/blog/tags/gpt-2/) medium, fine-tuned on the [SST-2](https://huggingface.co/datasets/sst2) dataset for the **Sentiment Analysis** downstream task.
## Details of GPT-2
The **GPT-2** model was presented in [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) by *Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever*.
## Model fine-tuning
The model was fine-tuned for 10 epochs with standard hyperparameters; a sketch of a comparable training setup is shown below.
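For reference, this is a minimal sketch of what such a fine-tuning run could look like with the `transformers` `Trainer` API. The exact hyperparameters used for this model are not published; the batch size and learning rate below are illustrative assumptions.

```python
# Illustrative sketch only: batch size and learning rate are assumed, not the actual values used.
from datasets import load_dataset
from transformers import (GPT2Tokenizer, GPT2ForSequenceClassification,
                          Trainer, TrainingArguments)

dataset = load_dataset("sst2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token by default

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = GPT2ForSequenceClassification.from_pretrained("gpt2-medium", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id  # needed for padded batches

args = TrainingArguments(
    output_dir="gpt2-medium-sst2",
    num_train_epochs=10,            # matches the 10 epochs reported above
    per_device_train_batch_size=8,  # assumed value
    learning_rate=2e-5,             # assumed value
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```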
## Validation set metrics
|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative     | 0.92      | 0.92   | 0.92     | 428     |
| positive     | 0.92      | 0.93   | 0.92     | 444     |
| accuracy     |           |        | 0.92     | 872     |
| macro avg    | 0.92      | 0.92   | 0.92     | 872     |
| weighted avg | 0.92      | 0.92   | 0.92     | 872     |
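Numbers in this format can be recomputed with scikit-learn's `classification_report` on the SST-2 validation split. A sketch, assuming the published checkpoint and batched CPU inference:

```python
# Sketch of how the validation metrics above could be recomputed.
import torch
from datasets import load_dataset
from sklearn.metrics import classification_report
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

model_id = "michelecafagna26/gpt2-medium-finetuned-sst2-sentiment"
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = GPT2ForSequenceClassification.from_pretrained(model_id).eval()

if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
if model.config.pad_token_id is None:
    model.config.pad_token_id = tokenizer.pad_token_id

val = load_dataset("sst2", split="validation")
preds = []
with torch.no_grad():
    for batch in val.iter(batch_size=32):
        inputs = tokenizer(batch["sentence"], return_tensors="pt",
                           padding=True, truncation=True)
        preds.extend(model(**inputs).logits.argmax(dim=-1).tolist())

print(classification_report(val["label"], preds,
                            target_names=["negative", "positive"]))
```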
## Model in Action
```python
import torch
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("michelecafagna26/gpt2-medium-finetuned-sst2-sentiment")
model = GPT2ForSequenceClassification.from_pretrained("michelecafagna26/gpt2-medium-finetuned-sst2-sentiment")

inputs = tokenizer("I love it", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
logits.argmax(dim=-1)  # 1: Positive, 0: Negative
# Output: tensor([1])
```
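To get class probabilities instead of a hard label, apply a softmax to the same logits (continuing from the snippet above; the printed values are illustrative, not actual model output):

```python
import torch.nn.functional as F

# Continuing from the snippet above: probabilities over [negative, positive]
probs = F.softmax(logits, dim=-1)
print(probs)  # shape (1, 2); e.g. tensor([[0.01, 0.99]]) -- illustrative values
```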
> This model card is based on "mrm8488/t5-base-finetuned-imdb-sentiment" by Manuel Romero/@mrm8488