# gpt2-finetuned-cola
This model is GPT-2 fine-tuned on the GLUE CoLA (Corpus of Linguistic Acceptability) dataset. It achieves the following result on the validation set. Although the Matthews correlation coefficient (MCC) is the standard metric for CoLA, accuracy is reported here; a short sketch comparing the two metrics follows the result below.
- Accuracy: 0.77756
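Since the card reports accuracy rather than MCC, the following is a minimal illustrative sketch of how both metrics can be computed with scikit-learn. The label and prediction lists are placeholders, not the actual validation outputs.

```python
# Hypothetical sketch: accuracy vs. Matthews correlation coefficient (MCC),
# the standard CoLA metric. The lists below are placeholder values.
from sklearn.metrics import accuracy_score, matthews_corrcoef

labels = [1, 0, 1, 1, 0, 1]       # 1 = acceptable, 0 = unacceptable
predictions = [1, 0, 1, 0, 0, 1]  # placeholder model predictions

print("Accuracy:", accuracy_score(labels, predictions))
print("MCC:", matthews_corrcoef(labels, predictions))
```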
## Model Details
GPT-2 is a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on raw text only, with no human labelling of any kind (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. Despite this generative pretraining objective, it achieves very good results on text classification tasks.
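A minimal usage sketch, assuming the fine-tuned checkpoint is published under a repo ID such as `gpt2-finetuned-cola` (replace with the actual ID) and that label 1 corresponds to "acceptable":

```python
# Sketch: classify a sentence for linguistic acceptability with the
# fine-tuned checkpoint. The repo ID and label mapping are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gpt2-finetuned-cola"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The book was written by John.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()  # assumed: 1 = acceptable, 0 = unacceptable
print(predicted_class)
```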
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 5.0
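The exact training script is not included with this card, but the sketch below shows one way to reproduce the setup with the Hugging Face `Trainer` using the hyperparameters listed above. Details such as the padding strategy, maximum sequence length, and the use of Adam's epsilon are assumptions.

```python
# A sketch of the fine-tuning setup, assuming a standard Trainer-based
# pipeline; padding, max_length, and optimizer details are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("glue", "cola")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

args = TrainingArguments(
    output_dir="gpt2-finetuned-cola",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5.0,
    seed=42,
    adam_epsilon=1e-08,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"])
trainer.train()
```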
### Training results
| Epoch | Training Loss | Validation Loss | Training Accuracy | Validation Accuracy |
|---|---|---|---|---|
| 1 | 0.65824 | 0.55883 | 0.67957 | 0.70086 |
| 2 | 0.51774 | 0.53402 | 0.74927 | 0.75264 |
| 3 | 0.44502 | 0.50641 | 0.79172 | 0.77756 |
| 4 | 0.38996 | 0.54241 | 0.82645 | 0.77469 |
| 5 | 0.35218 | 0.55088 | 0.84516 | 0.77277 |