---
tags:
  - generated_from_trainer
model-index:
  - name: bert-base-spanish-wwm-cased-finetuned-tweets
    results: []
datasets:
  - jhonparra18/petro-tweets
language:
  - es
metrics:
  - perplexity
license: apache-2.0
---

bert-base-spanish-wwm-cased-finetuned-tweets

This model is a fine-tuned version of bert-base-spanish-wwm-cased on the petro-tweets dataset. It achieves the following results on the evaluation set:

  • Loss: 2.7191
  • Perplexity: 15.17
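The reported perplexity is simply the exponential of the evaluation loss, which can be checked with a couple of lines of Python:

```python
import math

eval_loss = 2.7191
perplexity = math.exp(eval_loss)  # perplexity = exp(cross-entropy loss)
print(f"{perplexity:.2f}")        # ≈ 15.17
```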

Model description

This model is a fine-tuned version of bert-base-spanish-wwm-cased using the petro-tweets dataset (available on the Hugging Face Hub). The purpose is to adapt bert-base-spanish-wwm-cased to the tweet domain; the fine-tuned model is intended for the fill-mask (fill-in-the-gaps) task.
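A minimal sketch of the fill-mask use case is shown below. The repository id mariav/bert-base-spanish-wwm-cased-finetuned-tweets is an assumption based on the model name; replace it with the actual Hub id of this model.

```python
from transformers import pipeline

# NOTE: the repo id is an assumption based on the model name;
# substitute the actual Hugging Face Hub id of this model.
fill_mask = pipeline(
    "fill-mask",
    model="mariav/bert-base-spanish-wwm-cased-finetuned-tweets",
)

# BERT-style Spanish models use the [MASK] token.
for prediction in fill_mask("El presidente anunció una nueva [MASK] para el país."):
    print(prediction["token_str"], round(prediction["score"], 3))
```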

Intended uses & limitations

Use is limited to educational purposes. The main limitation is the size of the dataset: it is too small to add much domain knowledge, so a larger dataset would be needed to obtain a larger improvement.

Training and evaluation data

Training reports the loss on both the training and validation sets. (It takes a long time; if you are using Colab, I recommend using fewer epochs, since the results do not change much. Even though the loss is fairly high, the model's performance in terms of perplexity is quite good.) I also checked the perplexity, which is a good metric for language models. As shown above, the perplexity is good because it is quite low.

  • Evaluation: I checked the performance of my model in the notebook provided, just by generating examples.
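The preprocessing code is not included in this card, so the following is only a minimal sketch of the standard masked-language-modeling data preparation. The text column name ("text") and the default 15% masking probability are assumptions; check the dataset's actual schema.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# dccuchile/bert-base-spanish-wwm-cased is the usual Hub id for the base BETO checkpoint.
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")
dataset = load_dataset("jhonparra18/petro-tweets")

# The column name "text" is an assumption; adjust to the dataset's schema.
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

# Randomly masks 15% of tokens in each batch (the standard MLM objective).
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```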

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
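A minimal sketch of how these hyperparameters map onto TrainingArguments and Trainer, assuming the tokenized dataset and data collator from the sketch in the previous section; the train/validation split names are assumptions.

```python
from transformers import AutoModelForMaskedLM, Trainer, TrainingArguments

model = AutoModelForMaskedLM.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")

training_args = TrainingArguments(
    output_dir="bert-base-spanish-wwm-cased-finetuned-tweets",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
    evaluation_strategy="epoch",  # assumption: evaluate once per epoch, matching the results table
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],       # split names are assumptions
    eval_dataset=tokenized["validation"],
    data_collator=data_collator,
)
trainer.train()
```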

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9867        | 1.0   | 79   | 2.8373          |
| 2.1251        | 2.0   | 158  | 2.8581          |
| 2.0998        | 3.0   | 237  | 2.7878          |
| 2.1207        | 4.0   | 316  | 2.8046          |
| 2.1268        | 5.0   | 395  | 2.7893          |
| 2.1453        | 6.0   | 474  | 2.8231          |
| 2.1548        | 7.0   | 553  | 2.7538          |
| 2.1582        | 8.0   | 632  | 2.7992          |
| 2.1775        | 9.0   | 711  | 2.6910          |
| 2.1962        | 10.0  | 790  | 2.7582          |
| 2.2260        | 11.0  | 869  | 2.7252          |
| 2.2153        | 12.0  | 948  | 2.7437          |
| 2.2421        | 13.0  | 1027 | 2.7776          |
| 2.2297        | 14.0  | 1106 | 2.7257          |
| 2.2265        | 15.0  | 1185 | 2.7722          |
| 2.2187        | 16.0  | 1264 | 2.7475          |
| 2.2272        | 17.0  | 1343 | 2.7317          |
| 2.2315        | 18.0  | 1422 | 2.6876          |
| 2.2619        | 19.0  | 1501 | 2.7518          |
| 2.2313        | 20.0  | 1580 | 2.7373          |

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu116
  • Datasets 2.10.1
  • Tokenizers 0.13.2