---
license: apache-2.0
base_model: google/electra-base-discriminator
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: electra_multiple_choice
    results: []
---

# electra_multiple_choice

This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.1882
- Accuracy: 0.97
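
The card does not yet document how to call the model, so the following is a minimal, hedged sketch of querying a multiple-choice ELECTRA checkpoint with the Transformers API. The repository id `irfanamal/electra_multiple_choice` and the prompt/choice pairing shown are assumptions for illustration; the actual input format depends on the (undocumented) training data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "irfanamal/electra_multiple_choice"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

prompt = "The capital of France is"              # illustrative example only
choices = ["Paris", "Berlin", "Madrid", "Rome"]

# Pair the prompt with every candidate answer; the head scores each pair.
encoding = tokenizer([prompt] * len(choices), choices, padding=True, return_tensors="pt")
# The multiple-choice head expects tensors of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

print(choices[logits.argmax(dim=-1).item()])
```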

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch reproducing them follows the list):

- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
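
As a point of reference, a minimal sketch of a `TrainingArguments` configuration matching the values above; the output directory and evaluation strategy are assumptions, everything else is copied from the list.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="electra_multiple_choice",  # assumed output directory
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer defaults.
    lr_scheduler_type="linear",
    num_train_epochs=100,
    evaluation_strategy="epoch",  # assumed; the results table reports per-epoch validation
)
```

These arguments would then be passed to a `Trainer` together with the (undocumented) training and evaluation datasets.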

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4562        | 1.0   | 1725  | 1.2637          | 0.46     |
| 1.2814        | 2.0   | 3450  | 1.0774          | 0.55     |
| 1.1775        | 3.0   | 5175  | 0.9072          | 0.675    |
| 1.069         | 4.0   | 6900  | 0.7319          | 0.72     |
| 0.9674        | 5.0   | 8625  | 0.6151          | 0.78     |
| 0.8679        | 6.0   | 10350 | 0.5140          | 0.825    |
| 0.7673        | 7.0   | 12075 | 0.4306          | 0.84     |
| 0.6849        | 8.0   | 13800 | 0.3721          | 0.87     |
| 0.6147        | 9.0   | 15525 | 0.3286          | 0.885    |
| 0.5522        | 10.0  | 17250 | 0.2913          | 0.895    |
| 0.4936        | 11.0  | 18975 | 0.2785          | 0.915    |
| 0.4532        | 12.0  | 20700 | 0.2467          | 0.91     |
| 0.4107        | 13.0  | 22425 | 0.2226          | 0.93     |
| 0.3825        | 14.0  | 24150 | 0.2073          | 0.945    |
| 0.3492        | 15.0  | 25875 | 0.2027          | 0.93     |
| 0.3189        | 16.0  | 27600 | 0.2269          | 0.925    |
| 0.2977        | 17.0  | 29325 | 0.2412          | 0.93     |
| 0.2817        | 18.0  | 31050 | 0.1913          | 0.935    |
| 0.266         | 19.0  | 32775 | 0.1517          | 0.94     |
| 0.2437        | 20.0  | 34500 | 0.2012          | 0.935    |
| 0.234         | 21.0  | 36225 | 0.1600          | 0.935    |
| 0.2195        | 22.0  | 37950 | 0.1688          | 0.955    |
| 0.2002        | 23.0  | 39675 | 0.1347          | 0.955    |
| 0.1987        | 24.0  | 41400 | 0.1976          | 0.95     |
| 0.1858        | 25.0  | 43125 | 0.1568          | 0.955    |
| 0.1784        | 26.0  | 44850 | 0.1453          | 0.955    |
| 0.169         | 27.0  | 46575 | 0.1547          | 0.955    |
| 0.1597        | 28.0  | 48300 | 0.1882          | 0.97     |

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3