---
license: apache-2.0
base_model: SpamAcc/ingredient_prune
tags:
  - generated_from_trainer
model-index:
  - name: ingredient_prune
    results: []
---

# ingredient_prune

This model is a fine-tuned version of [SpamAcc/ingredient_prune](https://huggingface.co/SpamAcc/ingredient_prune) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.0432
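
The card does not record the task or architecture, so the snippet below is only a minimal loading sketch: it assumes the checkpoint is published on the Hub under the repo id `SpamAcc/ingredient_prune` (taken from the metadata above) and that `AutoModel`/`AutoTokenizer` can resolve the architecture from the stored config. The sample ingredient string is purely illustrative.

```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical repo id, copied from the model-index metadata above; the
# task head is not documented, so we load the bare model with AutoModel.
repo_id = "SpamAcc/ingredient_prune"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
print(type(model).__name__)  # shows which architecture the config resolves to

# Illustrative input only; the forward pass assumes an encoder-style model.
inputs = tokenizer("2 cups chopped onions", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```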

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
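
As a rough guide to reproduction, the listing below maps these values onto Hugging Face `TrainingArguments` (Transformers 4.38-era API). The `output_dir` and the evaluation/logging cadence are assumptions, not recorded settings; `eval_steps=100` is inferred from the step spacing in the results table below.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ingredient_prune",   # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="steps",     # assumed: eval every 100 steps,
    eval_steps=100,                  # matching the results table
    logging_steps=100,
)
```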

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.312 | 1.82 | 100 | 0.0295 |
| 0.0533 | 3.64 | 200 | 0.0149 |
| 0.0247 | 5.45 | 300 | 0.0136 |
| 0.0149 | 7.27 | 400 | 0.0124 |
| 0.0114 | 9.09 | 500 | 0.0127 |
| 0.0086 | 10.91 | 600 | 0.0127 |
| 0.0075 | 12.73 | 700 | 0.0145 |
| 0.0061 | 14.55 | 800 | 0.0151 |
| 0.0058 | 16.36 | 900 | 0.0161 |
| 0.0044 | 18.18 | 1000 | 0.0169 |
| 0.0039 | 20.0 | 1100 | 0.0199 |
| 0.0044 | 21.82 | 1200 | 0.0181 |
| 0.0035 | 23.64 | 1300 | 0.0230 |
| 0.0039 | 25.45 | 1400 | 0.0226 |
| 0.0028 | 27.27 | 1500 | 0.0234 |
| 0.0026 | 29.09 | 1600 | 0.0272 |
| 0.0023 | 30.91 | 1700 | 0.0261 |
| 0.0028 | 32.73 | 1800 | 0.0254 |
| 0.0018 | 34.55 | 1900 | 0.0268 |
| 0.0022 | 36.36 | 2000 | 0.0303 |
| 0.002 | 38.18 | 2100 | 0.0286 |
| 0.0018 | 40.0 | 2200 | 0.0299 |
| 0.0024 | 41.82 | 2300 | 0.0322 |
| 0.0019 | 43.64 | 2400 | 0.0328 |
| 0.0015 | 45.45 | 2500 | 0.0310 |
| 0.002 | 47.27 | 2600 | 0.0352 |
| 0.0015 | 49.09 | 2700 | 0.0361 |
| 0.0013 | 50.91 | 2800 | 0.0358 |
| 0.0011 | 52.73 | 2900 | 0.0368 |
| 0.0017 | 54.55 | 3000 | 0.0387 |
| 0.0012 | 56.36 | 3100 | 0.0384 |
| 0.0011 | 58.18 | 3200 | 0.0402 |
| 0.0016 | 60.0 | 3300 | 0.0394 |
| 0.0012 | 61.82 | 3400 | 0.0403 |
| 0.0013 | 63.64 | 3500 | 0.0392 |
| 0.0011 | 65.45 | 3600 | 0.0413 |
| 0.0015 | 67.27 | 3700 | 0.0400 |
| 0.0021 | 69.09 | 3800 | 0.0412 |
| 0.0009 | 70.91 | 3900 | 0.0410 |
| 0.0013 | 72.73 | 4000 | 0.0419 |
| 0.0009 | 74.55 | 4100 | 0.0415 |
| 0.0011 | 76.36 | 4200 | 0.0418 |
| 0.0008 | 78.18 | 4300 | 0.0422 |
| 0.0013 | 80.0 | 4400 | 0.0434 |
| 0.0011 | 81.82 | 4500 | 0.0436 |
| 0.0011 | 83.64 | 4600 | 0.0434 |
| 0.0008 | 85.45 | 4700 | 0.0434 |
| 0.0009 | 87.27 | 4800 | 0.0436 |
| 0.0006 | 89.09 | 4900 | 0.0442 |
| 0.0009 | 90.91 | 5000 | 0.0436 |
| 0.001 | 92.73 | 5100 | 0.0434 |
| 0.0008 | 94.55 | 5200 | 0.0433 |
| 0.0013 | 96.36 | 5300 | 0.0434 |
| 0.001 | 98.18 | 5400 | 0.0433 |
| 0.0008 | 100.0 | 5500 | 0.0432 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
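
To approximate this environment when experimenting, one possible pin of the versions above (assuming matching wheels are still available for your platform) is:

```bash
pip install transformers==4.38.2 torch==2.1.2 datasets==2.1.0 tokenizers==0.15.2
```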