---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: best_model-sst-2-32-100
  results: []
---

# best_model-sst-2-32-100

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set (see the usage sketch below):

- Loss: 0.5168
- Accuracy: 0.9219
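
Since the card is otherwise sparse, a usage sketch may help. This assumes the checkpoint is published under the repo id `simonycl/best_model-sst-2-32-100` (inferred from the model name, not stated on the card) and loads it as a standard sequence-classification model:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Repo id is inferred from the model name on this card (an assumption,
# not confirmed by the card itself).
model_id = "simonycl/best_model-sst-2-32-100"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("a gripping, beautifully acted film", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The card does not document label names; for SST-2-style fine-tunes,
# index 1 is conventionally "positive".
print(logits.softmax(dim=-1), logits.argmax(dim=-1).item())
```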

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
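
For reproduction, these settings map one-to-one onto `transformers.TrainingArguments`. The sketch below is an assumed equivalent: `output_dir` and the per-epoch evaluation cadence are filled in from the results table rather than stated on the card, and the optimizer line above corresponds to the `Trainer`'s default AdamW with betas=(0.9, 0.999) and eps=1e-08.

```python
from transformers import TrainingArguments

# A sketch only: output_dir and evaluation cadence are assumptions.
training_args = TrainingArguments(
    output_dir="best_model-sst-2-32-100",
    learning_rate=1e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=150,
    evaluation_strategy="epoch",  # the results table logs one eval per epoch
)
```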

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.8101 | 0.9062 |
| No log | 2.0 | 4 | 0.8102 | 0.9062 |
| No log | 3.0 | 6 | 0.8102 | 0.9062 |
| No log | 4.0 | 8 | 0.8100 | 0.9062 |
| 0.6019 | 5.0 | 10 | 0.8098 | 0.9062 |
| 0.6019 | 6.0 | 12 | 0.8095 | 0.9062 |
| 0.6019 | 7.0 | 14 | 0.8090 | 0.9062 |
| 0.6019 | 8.0 | 16 | 0.8085 | 0.9062 |
| 0.6019 | 9.0 | 18 | 0.8079 | 0.9062 |
| 0.6181 | 10.0 | 20 | 0.8073 | 0.9062 |
| 0.6181 | 11.0 | 22 | 0.8066 | 0.9062 |
| 0.6181 | 12.0 | 24 | 0.8061 | 0.9062 |
| 0.6181 | 13.0 | 26 | 0.8055 | 0.9062 |
| 0.6181 | 14.0 | 28 | 0.8048 | 0.9062 |
| 0.5045 | 15.0 | 30 | 0.8037 | 0.9062 |
| 0.5045 | 16.0 | 32 | 0.8020 | 0.9062 |
| 0.5045 | 17.0 | 34 | 0.8003 | 0.9062 |
| 0.5045 | 18.0 | 36 | 0.7978 | 0.9062 |
| 0.5045 | 19.0 | 38 | 0.7955 | 0.9062 |
| 0.4784 | 20.0 | 40 | 0.7928 | 0.9062 |
| 0.4784 | 21.0 | 42 | 0.7902 | 0.9062 |
| 0.4784 | 22.0 | 44 | 0.7868 | 0.9062 |
| 0.4784 | 23.0 | 46 | 0.7824 | 0.9062 |
| 0.4784 | 24.0 | 48 | 0.7764 | 0.9062 |
| 0.3582 | 25.0 | 50 | 0.7695 | 0.9062 |
| 0.3582 | 26.0 | 52 | 0.7628 | 0.9062 |
| 0.3582 | 27.0 | 54 | 0.7548 | 0.9062 |
| 0.3582 | 28.0 | 56 | 0.7473 | 0.9062 |
| 0.3582 | 29.0 | 58 | 0.7388 | 0.9062 |
| 0.3152 | 30.0 | 60 | 0.7286 | 0.9062 |
| 0.3152 | 31.0 | 62 | 0.7145 | 0.9062 |
| 0.3152 | 32.0 | 64 | 0.7007 | 0.9062 |
| 0.3152 | 33.0 | 66 | 0.6860 | 0.9062 |
| 0.3152 | 34.0 | 68 | 0.6662 | 0.9062 |
| 0.2403 | 35.0 | 70 | 0.6377 | 0.9062 |
| 0.2403 | 36.0 | 72 | 0.5941 | 0.9062 |
| 0.2403 | 37.0 | 74 | 0.5458 | 0.8906 |
| 0.2403 | 38.0 | 76 | 0.4985 | 0.8906 |
| 0.2403 | 39.0 | 78 | 0.4676 | 0.9219 |
| 0.1021 | 40.0 | 80 | 0.4598 | 0.9219 |
| 0.1021 | 41.0 | 82 | 0.4572 | 0.9375 |
| 0.1021 | 42.0 | 84 | 0.4521 | 0.9375 |
| 0.1021 | 43.0 | 86 | 0.4493 | 0.9375 |
| 0.1021 | 44.0 | 88 | 0.4420 | 0.9375 |
| 0.016 | 45.0 | 90 | 0.4264 | 0.9375 |
| 0.016 | 46.0 | 92 | 0.4104 | 0.9375 |
| 0.016 | 47.0 | 94 | 0.4008 | 0.9375 |
| 0.016 | 48.0 | 96 | 0.4056 | 0.9062 |
| 0.016 | 49.0 | 98 | 0.4256 | 0.9219 |
| 0.0016 | 50.0 | 100 | 0.4450 | 0.9062 |
| 0.0016 | 51.0 | 102 | 0.4667 | 0.9062 |
| 0.0016 | 52.0 | 104 | 0.4946 | 0.9062 |
| 0.0016 | 53.0 | 106 | 0.5189 | 0.9062 |
| 0.0016 | 54.0 | 108 | 0.5347 | 0.9062 |
| 0.0008 | 55.0 | 110 | 0.5434 | 0.9062 |
| 0.0008 | 56.0 | 112 | 0.5500 | 0.9062 |
| 0.0008 | 57.0 | 114 | 0.5545 | 0.9062 |
| 0.0008 | 58.0 | 116 | 0.5557 | 0.9062 |
| 0.0008 | 59.0 | 118 | 0.5535 | 0.9062 |
| 0.0005 | 60.0 | 120 | 0.5492 | 0.9062 |
| 0.0005 | 61.0 | 122 | 0.5389 | 0.9062 |
| 0.0005 | 62.0 | 124 | 0.5249 | 0.9062 |
| 0.0005 | 63.0 | 126 | 0.5044 | 0.9062 |
| 0.0005 | 64.0 | 128 | 0.4804 | 0.9062 |
| 0.0008 | 65.0 | 130 | 0.4611 | 0.9219 |
| 0.0008 | 66.0 | 132 | 0.4474 | 0.9375 |
| 0.0008 | 67.0 | 134 | 0.4373 | 0.9375 |
| 0.0008 | 68.0 | 136 | 0.4299 | 0.9375 |
| 0.0008 | 69.0 | 138 | 0.4246 | 0.9219 |
| 0.0003 | 70.0 | 140 | 0.4213 | 0.9219 |
| 0.0003 | 71.0 | 142 | 0.4191 | 0.9219 |
| 0.0003 | 72.0 | 144 | 0.4177 | 0.9219 |
| 0.0003 | 73.0 | 146 | 0.4283 | 0.9219 |
| 0.0003 | 74.0 | 148 | 0.4393 | 0.9375 |
| 0.0011 | 75.0 | 150 | 0.4489 | 0.9375 |
| 0.0011 | 76.0 | 152 | 0.4577 | 0.9375 |
| 0.0011 | 77.0 | 154 | 0.4659 | 0.9375 |
| 0.0011 | 78.0 | 156 | 0.4734 | 0.9219 |
| 0.0011 | 79.0 | 158 | 0.4803 | 0.9219 |
| 0.0003 | 80.0 | 160 | 0.4866 | 0.9219 |
| 0.0003 | 81.0 | 162 | 0.4924 | 0.9062 |
| 0.0003 | 82.0 | 164 | 0.4845 | 0.9219 |
| 0.0003 | 83.0 | 166 | 0.4663 | 0.9375 |
| 0.0003 | 84.0 | 168 | 0.4532 | 0.9375 |
| 0.0072 | 85.0 | 170 | 0.4429 | 0.9375 |
| 0.0072 | 86.0 | 172 | 0.4352 | 0.9375 |
| 0.0072 | 87.0 | 174 | 0.4297 | 0.9375 |
| 0.0072 | 88.0 | 176 | 0.4255 | 0.9219 |
| 0.0072 | 89.0 | 178 | 0.4223 | 0.9219 |
| 0.0002 | 90.0 | 180 | 0.4201 | 0.9219 |
| 0.0002 | 91.0 | 182 | 0.4184 | 0.9219 |
| 0.0002 | 92.0 | 184 | 0.4171 | 0.9219 |
| 0.0002 | 93.0 | 186 | 0.4163 | 0.9219 |
| 0.0002 | 94.0 | 188 | 0.4231 | 0.9219 |
| 0.0002 | 95.0 | 190 | 0.4306 | 0.9375 |
| 0.0002 | 96.0 | 192 | 0.4377 | 0.9375 |
| 0.0002 | 97.0 | 194 | 0.4440 | 0.9375 |
| 0.0002 | 98.0 | 196 | 0.4494 | 0.9375 |
| 0.0002 | 99.0 | 198 | 0.4542 | 0.9375 |
| 0.0002 | 100.0 | 200 | 0.4582 | 0.9375 |
| 0.0002 | 101.0 | 202 | 0.4617 | 0.9375 |
| 0.0002 | 102.0 | 204 | 0.4646 | 0.9375 |
| 0.0002 | 103.0 | 206 | 0.4676 | 0.9375 |
| 0.0002 | 104.0 | 208 | 0.4705 | 0.9375 |
| 0.0002 | 105.0 | 210 | 0.4729 | 0.9375 |
| 0.0002 | 106.0 | 212 | 0.4749 | 0.9375 |
| 0.0002 | 107.0 | 214 | 0.4769 | 0.9375 |
| 0.0002 | 108.0 | 216 | 0.4788 | 0.9375 |
| 0.0002 | 109.0 | 218 | 0.4803 | 0.9375 |
| 0.0002 | 110.0 | 220 | 0.4810 | 0.9375 |
| 0.0002 | 111.0 | 222 | 0.4817 | 0.9375 |
| 0.0002 | 112.0 | 224 | 0.4825 | 0.9375 |
| 0.0002 | 113.0 | 226 | 0.4837 | 0.9375 |
| 0.0002 | 114.0 | 228 | 0.4849 | 0.9375 |
| 0.0002 | 115.0 | 230 | 0.4857 | 0.9219 |
| 0.0002 | 116.0 | 232 | 0.4679 | 0.9375 |
| 0.0002 | 117.0 | 234 | 0.4374 | 0.9375 |
| 0.0002 | 118.0 | 236 | 0.4225 | 0.9375 |
| 0.0002 | 119.0 | 238 | 0.4275 | 0.9375 |
| 0.0004 | 120.0 | 240 | 0.4352 | 0.9375 |
| 0.0004 | 121.0 | 242 | 0.4423 | 0.9375 |
| 0.0004 | 122.0 | 244 | 0.4481 | 0.9375 |
| 0.0004 | 123.0 | 246 | 0.4509 | 0.9375 |
| 0.0004 | 124.0 | 248 | 0.4527 | 0.9375 |
| 0.0002 | 125.0 | 250 | 0.4528 | 0.9375 |
| 0.0002 | 126.0 | 252 | 0.4530 | 0.9375 |
| 0.0002 | 127.0 | 254 | 0.4531 | 0.9375 |
| 0.0002 | 128.0 | 256 | 0.4531 | 0.9375 |
| 0.0002 | 129.0 | 258 | 0.4530 | 0.9375 |
| 0.0014 | 130.0 | 260 | 0.4188 | 0.9375 |
| 0.0014 | 131.0 | 262 | 0.4099 | 0.9531 |
| 0.0014 | 132.0 | 264 | 0.4306 | 0.9219 |
| 0.0014 | 133.0 | 266 | 0.4583 | 0.9219 |
| 0.0014 | 134.0 | 268 | 0.4801 | 0.9219 |
| 0.0001 | 135.0 | 270 | 0.4951 | 0.9219 |
| 0.0001 | 136.0 | 272 | 0.5056 | 0.9219 |
| 0.0001 | 137.0 | 274 | 0.5134 | 0.9062 |
| 0.0001 | 138.0 | 276 | 0.5179 | 0.9062 |
| 0.0001 | 139.0 | 278 | 0.5215 | 0.9062 |
| 0.0001 | 140.0 | 280 | 0.5243 | 0.9062 |
| 0.0001 | 141.0 | 282 | 0.5255 | 0.9062 |
| 0.0001 | 142.0 | 284 | 0.5258 | 0.9062 |
| 0.0001 | 143.0 | 286 | 0.5261 | 0.9062 |
| 0.0001 | 144.0 | 288 | 0.5262 | 0.9062 |
| 0.0001 | 145.0 | 290 | 0.5261 | 0.9062 |
| 0.0001 | 146.0 | 292 | 0.5236 | 0.9062 |
| 0.0001 | 147.0 | 294 | 0.5214 | 0.9062 |
| 0.0001 | 148.0 | 296 | 0.5194 | 0.9219 |
| 0.0001 | 149.0 | 298 | 0.5177 | 0.9219 |
| 0.0001 | 150.0 | 300 | 0.5168 | 0.9219 |

### Framework versions

- Transformers 4.32.0.dev0
- PyTorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
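
When attempting to reproduce the results, it may be worth confirming that the local environment matches the versions above; a minimal check:

```python
import datasets
import tokenizers
import torch
import transformers

# Compare the installed versions against those listed above.
for mod in (transformers, torch, datasets, tokenizers):
    print(f"{mod.__name__}: {mod.__version__}")
```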