---
library_name: transformers
license: mit
base_model: haryoaw/scenario-MDBT-TCR_data-cl-cardiff_cl_only
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
model-index:
  - name: scenario-NON-KD-PO-COPY-CDF-CL-D2_data-cl-cardiff_cl_only66
    results: []
---

scenario-NON-KD-PO-COPY-CDF-CL-D2_data-cl-cardiff_cl_only66

This model is a fine-tuned version of haryoaw/scenario-MDBT-TCR_data-cl-cardiff_cl_only on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the results below):

  • Loss: 5.9979
  • Accuracy: 0.4498
  • F1: 0.4497
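
The card does not document the task, but the accuracy/F1 metrics and the sequence-classification base model suggest the checkpoint can be loaded for classification with the standard transformers API. A minimal sketch, assuming the repo id is haryoaw/scenario-NON-KD-PO-COPY-CDF-CL-D2_data-cl-cardiff_cl_only66 and using a placeholder input text (neither is confirmed by the card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repo id assumed from the model name in this card; adjust if the actual id differs.
model_id = "haryoaw/scenario-NON-KD-PO-COPY-CDF-CL-D2_data-cl-cardiff_cl_only66"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Placeholder input; the card does not describe the expected text domain.
inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label names are not documented here; id2label may only contain generic LABEL_* entries.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```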

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 66
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
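
For reference, a minimal sketch of how these hyperparameters map onto transformers.TrainingArguments. The output directory is a placeholder, and dataset loading, tokenization, and metric computation are omitted; this is not the author's original training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./scenario-NON-KD-PO-COPY-CDF-CL-D2",  # placeholder path, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=66,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    # Adam betas/epsilon below are the TrainingArguments defaults and match the values listed above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```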

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0870  | 250  | 1.2493          | 0.4568   | 0.4501 |
| 0.812         | 2.1739  | 500  | 1.5854          | 0.4637   | 0.4628 |
| 0.812         | 3.2609  | 750  | 1.8772          | 0.4614   | 0.4605 |
| 0.4271        | 4.3478  | 1000 | 2.3694          | 0.4414   | 0.4339 |
| 0.4271        | 5.4348  | 1250 | 2.6689          | 0.4537   | 0.4473 |
| 0.1992        | 6.5217  | 1500 | 3.0050          | 0.4537   | 0.4527 |
| 0.1992        | 7.6087  | 1750 | 3.1201          | 0.4468   | 0.4406 |
| 0.1147        | 8.6957  | 2000 | 3.9025          | 0.4360   | 0.4298 |
| 0.1147        | 9.7826  | 2250 | 4.0949          | 0.4390   | 0.4331 |
| 0.0816        | 10.8696 | 2500 | 4.3006          | 0.4306   | 0.4218 |
| 0.0816        | 11.9565 | 2750 | 4.5881          | 0.4606   | 0.4569 |
| 0.0558        | 13.0435 | 3000 | 4.4255          | 0.4576   | 0.4577 |
| 0.0558        | 14.1304 | 3250 | 5.1150          | 0.4606   | 0.4600 |
| 0.0388        | 15.2174 | 3500 | 4.6378          | 0.4568   | 0.4571 |
| 0.0388        | 16.3043 | 3750 | 5.2331          | 0.4498   | 0.4458 |
| 0.0269        | 17.3913 | 4000 | 5.3200          | 0.4491   | 0.4481 |
| 0.0269        | 18.4783 | 4250 | 5.2543          | 0.4599   | 0.4583 |
| 0.0175        | 19.5652 | 4500 | 5.3747          | 0.4552   | 0.4548 |
| 0.0175        | 20.6522 | 4750 | 5.4521          | 0.4460   | 0.4448 |
| 0.0181        | 21.7391 | 5000 | 5.3489          | 0.4606   | 0.4604 |
| 0.0181        | 22.8261 | 5250 | 5.8017          | 0.4552   | 0.4543 |
| 0.0093        | 23.9130 | 5500 | 5.6669          | 0.4560   | 0.4560 |
| 0.0093        | 25.0    | 5750 | 5.5959          | 0.4529   | 0.4517 |
| 0.0076        | 26.0870 | 6000 | 5.8141          | 0.4576   | 0.4554 |
| 0.0076        | 27.1739 | 6250 | 5.8656          | 0.4560   | 0.4556 |
| 0.006         | 28.2609 | 6500 | 5.9365          | 0.4583   | 0.4577 |
| 0.006         | 29.3478 | 6750 | 5.9979          | 0.4498   | 0.4497 |

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.1.1+cu121
  • Datasets 2.14.5
  • Tokenizers 0.19.1
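
When reproducing the run, it can help to confirm that the local environment matches these versions. A minimal check, assuming all four packages are installed:

```python
import transformers, torch, datasets, tokenizers

# Versions listed in this card: transformers 4.44.2, torch 2.1.1+cu121,
# datasets 2.14.5, tokenizers 0.19.1.
for name, module in [("transformers", transformers), ("torch", torch),
                     ("datasets", datasets), ("tokenizers", tokenizers)]:
    print(f"{name}: {module.__version__}")
```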