
thanavut/output

This model is a fine-tuned version of avsolatorio/GIST-large-Embedding-v0 on an unspecified dataset (the auto-generated card records the dataset name as None). It achieves the following results on the evaluation set:

  • Loss: 0.5549
  • F1: 0.6828
  • ROC AUC: 0.9255
  • Accuracy: 0.1053
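
The low accuracy alongside a much higher F1 and ROC AUC suggests a multi-label classification task, where accuracy is typically exact-match (subset) accuracy; the card itself does not document the task. Under that assumption, a minimal inference sketch (the problem type, label set, and 0.5 threshold are assumptions, not facts from this card):

```python
# Minimal inference sketch. Assumes the checkpoint carries a
# multi-label sequence-classification head; the sigmoid + 0.5
# threshold is an assumption, not documented in this card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "thanavut/output"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Example input sentence."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# For multi-label outputs, score each label independently and threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```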

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 70
  • mixed_precision_training: Native AMP
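
The list above maps directly onto transformers TrainingArguments. A minimal sketch of that mapping (output_dir and the per-epoch evaluation schedule are assumptions; only the listed values come from the card):

```python
# Hypothetical TrainingArguments mirroring the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",          # assumption: matches the repo name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=70,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: the results table logs once per epoch
)
```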

Training results

| Training Loss | Epoch | Step | Validation Loss | F1 | ROC AUC | Accuracy |
|---------------|-------|------|-----------------|------|--------|----------|
| 0.4089 | 1.0 | 50 | 0.3415 | 0.1336 | 0.7999 | 0.0702 |
| 0.3111 | 2.0 | 100 | 0.3084 | 0.2771 | 0.8501 | 0.0526 |
| 0.2544 | 3.0 | 150 | 0.2851 | 0.4233 | 0.8679 | 0.0526 |
| 0.209 | 4.0 | 200 | 0.2893 | 0.4545 | 0.8868 | 0.0526 |
| 0.1688 | 5.0 | 250 | 0.2560 | 0.5307 | 0.9137 | 0.1053 |
| 0.1335 | 6.0 | 300 | 0.2679 | 0.4982 | 0.9001 | 0.0702 |
| 0.1043 | 7.0 | 350 | 0.2689 | 0.5758 | 0.9070 | 0.1053 |
| 0.0813 | 8.0 | 400 | 0.2786 | 0.5994 | 0.9112 | 0.1228 |
| 0.0686 | 9.0 | 450 | 0.2742 | 0.6150 | 0.9119 | 0.1053 |
| 0.0553 | 10.0 | 500 | 0.2751 | 0.6498 | 0.9076 | 0.1404 |
| 0.0463 | 11.0 | 550 | 0.2905 | 0.5894 | 0.9156 | 0.1228 |
| 0.0401 | 12.0 | 600 | 0.2786 | 0.6313 | 0.9189 | 0.1579 |
| 0.0319 | 13.0 | 650 | 0.3090 | 0.6502 | 0.9127 | 0.1053 |
| 0.0277 | 14.0 | 700 | 0.2876 | 0.6024 | 0.9072 | 0.0877 |
| 0.0248 | 15.0 | 750 | 0.2991 | 0.6546 | 0.9275 | 0.0702 |
| 0.02 | 16.0 | 800 | 0.3128 | 0.6345 | 0.9217 | 0.0526 |
| 0.0176 | 17.0 | 850 | 0.3139 | 0.6782 | 0.9239 | 0.0877 |
| 0.0147 | 18.0 | 900 | 0.3128 | 0.6739 | 0.9232 | 0.1053 |
| 0.0128 | 19.0 | 950 | 0.3035 | 0.6718 | 0.9217 | 0.1228 |
| 0.0108 | 20.0 | 1000 | 0.3298 | 0.6531 | 0.9155 | 0.1053 |
| 0.0098 | 21.0 | 1050 | 0.3470 | 0.6596 | 0.9183 | 0.1053 |
| 0.0084 | 22.0 | 1100 | 0.3471 | 0.6674 | 0.9170 | 0.1404 |
| 0.0071 | 23.0 | 1150 | 0.3483 | 0.6756 | 0.9123 | 0.1228 |
| 0.0064 | 24.0 | 1200 | 0.3600 | 0.6734 | 0.9158 | 0.1053 |
| 0.0058 | 25.0 | 1250 | 0.3636 | 0.6734 | 0.9172 | 0.1228 |
| 0.0051 | 26.0 | 1300 | 0.3687 | 0.6826 | 0.9216 | 0.1053 |
| 0.0043 | 27.0 | 1350 | 0.3859 | 0.6627 | 0.9215 | 0.0877 |
| 0.0038 | 28.0 | 1400 | 0.3724 | 0.6759 | 0.9299 | 0.1053 |
| 0.0034 | 29.0 | 1450 | 0.4112 | 0.6869 | 0.9195 | 0.1228 |
| 0.0029 | 30.0 | 1500 | 0.3952 | 0.6985 | 0.9207 | 0.1404 |
| 0.0026 | 31.0 | 1550 | 0.4265 | 0.6762 | 0.9204 | 0.1228 |
| 0.0023 | 32.0 | 1600 | 0.4360 | 0.6861 | 0.9195 | 0.1053 |
| 0.002 | 33.0 | 1650 | 0.4182 | 0.6735 | 0.9271 | 0.0877 |
| 0.0018 | 34.0 | 1700 | 0.4394 | 0.6678 | 0.9211 | 0.0877 |
| 0.0016 | 35.0 | 1750 | 0.4406 | 0.6890 | 0.9288 | 0.0877 |
| 0.0014 | 36.0 | 1800 | 0.4398 | 0.6771 | 0.9240 | 0.1053 |
| 0.0013 | 37.0 | 1850 | 0.4394 | 0.6849 | 0.9226 | 0.0877 |
| 0.0012 | 38.0 | 1900 | 0.4642 | 0.6712 | 0.9147 | 0.0702 |
| 0.0011 | 39.0 | 1950 | 0.4667 | 0.6744 | 0.9223 | 0.0877 |
| 0.001 | 40.0 | 2000 | 0.4570 | 0.6662 | 0.9222 | 0.1053 |
| 0.0009 | 41.0 | 2050 | 0.4608 | 0.6871 | 0.9257 | 0.1053 |
| 0.0008 | 42.0 | 2100 | 0.4586 | 0.6771 | 0.9290 | 0.1053 |
| 0.0007 | 43.0 | 2150 | 0.4737 | 0.6903 | 0.9208 | 0.1228 |
| 0.0006 | 44.0 | 2200 | 0.4784 | 0.6812 | 0.9251 | 0.1053 |
| 0.0006 | 45.0 | 2250 | 0.4752 | 0.7063 | 0.9188 | 0.1404 |
| 0.0006 | 46.0 | 2300 | 0.4852 | 0.6938 | 0.9261 | 0.1053 |
| 0.0005 | 47.0 | 2350 | 0.4978 | 0.6881 | 0.9276 | 0.1053 |
| 0.0005 | 48.0 | 2400 | 0.5036 | 0.6664 | 0.9243 | 0.0877 |
| 0.0005 | 49.0 | 2450 | 0.5029 | 0.6782 | 0.9241 | 0.0877 |
| 0.0004 | 50.0 | 2500 | 0.5160 | 0.6713 | 0.9268 | 0.0877 |
| 0.0004 | 51.0 | 2550 | 0.5217 | 0.6789 | 0.9253 | 0.1053 |
| 0.0004 | 52.0 | 2600 | 0.5203 | 0.6842 | 0.9254 | 0.1228 |
| 0.0003 | 53.0 | 2650 | 0.5242 | 0.6773 | 0.9197 | 0.1228 |
| 0.0003 | 54.0 | 2700 | 0.5248 | 0.6887 | 0.9261 | 0.1053 |
| 0.0003 | 55.0 | 2750 | 0.5309 | 0.6796 | 0.9256 | 0.1053 |
| 0.0003 | 56.0 | 2800 | 0.5356 | 0.6827 | 0.9251 | 0.1228 |
| 0.0003 | 57.0 | 2850 | 0.5360 | 0.6693 | 0.9234 | 0.1053 |
| 0.0003 | 58.0 | 2900 | 0.5420 | 0.6866 | 0.9272 | 0.1053 |
| 0.0003 | 59.0 | 2950 | 0.5517 | 0.6793 | 0.9245 | 0.1053 |
| 0.0002 | 60.0 | 3000 | 0.5482 | 0.6855 | 0.9249 | 0.0877 |
| 0.0002 | 61.0 | 3050 | 0.5514 | 0.6798 | 0.9239 | 0.1053 |
| 0.0002 | 62.0 | 3100 | 0.5580 | 0.6824 | 0.9240 | 0.1053 |
| 0.0002 | 63.0 | 3150 | 0.5566 | 0.6821 | 0.9258 | 0.1053 |
| 0.0002 | 64.0 | 3200 | 0.5582 | 0.6776 | 0.9253 | 0.1053 |
| 0.0002 | 65.0 | 3250 | 0.5574 | 0.6816 | 0.9264 | 0.1053 |
| 0.0002 | 66.0 | 3300 | 0.5607 | 0.6767 | 0.9251 | 0.1053 |
| 0.0002 | 67.0 | 3350 | 0.5523 | 0.6851 | 0.9244 | 0.1053 |
| 0.0002 | 68.0 | 3400 | 0.5572 | 0.6804 | 0.9255 | 0.1053 |
| 0.0002 | 69.0 | 3450 | 0.5537 | 0.6828 | 0.9252 | 0.1053 |
| 0.0002 | 70.0 | 3500 | 0.5549 | 0.6828 | 0.9255 | 0.1053 |
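
With 50 optimizer steps per epoch, the validation loss bottoms out around epoch 5 and climbs steadily afterwards while F1 plateaus near 0.68, so an earlier checkpoint may generalize better than the final one. A hedged sketch of a compute_metrics function that would produce this set of metrics under the multi-label assumption above (the 0.5 sigmoid threshold and micro-averaged F1/ROC AUC are assumptions):

```python
# Hypothetical compute_metrics for a multi-label Trainer run; the
# sigmoid threshold (0.5) and micro averaging are assumptions.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, accuracy_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = 1 / (1 + np.exp(-logits))      # sigmoid over raw logits
    preds = (probs >= 0.5).astype(int)
    return {
        "f1": f1_score(labels, preds, average="micro"),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        # Exact-match (subset) accuracy: every label of an example must match.
        "accuracy": accuracy_score(labels, preds),
    }
```

Under this reading, accuracy is exact-match (subset) accuracy, which would explain why it stays near 0.11 while the per-label metrics are much higher.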

Framework versions

  • Transformers 4.38.1
  • Pytorch 2.1.2
  • Datasets 2.1.0
  • Tokenizers 0.15.2