
best_model-yelp_polarity-16-87

This model is a fine-tuned version of albert-base-v2. The auto-generated card does not record the training dataset, but the model name indicates a (small) subset of yelp_polarity. It achieves the following results on the evaluation set:

  • Loss: 0.0012
  • Accuracy: 1.0

Note that the accuracies in the results table below move in increments of 1/32, which suggests an evaluation set of roughly 32 examples; a perfect score on a set that small should be read cautiously.

Model description

More information needed

Intended uses & limitations

More information needed
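
Although the card leaves intended uses undocumented, the model is by construction a binary sentiment classifier (ALBERT fine-tuned for Yelp review polarity). A minimal inference sketch follows; the repo id is taken from the model page, and the label mapping (which label id means positive) is an assumption, since the card does not document it.

```python
from transformers import pipeline

# Minimal sketch; the label names/mapping are not documented on the card,
# so inspect the output rather than assuming LABEL_1 means positive.
classifier = pipeline(
    "text-classification",
    model="simonycl/best_model-yelp_polarity-16-87",
)
print(classifier("The food was amazing and the service was quick."))
# e.g. [{'label': 'LABEL_1', 'score': 0.99...}]
```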

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged Trainer sketch reproducing them follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 150
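
For reference, the hyperparameters above map onto a Trainer setup roughly like the sketch below. Only the hyperparameter values come from the card; the model/tokenizer setup, label count, and dataset handling are assumptions, since the training script is not included.

```python
# Hedged reconstruction: only the hyperparameter values are documented on the
# card; the model/tokenizer setup and the dataset handling are assumptions.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "albert-base-v2", num_labels=2  # binary sentiment (yelp polarity) assumed
)
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")

args = TrainingArguments(
    output_dir="best_model-yelp_polarity-16-87",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=150,
    adam_beta1=0.9,               # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,            # and epsilon=1e-08
    evaluation_strategy="epoch",  # the results table reports per-epoch eval
)

# The train/eval datasets are not documented, so they are left elided here:
# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=..., eval_dataset=...)
# trainer.train()
```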

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.3437 | 0.875 |
| No log | 2.0 | 2 | 0.3444 | 0.875 |
| No log | 3.0 | 3 | 0.3459 | 0.875 |
| No log | 4.0 | 4 | 0.3481 | 0.875 |
| No log | 5.0 | 5 | 0.3509 | 0.875 |
| No log | 6.0 | 6 | 0.3542 | 0.875 |
| No log | 7.0 | 7 | 0.3577 | 0.875 |
| No log | 8.0 | 8 | 0.3605 | 0.875 |
| No log | 9.0 | 9 | 0.3609 | 0.875 |
| 1.0043 | 10.0 | 10 | 0.3571 | 0.875 |
| 1.0043 | 11.0 | 11 | 0.3490 | 0.875 |
| 1.0043 | 12.0 | 12 | 0.3367 | 0.875 |
| 1.0043 | 13.0 | 13 | 0.3202 | 0.875 |
| 1.0043 | 14.0 | 14 | 0.2996 | 0.875 |
| 1.0043 | 15.0 | 15 | 0.2751 | 0.875 |
| 1.0043 | 16.0 | 16 | 0.2470 | 0.9375 |
| 1.0043 | 17.0 | 17 | 0.2159 | 0.9375 |
| 1.0043 | 18.0 | 18 | 0.1832 | 0.9375 |
| 1.0043 | 19.0 | 19 | 0.1516 | 0.9375 |
| 0.6554 | 20.0 | 20 | 0.1241 | 0.9688 |
| 0.6554 | 21.0 | 21 | 0.1018 | 0.9688 |
| 0.6554 | 22.0 | 22 | 0.0818 | 0.9688 |
| 0.6554 | 23.0 | 23 | 0.0611 | 0.9688 |
| 0.6554 | 24.0 | 24 | 0.0378 | 0.9688 |
| 0.6554 | 25.0 | 25 | 0.0170 | 1.0 |
| 0.6554 | 26.0 | 26 | 0.0093 | 1.0 |
| 0.6554 | 27.0 | 27 | 0.0077 | 1.0 |
| 0.6554 | 28.0 | 28 | 0.0073 | 1.0 |
| 0.6554 | 29.0 | 29 | 0.0072 | 1.0 |
| 0.1962 | 30.0 | 30 | 0.0072 | 1.0 |
| 0.1962 | 31.0 | 31 | 0.0071 | 1.0 |
| 0.1962 | 32.0 | 32 | 0.0070 | 1.0 |
| 0.1962 | 33.0 | 33 | 0.0069 | 1.0 |
| 0.1962 | 34.0 | 34 | 0.0068 | 1.0 |
| 0.1962 | 35.0 | 35 | 0.0067 | 1.0 |
| 0.1962 | 36.0 | 36 | 0.0065 | 1.0 |
| 0.1962 | 37.0 | 37 | 0.0063 | 1.0 |
| 0.1962 | 38.0 | 38 | 0.0060 | 1.0 |
| 0.1962 | 39.0 | 39 | 0.0058 | 1.0 |
| 0.0075 | 40.0 | 40 | 0.0056 | 1.0 |
| 0.0075 | 41.0 | 41 | 0.0053 | 1.0 |
| 0.0075 | 42.0 | 42 | 0.0051 | 1.0 |
| 0.0075 | 43.0 | 43 | 0.0050 | 1.0 |
| 0.0075 | 44.0 | 44 | 0.0048 | 1.0 |
| 0.0075 | 45.0 | 45 | 0.0046 | 1.0 |
| 0.0075 | 46.0 | 46 | 0.0045 | 1.0 |
| 0.0075 | 47.0 | 47 | 0.0043 | 1.0 |
| 0.0075 | 48.0 | 48 | 0.0042 | 1.0 |
| 0.0075 | 49.0 | 49 | 0.0041 | 1.0 |
| 0.0019 | 50.0 | 50 | 0.0040 | 1.0 |
| 0.0019 | 51.0 | 51 | 0.0039 | 1.0 |
| 0.0019 | 52.0 | 52 | 0.0038 | 1.0 |
| 0.0019 | 53.0 | 53 | 0.0037 | 1.0 |
| 0.0019 | 54.0 | 54 | 0.0036 | 1.0 |
| 0.0019 | 55.0 | 55 | 0.0035 | 1.0 |
| 0.0019 | 56.0 | 56 | 0.0035 | 1.0 |
| 0.0019 | 57.0 | 57 | 0.0034 | 1.0 |
| 0.0019 | 58.0 | 58 | 0.0033 | 1.0 |
| 0.0019 | 59.0 | 59 | 0.0033 | 1.0 |
| 0.0014 | 60.0 | 60 | 0.0032 | 1.0 |
| 0.0014 | 61.0 | 61 | 0.0032 | 1.0 |
| 0.0014 | 62.0 | 62 | 0.0031 | 1.0 |
| 0.0014 | 63.0 | 63 | 0.0031 | 1.0 |
| 0.0014 | 64.0 | 64 | 0.0030 | 1.0 |
| 0.0014 | 65.0 | 65 | 0.0030 | 1.0 |
| 0.0014 | 66.0 | 66 | 0.0029 | 1.0 |
| 0.0014 | 67.0 | 67 | 0.0029 | 1.0 |
| 0.0014 | 68.0 | 68 | 0.0029 | 1.0 |
| 0.0014 | 69.0 | 69 | 0.0028 | 1.0 |
| 0.0011 | 70.0 | 70 | 0.0028 | 1.0 |
| 0.0011 | 71.0 | 71 | 0.0028 | 1.0 |
| 0.0011 | 72.0 | 72 | 0.0027 | 1.0 |
| 0.0011 | 73.0 | 73 | 0.0027 | 1.0 |
| 0.0011 | 74.0 | 74 | 0.0027 | 1.0 |
| 0.0011 | 75.0 | 75 | 0.0026 | 1.0 |
| 0.0011 | 76.0 | 76 | 0.0026 | 1.0 |
| 0.0011 | 77.0 | 77 | 0.0026 | 1.0 |
| 0.0011 | 78.0 | 78 | 0.0026 | 1.0 |
| 0.0011 | 79.0 | 79 | 0.0025 | 1.0 |
| 0.0009 | 80.0 | 80 | 0.0025 | 1.0 |
| 0.0009 | 81.0 | 81 | 0.0025 | 1.0 |
| 0.0009 | 82.0 | 82 | 0.0024 | 1.0 |
| 0.0009 | 83.0 | 83 | 0.0024 | 1.0 |
| 0.0009 | 84.0 | 84 | 0.0024 | 1.0 |
| 0.0009 | 85.0 | 85 | 0.0023 | 1.0 |
| 0.0009 | 86.0 | 86 | 0.0023 | 1.0 |
| 0.0009 | 87.0 | 87 | 0.0023 | 1.0 |
| 0.0009 | 88.0 | 88 | 0.0022 | 1.0 |
| 0.0009 | 89.0 | 89 | 0.0022 | 1.0 |
| 0.0008 | 90.0 | 90 | 0.0022 | 1.0 |
| 0.0008 | 91.0 | 91 | 0.0021 | 1.0 |
| 0.0008 | 92.0 | 92 | 0.0021 | 1.0 |
| 0.0008 | 93.0 | 93 | 0.0021 | 1.0 |
| 0.0008 | 94.0 | 94 | 0.0020 | 1.0 |
| 0.0008 | 95.0 | 95 | 0.0020 | 1.0 |
| 0.0008 | 96.0 | 96 | 0.0020 | 1.0 |
| 0.0008 | 97.0 | 97 | 0.0019 | 1.0 |
| 0.0008 | 98.0 | 98 | 0.0019 | 1.0 |
| 0.0008 | 99.0 | 99 | 0.0019 | 1.0 |
| 0.0007 | 100.0 | 100 | 0.0019 | 1.0 |
| 0.0007 | 101.0 | 101 | 0.0018 | 1.0 |
| 0.0007 | 102.0 | 102 | 0.0018 | 1.0 |
| 0.0007 | 103.0 | 103 | 0.0018 | 1.0 |
| 0.0007 | 104.0 | 104 | 0.0018 | 1.0 |
| 0.0007 | 105.0 | 105 | 0.0018 | 1.0 |
| 0.0007 | 106.0 | 106 | 0.0017 | 1.0 |
| 0.0007 | 107.0 | 107 | 0.0017 | 1.0 |
| 0.0007 | 108.0 | 108 | 0.0017 | 1.0 |
| 0.0007 | 109.0 | 109 | 0.0017 | 1.0 |
| 0.0006 | 110.0 | 110 | 0.0017 | 1.0 |
| 0.0006 | 111.0 | 111 | 0.0016 | 1.0 |
| 0.0006 | 112.0 | 112 | 0.0016 | 1.0 |
| 0.0006 | 113.0 | 113 | 0.0016 | 1.0 |
| 0.0006 | 114.0 | 114 | 0.0016 | 1.0 |
| 0.0006 | 115.0 | 115 | 0.0016 | 1.0 |
| 0.0006 | 116.0 | 116 | 0.0016 | 1.0 |
| 0.0006 | 117.0 | 117 | 0.0015 | 1.0 |
| 0.0006 | 118.0 | 118 | 0.0015 | 1.0 |
| 0.0006 | 119.0 | 119 | 0.0015 | 1.0 |
| 0.0005 | 120.0 | 120 | 0.0015 | 1.0 |
| 0.0005 | 121.0 | 121 | 0.0015 | 1.0 |
| 0.0005 | 122.0 | 122 | 0.0015 | 1.0 |
| 0.0005 | 123.0 | 123 | 0.0015 | 1.0 |
| 0.0005 | 124.0 | 124 | 0.0015 | 1.0 |
| 0.0005 | 125.0 | 125 | 0.0014 | 1.0 |
| 0.0005 | 126.0 | 126 | 0.0014 | 1.0 |
| 0.0005 | 127.0 | 127 | 0.0014 | 1.0 |
| 0.0005 | 128.0 | 128 | 0.0014 | 1.0 |
| 0.0005 | 129.0 | 129 | 0.0014 | 1.0 |
| 0.0005 | 130.0 | 130 | 0.0014 | 1.0 |
| 0.0005 | 131.0 | 131 | 0.0014 | 1.0 |
| 0.0005 | 132.0 | 132 | 0.0014 | 1.0 |
| 0.0005 | 133.0 | 133 | 0.0014 | 1.0 |
| 0.0005 | 134.0 | 134 | 0.0014 | 1.0 |
| 0.0005 | 135.0 | 135 | 0.0013 | 1.0 |
| 0.0005 | 136.0 | 136 | 0.0013 | 1.0 |
| 0.0005 | 137.0 | 137 | 0.0013 | 1.0 |
| 0.0005 | 138.0 | 138 | 0.0013 | 1.0 |
| 0.0005 | 139.0 | 139 | 0.0013 | 1.0 |
| 0.0004 | 140.0 | 140 | 0.0013 | 1.0 |
| 0.0004 | 141.0 | 141 | 0.0013 | 1.0 |
| 0.0004 | 142.0 | 142 | 0.0013 | 1.0 |
| 0.0004 | 143.0 | 143 | 0.0013 | 1.0 |
| 0.0004 | 144.0 | 144 | 0.0013 | 1.0 |
| 0.0004 | 145.0 | 145 | 0.0013 | 1.0 |
| 0.0004 | 146.0 | 146 | 0.0013 | 1.0 |
| 0.0004 | 147.0 | 147 | 0.0013 | 1.0 |
| 0.0004 | 148.0 | 148 | 0.0012 | 1.0 |
| 0.0004 | 149.0 | 149 | 0.0012 | 1.0 |
| 0.0004 | 150.0 | 150 | 0.0012 | 1.0 |

Framework versions

  • Transformers 4.32.0.dev0
  • Pytorch 2.0.1+cu118
  • Datasets 2.4.0
  • Tokenizers 0.13.3
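
An illustrative way to check that a local environment roughly matches these versions (exact dev builds aside) is:

```python
# Sanity-check installed versions against the card.
# 4.32.0.dev0 was a development build; a nearby stable release should be close.
import datasets
import tokenizers
import torch
import transformers

print("transformers:", transformers.__version__)  # card: 4.32.0.dev0
print("torch:", torch.__version__)                # card: 2.0.1+cu118
print("datasets:", datasets.__version__)          # card: 2.4.0
print("tokenizers:", tokenizers.__version__)      # card: 0.13.3
```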