
best_model-yelp_polarity-16-100

This model is a fine-tuned version of albert-base-v2. The card's metadata does not record the training dataset, though the model name points to yelp_polarity. It achieves the following results on the evaluation set:

  • Loss: 1.1862
  • Accuracy: 0.8125

Model description

More information needed

Intended uses & limitations

More information needed
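
The card leaves intended uses unstated. Judging by the base model and the model name, this is a binary sentiment classifier for English review text. A minimal inference sketch, assuming the hosted repo id from this page (the label names are whatever the checkpoint's config sets, LABEL_0/LABEL_1 by default):

```python
# Minimal sketch, not documented usage: assumes the hosted repo id and a
# standard two-label sequence-classification head.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="simonycl/best_model-yelp_polarity-16-100",
)

print(classifier("The food was great and the staff were friendly."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- label names depend on the
# checkpoint's config and may be the uninformative defaults.
```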

Training and evaluation data

More information needed
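
The data is undocumented, but the training log below offers hints: each epoch is a single optimization step at batch size 32 (about 32 training examples, i.e. 16 per class, matching the "16" in the model name), and the reported accuracies are all multiples of 1/32, consistent with a 32-example evaluation set. A purely hypothetical reconstruction under those guesses (dataset, split sizes, and shuffle seed are assumptions, not facts from the card):

```python
# Hypothetical reconstruction -- nothing here is documented in the card.
# Guesses: yelp_polarity (from the model name), 32 train / 32 eval
# examples, and the training seed (42) reused for shuffling.
from datasets import load_dataset

dataset = load_dataset("yelp_polarity")
train_subset = dataset["train"].shuffle(seed=42).select(range(32))  # ~16 per class
eval_subset = dataset["test"].shuffle(seed=42).select(range(32))
```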

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 150
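
These map directly onto transformers' TrainingArguments. A sketch for reproduction, where only the listed values come from the card; the output directory, evaluation cadence, and dataset wiring are assumptions:

```python
# Sketch reproducing the hyperparameters above. Adam's betas/epsilon match
# the transformers defaults, so they need no explicit arguments.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "albert-base-v2", num_labels=2
)
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")

args = TrainingArguments(
    output_dir="best_model-yelp_polarity-16-100",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=150,
    evaluation_strategy="epoch",  # assumed: the log below is per-epoch
)

# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=..., eval_dataset=...)
# trainer.train()
```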

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.0619 | 0.8438 |
| No log | 2.0 | 2 | 1.0610 | 0.8438 |
| No log | 3.0 | 3 | 1.0591 | 0.8438 |
| No log | 4.0 | 4 | 1.0563 | 0.8438 |
| No log | 5.0 | 5 | 1.0524 | 0.8438 |
| No log | 6.0 | 6 | 1.0473 | 0.8438 |
| No log | 7.0 | 7 | 1.0408 | 0.8438 |
| No log | 8.0 | 8 | 1.0325 | 0.8438 |
| No log | 9.0 | 9 | 1.0221 | 0.8438 |
| 0.5215 | 10.0 | 10 | 1.0093 | 0.8438 |
| 0.5215 | 11.0 | 11 | 0.9939 | 0.8438 |
| 0.5215 | 12.0 | 12 | 0.9775 | 0.8438 |
| 0.5215 | 13.0 | 13 | 0.9630 | 0.8438 |
| 0.5215 | 14.0 | 14 | 0.9517 | 0.8438 |
| 0.5215 | 15.0 | 15 | 0.9431 | 0.8125 |
| 0.5215 | 16.0 | 16 | 0.9352 | 0.7812 |
| 0.5215 | 17.0 | 17 | 0.9263 | 0.7812 |
| 0.5215 | 18.0 | 18 | 0.9195 | 0.7812 |
| 0.5215 | 19.0 | 19 | 0.9178 | 0.7812 |
| 0.3945 | 20.0 | 20 | 0.9230 | 0.8125 |
| 0.3945 | 21.0 | 21 | 0.9374 | 0.8125 |
| 0.3945 | 22.0 | 22 | 0.9628 | 0.8125 |
| 0.3945 | 23.0 | 23 | 1.0035 | 0.8438 |
| 0.3945 | 24.0 | 24 | 1.0608 | 0.8125 |
| 0.3945 | 25.0 | 25 | 1.1258 | 0.8125 |
| 0.3945 | 26.0 | 26 | 1.1859 | 0.8125 |
| 0.3945 | 27.0 | 27 | 1.2311 | 0.8125 |
| 0.3945 | 28.0 | 28 | 1.2580 | 0.8125 |
| 0.3945 | 29.0 | 29 | 1.2702 | 0.8125 |
| 0.2334 | 30.0 | 30 | 1.2750 | 0.8125 |
| 0.2334 | 31.0 | 31 | 1.2763 | 0.8125 |
| 0.2334 | 32.0 | 32 | 1.2763 | 0.8125 |
| 0.2334 | 33.0 | 33 | 1.2757 | 0.8125 |
| 0.2334 | 34.0 | 34 | 1.2733 | 0.8125 |
| 0.2334 | 35.0 | 35 | 1.2687 | 0.8125 |
| 0.2334 | 36.0 | 36 | 1.2612 | 0.8125 |
| 0.2334 | 37.0 | 37 | 1.2508 | 0.8125 |
| 0.2334 | 38.0 | 38 | 1.2376 | 0.8125 |
| 0.2334 | 39.0 | 39 | 1.2213 | 0.8125 |
| 0.024 | 40.0 | 40 | 1.2024 | 0.8125 |
| 0.024 | 41.0 | 41 | 1.1803 | 0.8125 |
| 0.024 | 42.0 | 42 | 1.1548 | 0.8125 |
| 0.024 | 43.0 | 43 | 1.1254 | 0.8125 |
| 0.024 | 44.0 | 44 | 1.0929 | 0.8125 |
| 0.024 | 45.0 | 45 | 1.0591 | 0.8125 |
| 0.024 | 46.0 | 46 | 1.0257 | 0.8125 |
| 0.024 | 47.0 | 47 | 0.9942 | 0.8125 |
| 0.024 | 48.0 | 48 | 0.9662 | 0.8125 |
| 0.024 | 49.0 | 49 | 0.9436 | 0.8125 |
| 0.0008 | 50.0 | 50 | 0.9266 | 0.8125 |
| 0.0008 | 51.0 | 51 | 0.9148 | 0.8125 |
| 0.0008 | 52.0 | 52 | 0.9073 | 0.8125 |
| 0.0008 | 53.0 | 53 | 0.9039 | 0.8125 |
| 0.0008 | 54.0 | 54 | 0.9049 | 0.8125 |
| 0.0008 | 55.0 | 55 | 0.9087 | 0.8125 |
| 0.0008 | 56.0 | 56 | 0.9152 | 0.8125 |
| 0.0008 | 57.0 | 57 | 0.9238 | 0.8125 |
| 0.0008 | 58.0 | 58 | 0.9340 | 0.8125 |
| 0.0008 | 59.0 | 59 | 0.9450 | 0.8125 |
| 0.0006 | 60.0 | 60 | 0.9566 | 0.8438 |
| 0.0006 | 61.0 | 61 | 0.9682 | 0.8438 |
| 0.0006 | 62.0 | 62 | 0.9797 | 0.8438 |
| 0.0006 | 63.0 | 63 | 0.9912 | 0.8438 |
| 0.0006 | 64.0 | 64 | 1.0028 | 0.8438 |
| 0.0006 | 65.0 | 65 | 1.0141 | 0.8438 |
| 0.0006 | 66.0 | 66 | 1.0251 | 0.8438 |
| 0.0006 | 67.0 | 67 | 1.0358 | 0.8438 |
| 0.0006 | 68.0 | 68 | 1.0460 | 0.8438 |
| 0.0006 | 69.0 | 69 | 1.0558 | 0.8438 |
| 0.0005 | 70.0 | 70 | 1.0646 | 0.8438 |
| 0.0005 | 71.0 | 71 | 1.0730 | 0.8438 |
| 0.0005 | 72.0 | 72 | 1.0808 | 0.8438 |
| 0.0005 | 73.0 | 73 | 1.0882 | 0.8438 |
| 0.0005 | 74.0 | 74 | 1.0951 | 0.8438 |
| 0.0005 | 75.0 | 75 | 1.1013 | 0.8125 |
| 0.0005 | 76.0 | 76 | 1.1070 | 0.8125 |
| 0.0005 | 77.0 | 77 | 1.1122 | 0.8125 |
| 0.0005 | 78.0 | 78 | 1.1170 | 0.8125 |
| 0.0005 | 79.0 | 79 | 1.1214 | 0.8125 |
| 0.0004 | 80.0 | 80 | 1.1255 | 0.8125 |
| 0.0004 | 81.0 | 81 | 1.1292 | 0.8125 |
| 0.0004 | 82.0 | 82 | 1.1324 | 0.8125 |
| 0.0004 | 83.0 | 83 | 1.1354 | 0.8125 |
| 0.0004 | 84.0 | 84 | 1.1383 | 0.8125 |
| 0.0004 | 85.0 | 85 | 1.1411 | 0.8125 |
| 0.0004 | 86.0 | 86 | 1.1437 | 0.8125 |
| 0.0004 | 87.0 | 87 | 1.1462 | 0.8125 |
| 0.0004 | 88.0 | 88 | 1.1484 | 0.8125 |
| 0.0004 | 89.0 | 89 | 1.1506 | 0.8125 |
| 0.0004 | 90.0 | 90 | 1.1527 | 0.8125 |
| 0.0004 | 91.0 | 91 | 1.1546 | 0.8125 |
| 0.0004 | 92.0 | 92 | 1.1563 | 0.8125 |
| 0.0004 | 93.0 | 93 | 1.1579 | 0.8125 |
| 0.0004 | 94.0 | 94 | 1.1596 | 0.8125 |
| 0.0004 | 95.0 | 95 | 1.1611 | 0.8125 |
| 0.0004 | 96.0 | 96 | 1.1624 | 0.8125 |
| 0.0004 | 97.0 | 97 | 1.1636 | 0.8125 |
| 0.0004 | 98.0 | 98 | 1.1648 | 0.8125 |
| 0.0004 | 99.0 | 99 | 1.1658 | 0.8125 |
| 0.0003 | 100.0 | 100 | 1.1668 | 0.8125 |
| 0.0003 | 101.0 | 101 | 1.1678 | 0.8125 |
| 0.0003 | 102.0 | 102 | 1.1689 | 0.8125 |
| 0.0003 | 103.0 | 103 | 1.1697 | 0.8125 |
| 0.0003 | 104.0 | 104 | 1.1706 | 0.8125 |
| 0.0003 | 105.0 | 105 | 1.1715 | 0.8125 |
| 0.0003 | 106.0 | 106 | 1.1722 | 0.8125 |
| 0.0003 | 107.0 | 107 | 1.1728 | 0.8125 |
| 0.0003 | 108.0 | 108 | 1.1734 | 0.8125 |
| 0.0003 | 109.0 | 109 | 1.1739 | 0.8125 |
| 0.0003 | 110.0 | 110 | 1.1745 | 0.8125 |
| 0.0003 | 111.0 | 111 | 1.1749 | 0.8125 |
| 0.0003 | 112.0 | 112 | 1.1754 | 0.8125 |
| 0.0003 | 113.0 | 113 | 1.1759 | 0.8125 |
| 0.0003 | 114.0 | 114 | 1.1764 | 0.8125 |
| 0.0003 | 115.0 | 115 | 1.1768 | 0.8125 |
| 0.0003 | 116.0 | 116 | 1.1772 | 0.8125 |
| 0.0003 | 117.0 | 117 | 1.1774 | 0.8125 |
| 0.0003 | 118.0 | 118 | 1.1776 | 0.8125 |
| 0.0003 | 119.0 | 119 | 1.1776 | 0.8125 |
| 0.0003 | 120.0 | 120 | 1.1778 | 0.8125 |
| 0.0003 | 121.0 | 121 | 1.1780 | 0.8125 |
| 0.0003 | 122.0 | 122 | 1.1781 | 0.8125 |
| 0.0003 | 123.0 | 123 | 1.1783 | 0.8125 |
| 0.0003 | 124.0 | 124 | 1.1784 | 0.8125 |
| 0.0003 | 125.0 | 125 | 1.1787 | 0.8125 |
| 0.0003 | 126.0 | 126 | 1.1790 | 0.8125 |
| 0.0003 | 127.0 | 127 | 1.1794 | 0.8125 |
| 0.0003 | 128.0 | 128 | 1.1797 | 0.8125 |
| 0.0003 | 129.0 | 129 | 1.1800 | 0.8125 |
| 0.0003 | 130.0 | 130 | 1.1803 | 0.8125 |
| 0.0003 | 131.0 | 131 | 1.1807 | 0.8125 |
| 0.0003 | 132.0 | 132 | 1.1809 | 0.8125 |
| 0.0003 | 133.0 | 133 | 1.1812 | 0.8125 |
| 0.0003 | 134.0 | 134 | 1.1815 | 0.8125 |
| 0.0003 | 135.0 | 135 | 1.1818 | 0.8125 |
| 0.0003 | 136.0 | 136 | 1.1823 | 0.8125 |
| 0.0003 | 137.0 | 137 | 1.1828 | 0.8125 |
| 0.0003 | 138.0 | 138 | 1.1832 | 0.8125 |
| 0.0003 | 139.0 | 139 | 1.1835 | 0.8125 |
| 0.0002 | 140.0 | 140 | 1.1837 | 0.8125 |
| 0.0002 | 141.0 | 141 | 1.1838 | 0.8125 |
| 0.0002 | 142.0 | 142 | 1.1840 | 0.8125 |
| 0.0002 | 143.0 | 143 | 1.1841 | 0.8125 |
| 0.0002 | 144.0 | 144 | 1.1844 | 0.8125 |
| 0.0002 | 145.0 | 145 | 1.1845 | 0.8125 |
| 0.0002 | 146.0 | 146 | 1.1848 | 0.8125 |
| 0.0002 | 147.0 | 147 | 1.1851 | 0.8125 |
| 0.0002 | 148.0 | 148 | 1.1855 | 0.8125 |
| 0.0002 | 149.0 | 149 | 1.1859 | 0.8125 |
| 0.0002 | 150.0 | 150 | 1.1862 | 0.8125 |
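
Validation loss bottoms out well before training ends (0.9178 at epoch 19, 0.9039 at epoch 53) and then climbs back to the final 1.1862 reported above, a typical overfitting curve for a training set this small. When retraining, keeping the best checkpoint rather than the last is one remedy; a hedged sketch using the library's built-in callback (the patience value is arbitrary):

```python
# Sketch: retrain keeping the checkpoint with the lowest validation loss.
# The patience value is arbitrary; other arguments repeat the sketch above.
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="best_model-yelp_polarity-16-100",  # assumed
    evaluation_strategy="epoch",
    save_strategy="epoch",             # must match evaluation_strategy
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    warmup_steps=500,
    num_train_epochs=150,
    seed=42,
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#                   callbacks=[EarlyStoppingCallback(early_stopping_patience=10)])
```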

Framework versions

  • Transformers 4.32.0.dev0
  • PyTorch 2.0.1+cu118
  • Datasets 2.4.0
  • Tokenizers 0.13.3