
bert_uncased_L-2_H-128_A-2-finetuned-parsed

This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 4.2883
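The card does not state the training objective, but the per-epoch validation loss suggests a language-modeling-style task. As a minimal sketch, the checkpoint can be loaded from the Hub under the repository id muhtasham/bert-tiny-finetuned-parsed; note that AutoModelForMaskedLM is an assumption about the task head and may need to be swapped for the actual task class:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Repository id as listed on the Hub. AutoModelForMaskedLM is an assumption:
# the card does not state which head the model was fine-tuned with.
model_id = "muhtasham/bert-tiny-finetuned-parsed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
outputs = model(**inputs)  # logits over the vocabulary for each token position
```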

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 200
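
As a sketch of how these settings map onto the transformers Trainer API (an assumption; the actual training script is not included in this card, and the output directory name below is hypothetical):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_uncased_L-2_H-128_A-2-finetuned-parsed",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=200,
    lr_scheduler_type="linear",
    # The card says "Adam"; the Trainer's default optimizer at this
    # Transformers version is AdamW with exactly these defaults.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # the results below are logged once per epoch
)
```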

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 59 | 4.6900 |
| No log | 2.0 | 118 | 4.6347 |
| No log | 3.0 | 177 | 4.6578 |
| No log | 4.0 | 236 | 4.5731 |
| No log | 5.0 | 295 | 4.6258 |
| No log | 6.0 | 354 | 4.6365 |
| No log | 7.0 | 413 | 4.7292 |
| No log | 8.0 | 472 | 4.4789 |
| 4.5634 | 9.0 | 531 | 4.3161 |
| 4.5634 | 10.0 | 590 | 4.6929 |
| 4.5634 | 11.0 | 649 | 4.5543 |
| 4.5634 | 12.0 | 708 | 4.3739 |
| 4.5634 | 13.0 | 767 | 4.6118 |
| 4.5634 | 14.0 | 826 | 4.4036 |
| 4.5634 | 15.0 | 885 | 4.3940 |
| 4.5634 | 16.0 | 944 | 4.5944 |
| 4.0896 | 17.0 | 1003 | 4.3630 |
| 4.0896 | 18.0 | 1062 | 4.0447 |
| 4.0896 | 19.0 | 1121 | 4.3832 |
| 4.0896 | 20.0 | 1180 | 4.0535 |
| 4.0896 | 21.0 | 1239 | 4.5213 |
| 4.0896 | 22.0 | 1298 | 4.5887 |
| 4.0896 | 23.0 | 1357 | 4.5211 |
| 4.0896 | 24.0 | 1416 | 4.1876 |
| 4.0896 | 25.0 | 1475 | 4.5861 |
| 3.9145 | 26.0 | 1534 | 4.3581 |
| 3.9145 | 27.0 | 1593 | 4.6545 |
| 3.9145 | 28.0 | 1652 | 4.4919 |
| 3.9145 | 29.0 | 1711 | 4.1109 |
| 3.9145 | 30.0 | 1770 | 4.2736 |
| 3.9145 | 31.0 | 1829 | 4.6461 |
| 3.9145 | 32.0 | 1888 | 4.3111 |
| 3.9145 | 33.0 | 1947 | 4.2909 |
| 3.8088 | 34.0 | 2006 | 4.1168 |
| 3.8088 | 35.0 | 2065 | 4.2329 |
| 3.8088 | 36.0 | 2124 | 4.5285 |
| 3.8088 | 37.0 | 2183 | 4.4841 |
| 3.8088 | 38.0 | 2242 | 4.2489 |
| 3.8088 | 39.0 | 2301 | 4.2384 |
| 3.8088 | 40.0 | 2360 | 4.3610 |
| 3.8088 | 41.0 | 2419 | 4.2758 |
| 3.8088 | 42.0 | 2478 | 4.2895 |
| 3.7034 | 43.0 | 2537 | 4.2824 |
| 3.7034 | 44.0 | 2596 | 4.4997 |
| 3.7034 | 45.0 | 2655 | 4.5091 |
| 3.7034 | 46.0 | 2714 | 4.0883 |
| 3.7034 | 47.0 | 2773 | 4.2018 |
| 3.7034 | 48.0 | 2832 | 4.3701 |
| 3.7034 | 49.0 | 2891 | 4.0764 |
| 3.7034 | 50.0 | 2950 | 4.6149 |
| 3.6455 | 51.0 | 3009 | 4.3629 |
| 3.6455 | 52.0 | 3068 | 4.2199 |
| 3.6455 | 53.0 | 3127 | 4.3543 |
| 3.6455 | 54.0 | 3186 | 4.7006 |
| 3.6455 | 55.0 | 3245 | 4.1633 |
| 3.6455 | 56.0 | 3304 | 4.5183 |
| 3.6455 | 57.0 | 3363 | 4.1918 |
| 3.6455 | 58.0 | 3422 | 4.4810 |
| 3.6455 | 59.0 | 3481 | 4.1398 |
| 3.5468 | 60.0 | 3540 | 3.9632 |
| 3.5468 | 61.0 | 3599 | 4.4640 |
| 3.5468 | 62.0 | 3658 | 4.0500 |
| 3.5468 | 63.0 | 3717 | 4.3956 |
| 3.5468 | 64.0 | 3776 | 4.3922 |
| 3.5468 | 65.0 | 3835 | 4.2513 |
| 3.5468 | 66.0 | 3894 | 4.4475 |
| 3.5468 | 67.0 | 3953 | 4.3037 |
| 3.4975 | 68.0 | 4012 | 4.1568 |
| 3.4975 | 69.0 | 4071 | 4.2253 |
| 3.4975 | 70.0 | 4130 | 4.1202 |
| 3.4975 | 71.0 | 4189 | 4.4421 |
| 3.4975 | 72.0 | 4248 | 4.3548 |
| 3.4975 | 73.0 | 4307 | 4.1671 |
| 3.4975 | 74.0 | 4366 | 4.4090 |
| 3.4975 | 75.0 | 4425 | 4.1064 |
| 3.4975 | 76.0 | 4484 | 4.2109 |
| 3.44 | 77.0 | 4543 | 4.3244 |
| 3.44 | 78.0 | 4602 | 4.1995 |
| 3.44 | 79.0 | 4661 | 4.4518 |
| 3.44 | 80.0 | 4720 | 4.1991 |
| 3.44 | 81.0 | 4779 | 4.4183 |
| 3.44 | 82.0 | 4838 | 4.2173 |
| 3.44 | 83.0 | 4897 | 4.1721 |
| 3.44 | 84.0 | 4956 | 4.1931 |
| 3.3916 | 85.0 | 5015 | 4.3280 |
| 3.3916 | 86.0 | 5074 | 4.3347 |
| 3.3916 | 87.0 | 5133 | 4.3243 |
| 3.3916 | 88.0 | 5192 | 4.2708 |
| 3.3916 | 89.0 | 5251 | 4.1580 |
| 3.3916 | 90.0 | 5310 | 4.0348 |
| 3.3916 | 91.0 | 5369 | 4.0605 |
| 3.3916 | 92.0 | 5428 | 4.2083 |
| 3.3916 | 93.0 | 5487 | 4.2378 |
| 3.3817 | 94.0 | 5546 | 4.2171 |
| 3.3817 | 95.0 | 5605 | 3.9581 |
| 3.3817 | 96.0 | 5664 | 4.1668 |
| 3.3817 | 97.0 | 5723 | 4.0394 |
| 3.3817 | 98.0 | 5782 | 4.2231 |
| 3.3817 | 99.0 | 5841 | 4.1900 |
| 3.3817 | 100.0 | 5900 | 4.3041 |
| 3.3817 | 101.0 | 5959 | 4.3827 |
| 3.3526 | 102.0 | 6018 | 4.0975 |
| 3.3526 | 103.0 | 6077 | 4.3543 |
| 3.3526 | 104.0 | 6136 | 4.2104 |
| 3.3526 | 105.0 | 6195 | 4.2408 |
| 3.3526 | 106.0 | 6254 | 4.4281 |
| 3.3526 | 107.0 | 6313 | 4.4816 |
| 3.3526 | 108.0 | 6372 | 4.1995 |
| 3.3526 | 109.0 | 6431 | 4.1844 |
| 3.3526 | 110.0 | 6490 | 4.2414 |
| 3.3035 | 111.0 | 6549 | 4.3478 |
| 3.3035 | 112.0 | 6608 | 3.9579 |
| 3.3035 | 113.0 | 6667 | 4.2558 |
| 3.3035 | 114.0 | 6726 | 4.0050 |
| 3.3035 | 115.0 | 6785 | 4.1944 |
| 3.3035 | 116.0 | 6844 | 4.0384 |
| 3.3035 | 117.0 | 6903 | 4.5749 |
| 3.3035 | 118.0 | 6962 | 4.3816 |
| 3.2884 | 119.0 | 7021 | 4.0829 |
| 3.2884 | 120.0 | 7080 | 4.1100 |
| 3.2884 | 121.0 | 7139 | 4.3181 |
| 3.2884 | 122.0 | 7198 | 4.2051 |
| 3.2884 | 123.0 | 7257 | 4.1495 |
| 3.2884 | 124.0 | 7316 | 4.2398 |
| 3.2884 | 125.0 | 7375 | 4.2553 |
| 3.2884 | 126.0 | 7434 | 4.0788 |
| 3.2884 | 127.0 | 7493 | 4.4999 |
| 3.2817 | 128.0 | 7552 | 4.4331 |
| 3.2817 | 129.0 | 7611 | 4.3983 |
| 3.2817 | 130.0 | 7670 | 4.1597 |
| 3.2817 | 131.0 | 7729 | 4.2732 |
| 3.2817 | 132.0 | 7788 | 4.1203 |
| 3.2817 | 133.0 | 7847 | 4.4417 |
| 3.2817 | 134.0 | 7906 | 4.0591 |
| 3.2817 | 135.0 | 7965 | 4.0435 |
| 3.252 | 136.0 | 8024 | 4.0461 |
| 3.252 | 137.0 | 8083 | 4.2521 |
| 3.252 | 138.0 | 8142 | 4.2749 |
| 3.252 | 139.0 | 8201 | 4.1346 |
| 3.252 | 140.0 | 8260 | 4.0411 |
| 3.252 | 141.0 | 8319 | 4.0656 |
| 3.252 | 142.0 | 8378 | 4.3978 |
| 3.252 | 143.0 | 8437 | 4.0533 |
| 3.252 | 144.0 | 8496 | 3.9734 |
| 3.217 | 145.0 | 8555 | 4.2113 |
| 3.217 | 146.0 | 8614 | 4.5480 |
| 3.217 | 147.0 | 8673 | 4.1805 |
| 3.217 | 148.0 | 8732 | 4.2144 |
| 3.217 | 149.0 | 8791 | 4.1457 |
| 3.217 | 150.0 | 8850 | 4.3311 |
| 3.217 | 151.0 | 8909 | 4.1565 |
| 3.217 | 152.0 | 8968 | 4.3584 |
| 3.2183 | 153.0 | 9027 | 4.3837 |
| 3.2183 | 154.0 | 9086 | 4.0912 |
| 3.2183 | 155.0 | 9145 | 4.0785 |
| 3.2183 | 156.0 | 9204 | 4.2501 |
| 3.2183 | 157.0 | 9263 | 4.1515 |
| 3.2183 | 158.0 | 9322 | 4.0559 |
| 3.2183 | 159.0 | 9381 | 3.9969 |
| 3.2183 | 160.0 | 9440 | 4.0528 |
| 3.2183 | 161.0 | 9499 | 3.9618 |
| 3.2109 | 162.0 | 9558 | 4.2596 |
| 3.2109 | 163.0 | 9617 | 4.0760 |
| 3.2109 | 164.0 | 9676 | 4.2589 |
| 3.2109 | 165.0 | 9735 | 4.2227 |
| 3.2109 | 166.0 | 9794 | 4.3354 |
| 3.2109 | 167.0 | 9853 | 4.3471 |
| 3.2109 | 168.0 | 9912 | 4.1578 |
| 3.2109 | 169.0 | 9971 | 4.4163 |
| 3.1868 | 170.0 | 10030 | 4.0754 |
| 3.1868 | 171.0 | 10089 | 4.2543 |
| 3.1868 | 172.0 | 10148 | 3.9498 |
| 3.1868 | 173.0 | 10207 | 4.0863 |
| 3.1868 | 174.0 | 10266 | 4.3090 |
| 3.1868 | 175.0 | 10325 | 4.2731 |
| 3.1868 | 176.0 | 10384 | 4.1997 |
| 3.1868 | 177.0 | 10443 | 4.2273 |
| 3.1905 | 178.0 | 10502 | 4.3560 |
| 3.1905 | 179.0 | 10561 | 4.3330 |
| 3.1905 | 180.0 | 10620 | 4.1770 |
| 3.1905 | 181.0 | 10679 | 3.8779 |
| 3.1905 | 182.0 | 10738 | 4.2199 |
| 3.1905 | 183.0 | 10797 | 4.1409 |
| 3.1905 | 184.0 | 10856 | 4.3601 |
| 3.1905 | 185.0 | 10915 | 4.2380 |
| 3.1905 | 186.0 | 10974 | 4.4688 |
| 3.1774 | 187.0 | 11033 | 4.2305 |
| 3.1774 | 188.0 | 11092 | 3.9129 |
| 3.1774 | 189.0 | 11151 | 4.2889 |
| 3.1774 | 190.0 | 11210 | 3.8790 |
| 3.1774 | 191.0 | 11269 | 4.4458 |
| 3.1774 | 192.0 | 11328 | 4.2899 |
| 3.1774 | 193.0 | 11387 | 4.4378 |
| 3.1774 | 194.0 | 11446 | 4.2316 |
| 3.179 | 195.0 | 11505 | 4.0360 |
| 3.179 | 196.0 | 11564 | 4.1284 |
| 3.179 | 197.0 | 11623 | 4.3879 |
| 3.179 | 198.0 | 11682 | 4.0715 |
| 3.179 | 199.0 | 11741 | 4.1888 |
| 3.179 | 200.0 | 11800 | 4.3268 |
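
The headline loss of 4.2883 reported at the top does not match any row in this table; the best per-epoch validation loss is 3.8779 at epoch 181. If the loss is a per-token cross-entropy, as in (masked) language modeling (an assumption; the objective is not stated in the card), perplexity follows by exponentiation:

```python
import math

# Assuming the reported loss is a per-token cross-entropy (the card does not
# state the training objective), perplexity is its exponential.
eval_loss = 4.2883
print(math.exp(eval_loss))  # ≈ 72.8
```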

Framework versions

  • Transformers 4.21.1
  • Pytorch 1.12.0+cu113
  • Datasets 2.4.0
  • Tokenizers 0.12.1
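
To recreate a compatible environment, the listed versions can be pinned, for example in a requirements.txt (the +cu113 suffix above indicates a CUDA 11.3 build of PyTorch, which requires the matching PyTorch package index when installing):

```text
transformers==4.21.1
torch==1.12.0
datasets==2.4.0
tokenizers==0.12.1
```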