andikazf15 committed on
Commit 398ade6
1 Parent(s): 3b5df4f

End of training

README.md CHANGED
@@ -1,7 +1,7 @@
  ---
  library_name: transformers
  license: mit
- base_model: indobenchmark/indobert-large-p1
+ base_model: indobenchmark/indobert-base-p2
  tags:
  - generated_from_trainer
  metrics:
@@ -17,11 +17,11 @@ should probably proofread and complete it, then remove this comment. -->
 
  # indobert-smsa_doc-finetuned
 
- This model is a fine-tuned version of [indobenchmark/indobert-large-p1](https://huggingface.co/indobenchmark/indobert-large-p1) on the None dataset.
+ This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.3324
- - Precision: 0.9413
- - F1: 0.9413
+ - Loss: 0.7818
+ - Precision: 0.9405
+ - F1: 0.9405
 
  ## Model description
 
@@ -41,8 +41,8 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 64
- - eval_batch_size: 64
+ - train_batch_size: 32
+ - eval_batch_size: 32
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
@@ -52,9 +52,9 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Precision | F1 |
  |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
- | 0.0285 | 1.0 | 1238 | 0.2476 | 0.9357 | 0.9357 |
- | 0.0076 | 2.0 | 2476 | 0.2532 | 0.9397 | 0.9397 |
- | 0.001 | 3.0 | 3714 | 0.3324 | 0.9413 | 0.9413 |
+ | 0.2174 | 1.0 | 2481 | 0.4282 | 0.9230 | 0.9230 |
+ | 0.0955 | 2.0 | 4962 | 0.7811 | 0.9405 | 0.9405 |
+ | 0.0435 | 3.0 | 7443 | 0.7818 | 0.9405 | 0.9405 |
 
 
  ### Framework versions
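The card above lists hyperparameters and results but no code. The sketch below shows how a comparable setup could be reproduced with `transformers` and how the resulting checkpoint could be loaded; the repo id `andikazf15/indobert-smsa_doc-finetuned`, the 3-epoch count (read off the results table), the output directory, and the sequence-classification head are assumptions inferred from this commit, not confirmed by the card.

```python
# Hypothetical reconstruction of the training configuration, not the author's
# actual script: hyperparameter values are taken from the README diff above,
# while output_dir and num_train_epochs are assumptions.
from transformers import TrainingArguments, pipeline

training_args = TrainingArguments(
    output_dir="indobert-smsa_doc-finetuned",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=3,            # three epochs appear in the results table
    lr_scheduler_type="linear",
    seed=42,
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults,
# matching the values listed in the card.

# Hypothetical inference sketch, assuming the checkpoint is published as
# "andikazf15/indobert-smsa_doc-finetuned" and carries a text-classification head.
classifier = pipeline(
    "text-classification",
    model="andikazf15/indobert-smsa_doc-finetuned",  # assumed repo id
)
print(classifier("Pelayanan di restoran ini sangat memuaskan."))
```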
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ee9098a64d349a3463a0952b6d0fff43dae311f03355f9627b4391b796eef4c0
+ oid sha256:befd98edc7e42e286c9f2952eb1476d22778d7a6d20d921705cdf0c552e90226
  size 497798148
runs/Oct06_09-00-23_c8dd5f04cfbe/events.out.tfevents.1728205224.c8dd5f04cfbe.2765.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:646019c2b88477800b91dee1cdba12f36c271a0a4bbcf6881b6ed37d7bda2afc
- size 9357
+ oid sha256:c7ca807e1716e6a06892592711594db103742f156d1b9471c4c92ac9d8b06b92
+ size 9711