bwahyuh committed
Commit 55cee1e
1 Parent(s): 9a596e3

Training complete

README.md CHANGED
@@ -1,6 +1,6 @@
 ---
-license: apache-2.0
-base_model: indolem/indobertweet-base-uncased
+license: mit
+base_model: indolem/indobert-base-uncased
 tags:
 - generated_from_trainer
 metrics:
@@ -18,13 +18,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # awkokawokawokoaw
 
-This model is a fine-tuned version of [indolem/indobertweet-base-uncased](https://huggingface.co/indolem/indobertweet-base-uncased) on an unknown dataset.
+This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3847
-- Accuracy: 0.92
-- Precision: 0.9236
-- Recall: 0.9206
-- F1: 0.9220
+- Loss: 0.5909
+- Accuracy: 0.7917
+- Precision: 0.7547
+- Recall: 0.7583
+- F1: 0.7544
 
 ## Model description
 
@@ -55,10 +55,10 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| 0.5893 | 1.0 | 169 | 0.2996 | 0.895 | 0.8813 | 0.9055 | 0.8908 |
-| 0.1982 | 2.0 | 338 | 0.2367 | 0.9267 | 0.9371 | 0.9225 | 0.9291 |
-| 0.0809 | 3.0 | 507 | 0.3422 | 0.9117 | 0.9208 | 0.9136 | 0.9168 |
-| 0.0223 | 4.0 | 676 | 0.3847 | 0.92 | 0.9236 | 0.9206 | 0.9220 |
+| 1.0793 | 1.0 | 169 | 1.0677 | 0.4333 | 0.1444 | 0.3333 | 0.2016 |
+| 0.9871 | 2.0 | 338 | 0.9369 | 0.625 | 0.4613 | 0.5010 | 0.4515 |
+| 0.7801 | 3.0 | 507 | 0.6453 | 0.76 | 0.7061 | 0.6986 | 0.7008 |
+| 0.5823 | 4.0 | 676 | 0.5909 | 0.7917 | 0.7547 | 0.7583 | 0.7544 |
 
 
 ### Framework versions
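
The updated card describes a sequence-classification fine-tune of indolem/indobert-base-uncased. A minimal usage sketch, assuming the checkpoint in this repository is loaded by its Hub id; the id below is a placeholder, since the card does not state it:

```python
# Minimal inference sketch for a fine-tuned BertForSequenceClassification checkpoint.
# "bwahyuh/awkokawokawokoaw" is a placeholder repo id; replace it with the actual
# Hub id or a local path to this repository.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "bwahyuh/awkokawokawokoaw"  # placeholder, not stated in the card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("contoh kalimat untuk diklasifikasikan", return_tensors="pt")  # example Indonesian input
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities; label names are not documented in the card
```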
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "indolem/indobertweet-base-uncased",
+  "_name_or_path": "indolem/indobert-base-uncased",
   "architectures": [
     "BertForSequenceClassification"
   ],
@@ -7,7 +7,6 @@
   "bos_token_id": 0,
   "classifier_dropout": null,
   "eos_token_ids": 0,
-  "gradient_checkpointing": false,
   "hidden_act": "gelu",
   "hidden_dropout_prob": 0.1,
   "hidden_size": 768,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:85846b9ee37d7ea8a34f2c7424aecf9de3d1c3c29514ab12b65c60fa125659a6
+oid sha256:71567a3eaed378d56823e9d1cff93a79590f2670b40d116b398eb00974700181
 size 442265596
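
These three lines are a Git LFS pointer rather than the weights themselves: `oid` is the SHA-256 of the real file and `size` its byte count. A small check of a locally resolved model.safetensors against the new pointer; the local path is an assumption:

```python
# Verify a downloaded model.safetensors against the LFS pointer shown above.
import hashlib
import os

path = "model.safetensors"  # assumed local path to the resolved (non-pointer) file
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print(h.hexdigest() == "71567a3eaed378d56823e9d1cff93a79590f2670b40d116b398eb00974700181")
print(os.path.getsize(path) == 442265596)
```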
runs/Jun28_14-30-39_957e063444f0/events.out.tfevents.1719585039.957e063444f0.1387.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:230916e9d8ee7c532c07ab2cbdaab222295466a24c2bc7e14bc1717354df5508
+size 8088
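
The added file is a TensorBoard event log for this training run. One way to read its scalars back, using TensorBoard's event reader; the tag name below follows the usual Trainer logging convention and is an assumption:

```python
# Sketch: inspect the logged scalars in the added tfevents file.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("runs/Jun28_14-30-39_957e063444f0")  # directory containing the tfevents file
ea.Reload()
print(ea.Tags()["scalars"])            # list the scalar tags that were actually logged
for event in ea.Scalars("eval/loss"):  # assumed tag; pick one from the list printed above
    print(event.step, event.value)
```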
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:49d05dbb31cc5fd9fc95c5a29cc4e31879d634dee8e18b7e2cef6f36a06c2031
+oid sha256:6a73205e0d57b874291b45a89419ceea0ff070000a3e829508fef445acf1f9dc
 size 5112
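
training_args.bin is the pickled TrainingArguments object that Trainer saves alongside a run, so it can be unpickled to inspect the exact settings used. A sketch; this is a full pickle, so only load it from a source you trust:

```python
# Inspect the saved TrainingArguments. weights_only=False is required on recent
# PyTorch versions because this file is a general pickle, not a tensor checkpoint.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```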
vocab.txt CHANGED
The diff for this file is too large to render. See raw diff