---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: apwic/indobert-base-uncased-finetuned-nergrit
  results: []
---

# apwic/indobert-base-uncased-finetuned-nergrit

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results at the final training epoch:
- Train Loss: 0.1167
- Validation Loss: 0.1784
- Train Accuracy: 0.9483
- Epoch: 24

## Model description

The base model, [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased), is IndoBERT, a BERT-base model pretrained on Indonesian text by the IndoLEM project. The `nergrit` suffix in the repository name suggests that this fine-tuned variant adds a token-classification head for Indonesian named-entity recognition, although the training metadata does not record the task.

## Intended uses & limitations

The model is presumably intended for named-entity recognition on Indonesian text. As with any fine-tuned NER model, predictions on text from domains that differ from the fine-tuning data should be validated before use.
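
A minimal inference sketch, assuming the checkpoint carries a TensorFlow token-classification head for Indonesian NER (the task is inferred from the repository name, not confirmed by the metadata):

```python
from transformers import pipeline

# Token-classification pipeline; framework="tf" because this checkpoint
# was trained with TensorFlow/Keras.
ner = pipeline(
    "token-classification",
    model="apwic/indobert-base-uncased-finetuned-nergrit",
    framework="tf",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

# "Joko Widodo was born in Surakarta, Central Java."
print(ner("Joko Widodo lahir di Surakarta, Jawa Tengah."))
```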

## Training and evaluation data

Not recorded in the training metadata. The repository name suggests the NERGrit corpus, an Indonesian named-entity recognition dataset.
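
If the data is indeed NERGrit, it likely corresponds to the `id_nergrit_corpus` dataset on the Hugging Face Hub (an assumption; neither the dataset id nor the `ner` subset is confirmed by this repository):

```python
from datasets import load_dataset

# Hypothetical: load the NER subset of the NERGrit corpus.
dataset = load_dataset("id_nergrit_corpus", "ner")
print(dataset["train"][0])  # tokens and ner_tags for one sentence
```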

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2352, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
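
The optimizer configuration above can be reproduced with the `create_optimizer` helper from `transformers` (a sketch; the values are read directly from the saved config, and the schedule's `power=1.0` makes the decay linear):

```python
from transformers import create_optimizer

# AdamWeightDecay with a PolynomialDecay schedule: the learning rate
# decays linearly from 2e-5 to 0 over 2352 steps, with weight decay 0.01.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=2352,
    num_warmup_steps=0,  # no warmup appears in the saved schedule
    weight_decay_rate=0.01,
)
```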

### Training results

| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.4507     | 0.1933          | 0.9437         | 0     |
| 0.1708     | 0.1795          | 0.9471         | 1     |
| 0.1295     | 0.1784          | 0.9483         | 2     |
| 0.1169     | 0.1784          | 0.9483         | 3     |
| 0.1172     | 0.1784          | 0.9483         | 4     |
| 0.1180     | 0.1784          | 0.9483         | 5     |
| 0.1176     | 0.1784          | 0.9483         | 6     |
| 0.1172     | 0.1784          | 0.9483         | 7     |
| 0.1168     | 0.1784          | 0.9483         | 8     |
| 0.1174     | 0.1784          | 0.9483         | 9     |
| 0.1174     | 0.1784          | 0.9483         | 10    |
| 0.1178     | 0.1784          | 0.9483         | 11    |
| 0.1175     | 0.1784          | 0.9483         | 12    |
| 0.1175     | 0.1784          | 0.9483         | 13    |
| 0.1179     | 0.1784          | 0.9483         | 14    |
| 0.1176     | 0.1784          | 0.9483         | 15    |
| 0.1165     | 0.1784          | 0.9483         | 16    |
| 0.1179     | 0.1784          | 0.9483         | 17    |
| 0.1169     | 0.1784          | 0.9483         | 18    |
| 0.1170     | 0.1784          | 0.9483         | 19    |
| 0.1175     | 0.1784          | 0.9483         | 20    |
| 0.1177     | 0.1784          | 0.9483         | 21    |
| 0.1161     | 0.1784          | 0.9483         | 22    |
| 0.1174     | 0.1784          | 0.9483         | 23    |
| 0.1167     | 0.1784          | 0.9483         | 24    |
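
Note that the validation loss and accuracy plateau at 0.1784 and 0.9483 from epoch 2 onward; the model effectively converged within the first three epochs, and the remaining epochs did not change the evaluation metrics.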


### Framework versions

- Transformers 4.33.0
- TensorFlow 2.12.0
- Datasets 2.14.6
- Tokenizers 0.13.3