apwic committed
Commit 1b99fab
1 Parent(s): e2a71a1

Model save
README.md CHANGED
@@ -1,6 +1,4 @@
 ---
-language:
-- id
 license: apache-2.0
 base_model: LazarusNLP/IndoNanoT5-base
 tags:
@@ -19,11 +17,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5351
-- Rouge1: 0.3585
+- Loss: 0.4798
+- Rouge1: 0.6568
 - Rouge2: 0.0
-- Rougel: 0.3555
-- Rougelsum: 0.357
+- Rougel: 0.6564
+- Rougelsum: 0.6586
 - Gen Len: 1.0
 
 ## Model description
@@ -43,8 +41,8 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 8
+- learning_rate: 0.001
+- train_batch_size: 16
 - eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -55,16 +53,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
-| 1.2329        | 1.0   | 1787 | 0.5976          | 0.3912 | 0.0    | 0.3916 | 0.392     | 1.0     |
-| 0.7952        | 2.0   | 3574 | 0.5580          | 0.3919 | 0.0    | 0.3921 | 0.3921    | 1.0     |
-| 0.7407        | 3.0   | 5361 | 0.5366          | 0.3893 | 0.0    | 0.3879 | 0.3866    | 1.0     |
-| 0.7152        | 4.0   | 7148 | 0.5402          | 0.354  | 0.0    | 0.3512 | 0.3523    | 1.0     |
-| 0.7029        | 5.0   | 8935 | 0.5351          | 0.3585 | 0.0    | 0.3555 | 0.357     | 1.0     |
+| 0.827         | 1.0   | 894  | 0.5171          | 0.6517 | 0.0    | 0.6531 | 0.6496    | 1.0     |
+| 0.6257        | 2.0   | 1788 | 0.4944          | 0.6472 | 0.0    | 0.6472 | 0.6469    | 1.0     |
+| 0.5832        | 3.0   | 2682 | 0.4848          | 0.6329 | 0.0    | 0.6335 | 0.634     | 1.0     |
+| 0.5581        | 4.0   | 3576 | 0.4824          | 0.6575 | 0.0    | 0.6562 | 0.6576    | 1.0     |
+| 0.5411        | 5.0   | 4470 | 0.4798          | 0.6568 | 0.0    | 0.6564 | 0.6586    | 1.0     |
 
 
 ### Framework versions
 
 - Transformers 4.40.2
-- Pytorch 2.3.0+cu121
-- Datasets 2.19.1
+- Pytorch 2.3.1+cu121
+- Datasets 2.20.0
 - Tokenizers 0.19.1
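The step counts in the two training tables line up with the batch-size change: doubling `train_batch_size` from 8 to 16 halves the steps per epoch (1787 → 894). A quick sanity check, under the assumption that steps per epoch = ceil(dataset_size / train_batch_size) with no gradient accumulation (the diff shows no `gradient_accumulation_steps` line):

```python
import math

# Old run: batch 8 -> 1787 steps/epoch; new run: batch 16 -> 894 steps/epoch.
# Every dataset size in this small range satisfies BOTH step counts, so the
# training set plausibly holds roughly 14,290 examples.
for n in range(14289, 14297):
    assert math.ceil(n / 8) == 1787
    assert math.ceil(n / 16) == 894
print("step counts consistent")
```

This is only a consistency check on the numbers in the tables, not something stated in the model card itself.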
adapter-summarization/adapter_config.json CHANGED
@@ -12,11 +12,11 @@
     "intermediate_lora": false,
     "leave_out": [],
     "output_lora": false,
-    "r": 16,
+    "r": 8,
     "selfattn_lora": true,
     "use_gating": false
   },
-  "config_id": "141b248112091265",
+  "config_id": "625403edad0bf919",
   "hidden_size": 768,
   "model_class": "T5ForConditionalGeneration",
   "model_name": "LazarusNLP/IndoNanoT5-base",
adapter-summarization/pytorch_adapter.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:45872174cc0c9796b2b3049f506b0b7ef08212d2049c731a7b1864838e9968d2
-size 7131954
+oid sha256:ee48ec888139610253fd546acc9a33b409eb1ac410a5321d022daf843f255e78
+size 3593010
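The near-halved checkpoint size (7,131,954 → 3,593,010 bytes) tracks the halved LoRA rank in `adapter_config.json`: LoRA adds matrices A (r × d) and B (d × r) per adapted projection, so parameter count scales linearly in r. A rough back-of-the-envelope sketch, assuming float32 weights and LoRA on the q and v projections of every attention layer of a T5-base-sized model — self-attention in 12 encoder and 12 decoder blocks plus 12 decoder cross-attention layers (an assumption for illustration, not something the config states):

```python
# Hypothetical sketch: bytes taken by LoRA weights at rank r, assuming
# hidden_size 768 and q/v projections adapted in 36 attention layers.
HIDDEN = 768
ATTN_LAYERS = 12 + 12 + 12       # enc self-attn, dec self-attn, dec cross-attn
ADAPTED_PROJECTIONS = 2          # q and v (a common LoRA default)

def lora_bytes(r, bytes_per_param=4):
    # each adapted projection adds A (r x hidden) and B (hidden x r)
    params = ATTN_LAYERS * ADAPTED_PROJECTIONS * 2 * r * HIDDEN
    return params * bytes_per_param

old = lora_bytes(16)   # 7_077_888 bytes, close to the old 7,131,954-byte file
new = lora_bytes(8)    # 3_538_944 bytes, close to the new 3,593,010-byte file
assert old == 2 * new  # halving r halves the adapter weights
```

At float32 this lands within about 1% of the actual `pytorch_adapter.bin` sizes; the remainder would be serialization overhead and any non-LoRA tensors in the checkpoint.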