ishikaibm committed on
Commit
189a493
1 Parent(s): d2ed7b2

Model save

README.md CHANGED
@@ -16,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # peft_ft_fl_mod_dep
 
-This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
+This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 8.0244
-- Accuracy: 24027.0846
+- Loss: 9.3394
+- Accuracy: 19962.7013
 
 ## Model description
 
@@ -53,11 +53,11 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy   |
 |:-------------:|:-----:|:----:|:---------------:|:----------:|
-| No log        | 1.0   | 1    | 9.2277          | 26511.4904 |
-| No log        | 2.0   | 2    | 8.4813          | 37946.3871 |
-| No log        | 3.0   | 3    | 8.1752          | 45811.4303 |
-| No log        | 4.0   | 4    | 8.4035          | 33938.0300 |
-| No log        | 5.0   | 5    | 8.0244          | 24027.0846 |
+| No log        | 1.0   | 1    | 10.5249         | 37316.2091 |
+| No log        | 2.0   | 2    | 9.8888          | 59610.2323 |
+| No log        | 3.0   | 3    | 9.6196          | 17578.0151 |
+| No log        | 4.0   | 4    | 9.4279          | 12815.1561 |
+| No log        | 5.0   | 5    | 9.3394          | 19962.7013 |
 
 
 ### Framework versions
@@ -65,5 +65,5 @@ The following hyperparameters were used during training:
 - PEFT 0.11.1
 - Transformers 4.41.1
 - Pytorch 2.3.0+cu121
-- Datasets 2.19.1
+- Datasets 2.19.2
 - Tokenizers 0.19.1
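The card above does not include usage code. As a hedged sketch of how an adapter published this way is typically loaded, the snippet below attaches the saved PEFT weights to the base checkpoint. The repo id `ishikaibm/peft_ft_fl_mod_dep` is an assumption inferred from the committer and model name, and `AutoModelForCausalLM` follows from the `task_type: CAUSAL_LM` field in the adapter config further down.

```python
# Hedged sketch, not part of this repo: load the base model and attach the
# adapter weights saved in this commit. The repo id is an assumption inferred
# from the committer name and model name.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google-bert/bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

model = PeftModel.from_pretrained(base, "ishikaibm/peft_ft_fl_mod_dep")  # assumed repo id
model.eval()
```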
adapter_1/adapter_config.json CHANGED
@@ -20,8 +20,8 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "query",
-    "value"
+    "value",
+    "query"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_1/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1d68332f89d184657c025d3d8a6696adb0951670e709bc6422f88b19b350fae2
+oid sha256:2846bfe61783ffcd4d6960e59d28c8d5afdac775fc5af39f232bdaf8f7d2f304
 size 9443984
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4268bbc7d1756b300a61fb528f4e49794da943e684367ffbb393f392a634319f
+oid sha256:0a689e3f2172958767673c6fd475ee9f5793d61cb60009c2945354057baf9db5
 size 9443984
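Both `.safetensors` entries are Git LFS pointer files: `oid sha256:` records the SHA-256 digest of the actual weights blob and `size` its length in bytes, so only the digests change here while the size stays at 9443984 bytes. A small sketch for checking a downloaded file against the pointer's digest (the local path is illustrative):

```python
# Sketch: verify a downloaded adapter_model.safetensors against the digest
# recorded in its Git LFS pointer. The local path is illustrative.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0a689e3f2172958767673c6fd475ee9f5793d61cb60009c2945354057baf9db5"
print(sha256_of("adapter_model.safetensors") == expected)
```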