Pragades committed
Commit 9ab0058
1 Parent(s): 7536fab

Pragades/LlaMa_3.1_8Billion_instruct(FollowUp Questioner)

README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7330
+- Loss: 0.6314
 
 ## Model description
 
@@ -44,23 +44,29 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- training_steps: 200
+- training_steps: 1000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.9638        | 0.0220 | 50   | 0.9688          |
-| 0.8433        | 0.0440 | 100  | 0.7759          |
-| 0.7632        | 0.0659 | 150  | 0.7478          |
-| 0.7461        | 0.0879 | 200  | 0.7330          |
+| 1.0074        | 0.0440 | 100  | 0.8141          |
+| 0.7582        | 0.0879 | 200  | 0.7417          |
+| 0.7663        | 0.1319 | 300  | 0.7115          |
+| 0.6939        | 0.1758 | 400  | 0.6899          |
+| 0.6787        | 0.2198 | 500  | 0.6792          |
+| 0.6553        | 0.2637 | 600  | 0.6664          |
+| 0.6747        | 0.3077 | 700  | 0.6535          |
+| 0.6614        | 0.3516 | 800  | 0.6404          |
+| 0.6343        | 0.3956 | 900  | 0.6343          |
+| 0.6264        | 0.4396 | 1000 | 0.6314          |
 
 
 ### Framework versions
 
 - PEFT 0.12.0
 - Transformers 4.44.2
-- Pytorch 2.4.0+cu121
+- Pytorch 2.4.1+cu121
 - Datasets 3.0.0
 - Tokenizers 0.19.1
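In short, this commit extends the run from 200 to 1000 training steps, with validation loss logged every 100 steps. As a point of reference only, here is a minimal sketch of how the listed hyperparameters might be expressed with `transformers.TrainingArguments`. The actual training script is not part of this commit; `output_dir`, the `fp16` flag (the card says only "Native AMP", so `bf16` is equally plausible), and the eval/logging cadence arguments are assumptions:

```python
from transformers import TrainingArguments

# Sketch reconstructed from the model card; values not shown in the
# diff (output_dir, batch size, learning rate) are omitted or assumed.
args = TrainingArguments(
    output_dir="followup-questioner-lora",  # assumed name
    max_steps=1000,             # training_steps: 200 -> 1000 in this commit
    lr_scheduler_type="linear",
    warmup_ratio=0.1,           # lr_scheduler_warmup_ratio
    adam_beta1=0.9,             # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,          # and epsilon=1e-08
    fp16=True,                  # "Native AMP"; bf16 is equally plausible
    eval_strategy="steps",      # the card reports validation loss every 100 steps
    eval_steps=100,
    logging_steps=100,
)
```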
adapter_config.json CHANGED
@@ -21,11 +21,11 @@
     "revision": null,
     "target_modules": [
         "k_proj",
-        "down_proj",
         "q_proj",
+        "gate_proj",
+        "down_proj",
         "up_proj",
         "o_proj",
-        "gate_proj",
         "v_proj"
     ],
     "task_type": "CAUSAL_LM",
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:796b0809410075ffbd57212f684c57fb7a80135e8b2b97c3e7c8cb17b19c9b13
+oid sha256:4d6397aaa0aeb991bf1d1381856249767bef9d09b5fa7efcb0b39fb4c5976c7d
 size 167832240
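The adapter weights live in Git LFS, so the diff touches only the pointer: the object hash changes while the size (167832240 bytes, roughly 168 MB) stays the same, consistent with retraining the same adapter shape. A hypothetical loading sketch with `peft`; the adapter repo id is a placeholder, since the actual Hub id is not shown in this commit:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3.1-8B"
adapter_id = "Pragades/<this-adapter-repo>"  # placeholder: use the actual Hub id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # fetches adapter_model.safetensors
```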
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:31742fb04e6103830c1f5f2d8188033798c179430c995a4b82cfbca0049ee007
+oid sha256:a81b8a257e23d2dccbb41d4b10081a088a7723f8bf32bd26e184faa007922b24
 size 5432
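`training_args.bin` is the `TrainingArguments` object that `Trainer` pickles alongside the checkpoint, which is why it changes whenever the run configuration does. It can be inspected directly, for example:

```python
import torch

# training_args.bin is a pickled transformers.TrainingArguments object;
# transformers must be installed for the unpickling to resolve the class.
args = torch.load("training_args.bin")
print(args.max_steps, args.lr_scheduler_type, args.warmup_ratio)
```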