Kquant03 committed
Commit 9e7fed6
1 Parent(s): 690dd8c

End of training

README.md ADDED
---
library_name: transformers
license: llama3
base_model: meta-llama/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: L3-Pneuma-8B
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3-8B

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: Kquant03/Sandevistan_Reformat
    type: customllama3_stan
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./outputs/out
max_steps: 80000

fix_untrained_tokens: true

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

wandb_project: Pneuma
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 16
micro_batch_size: 8
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00001
max_grad_norm: 1

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
eval_sample_packing: false

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

hub_model_id: Replete-AI/L3-Pneuma-8B
hub_strategy: every_save

warmup_steps: 10
evals_per_epoch: 3
eval_table_size:
saves_per_epoch: 3
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<|begin_of_text|>"
  eos_token: "<|end_of_text|>"
  pad_token: "<|end_of_text|>"
tokens:
```

</details><br>

# L3-Pneuma-8B

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the [Kquant03/Sandevistan_Reformat](https://huggingface.co/datasets/Kquant03/Sandevistan_Reformat) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7381

## Model description

More information needed

## Intended uses & limitations

More information needed

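A minimal text-generation sketch, assuming the standard `transformers` API; the repo id is the `hub_model_id` from the config above, and the prompt is an arbitrary placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Replete-AI/L3-Pneuma-8B"  # hub_model_id from the axolotl config

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",
)

# Plain completion; the prompt is only an example
inputs = tokenizer("The mind is not a vessel to be filled, but", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The config's `customllama3_stan` dataset type suggests a custom prompt format; plain completion is used here only as a placeholder.
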
## Training and evaluation data

Per the axolotl config above, training used the [Kquant03/Sandevistan_Reformat](https://huggingface.co/datasets/Kquant03/Sandevistan_Reformat) dataset with a 5% held-out validation split (`val_set_size: 0.05`).

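A sketch of approximating that split with the `datasets` library (axolotl performs the split internally from `val_set_size`, so the exact partition may differ; the `train` split name is an assumption):

```python
from datasets import load_dataset

# Dataset path from the axolotl config; split handling here is illustrative
ds = load_dataset("Kquant03/Sandevistan_Reformat", split="train")

# Mirror val_set_size: 0.05 with the card's seed (42); axolotl's own
# partition may not match this exactly
splits = ds.train_test_split(test_size=0.05, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
print(len(train_ds), len(eval_ds))
```
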
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: paged 8-bit AdamW (`paged_adamw_8bit`) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 743

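The total train batch size follows from the config values; a quick check of the arithmetic (the single-device count is an inference from 128 = 8 × 16, not stated in the card):

```python
micro_batch_size = 8
gradient_accumulation_steps = 16
num_devices = 1  # assumption: 128 / (8 * 16) leaves no room for more

# Sequences consumed per optimizer step
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 128

# ~743 steps of 128 packed 4096-token sequences ≈ 95k sequences for the single epoch
print(743 * total_train_batch_size)  # 95104
```
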
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0378        | 0.0013 | 1    | 3.0437          |
| 0.6816        | 0.3334 | 248  | 2.7341          |
| 0.6543        | 0.6667 | 496  | 2.7381          |


### Framework versions

- Transformers 4.45.1
- PyTorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.1
pytorch_model-00001-of-00004.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d36374bd1331779061f0875e22b3f3e05ba2791f96b7d1b0a436fde77ca45115
+oid sha256:e93295b68419cc22b7f503432f877a508171aa8d6a069cc517dc464b6887bb6e
 size 4976718466
pytorch_model-00002-of-00004.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:aaf102d066bf853c1901da7a55276344765eaa9fcd5e19db0a154a016302437c
+oid sha256:713def0952dce48b0139cd703ae9d47c9511f0a14ce0a46f5fb2215b5730f626
 size 4999827718
pytorch_model-00003-of-00004.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:84321de2c848a17260849d63ee2e06ac6d778e9ca5c2358b26639e8ef213849c
+oid sha256:ad1263254d31c1bb28287a64b9a358abd7528c48817748a43eed6ac392898a7b
 size 4915940170
pytorch_model-00004-of-00004.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8b8f24bfb8f432e980a05cdf460a677c0dadcd3d4167f8f58f8688852dce5f2e
+oid sha256:cf9e48a0e2dc69bbdac2ff39b0ab1eefa4b0354106ec04f89bad42f5921edee1
 size 1168140873
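
Each pointer's `oid` is the SHA-256 digest of the weight shard it stands in for, so the new weights can be verified after download; a minimal sketch, run from the repo root:

```python
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of the file contents — the digest Git LFS records as the pointer's oid."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Expected for the first shard after this commit:
# e93295b68419cc22b7f503432f877a508171aa8d6a069cc517dc464b6887bb6e
print(lfs_oid("pytorch_model-00001-of-00004.bin"))
```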