xu3kev committed
Commit 3e73c70
1 Parent(s): b1fff33

End of training

README.md CHANGED
@@ -3,9 +3,18 @@ library_name: transformers
  license: llama3.1
  base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
  tags:
+ - alignment-handbook
  - trl
  - sft
  - generated_from_trainer
+ - trl
+ - sft
+ - generated_from_trainer
+ datasets:
+ - barc0/induction_heavy_100k_jsonl
+ - barc0/induction_heavy_suggestfunction_100k_jsonl
+ - barc0/induction_100k-gpt4-description-gpt4omini-code_generated_problems_messages_format_0.3
+ - barc0/induction_100k_gpt4o-mini_generated_problems_seed100.jsonl_messages_format_0.3
  model-index:
  - name: l3.1-8b-inst-fft-induction-barc-heavy-200k-old-200k-lr1e-5-ep2
    results: []
@@ -16,7 +25,7 @@ should probably proofread and complete it, then remove this comment. -->

  # l3.1-8b-inst-fft-induction-barc-heavy-200k-old-200k-lr1e-5-ep2

- This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the None dataset.
+ This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the barc0/induction_heavy_100k_jsonl, the barc0/induction_heavy_suggestfunction_100k_jsonl, the barc0/induction_100k-gpt4-description-gpt4omini-code_generated_problems_messages_format_0.3 and the barc0/induction_100k_gpt4o-mini_generated_problems_seed100.jsonl_messages_format_0.3 datasets.
  It achieves the following results on the evaluation set:
  - Loss: 0.2709
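For readers of the updated model card, here is a minimal usage sketch, not part of this commit: it assumes the checkpoint is published under a repository id matching the model-index name (the owner prefix below is a guess, swap in the real repo path) and that a recent transformers with chat-template support is installed.

```python
# Hedged sketch: loading the fine-tuned checkpoint described in README.md.
# The repository id is an assumption; replace it with the actual repo path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "xu3kev/l3.1-8b-inst-fft-induction-barc-heavy-200k-old-200k-lr1e-5-ep2"  # assumed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches torch_dtype in config.json
    device_map="auto",           # requires accelerate
)

messages = [{"role": "user", "content": "Write a Python function that maps the input grid to the output grid."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```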
all_results.json CHANGED
@@ -1,5 +1,10 @@
  {
      "epoch": 2.0,
+     "eval_loss": 0.27092963457107544,
+     "eval_runtime": 181.5485,
+     "eval_samples": 20178,
+     "eval_samples_per_second": 111.144,
+     "eval_steps_per_second": 0.87,
      "total_flos": 2071187864551424.0,
      "train_loss": 0.28312984947270664,
      "train_runtime": 26606.5881,
config.json CHANGED
@@ -35,6 +35,6 @@
      "tie_word_embeddings": false,
      "torch_dtype": "bfloat16",
      "transformers_version": "4.45.0.dev0",
-     "use_cache": false,
+     "use_cache": true,
      "vocab_size": 128256
  }
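The only change above restores use_cache to true. This flag controls the key/value cache used during autoregressive decoding; it is commonly disabled while training (for example alongside gradient checkpointing) and switched back on for inference. A minimal sketch of the flag, assuming the transformers AutoConfig API and a hypothetical output path:

```python
# Hedged sketch: the use_cache flag toggled in this commit's config.json.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
print(config.use_cache)  # base config default

# Fine-tuning setups often set use_cache = False during training; the
# end-of-training commit restores it so generate() uses KV caching:
config.use_cache = True
config.save_pretrained("l3.1-8b-inst-fft-induction-barc-heavy-200k-old-200k-lr1e-5-ep2")  # path assumed
```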
eval_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 2.0,
+     "eval_loss": 0.27092963457107544,
+     "eval_runtime": 181.5485,
+     "eval_samples": 20178,
+     "eval_samples_per_second": 111.144,
+     "eval_steps_per_second": 0.87
+ }
runs/Oct26_18-43-24_4e9b761ee88c/events.out.tfevents.1729996181.4e9b761ee88c.18848.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e9dc3845909ea12a47dc87fdde094c48478fdf5b96b008b585740ca1291755a
+ size 359
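The added file is a Git LFS pointer to a small TensorBoard event log written at the end of training (that it holds the final evaluation scalars is an inference from its size and timestamp, not stated in the commit). A hedged sketch for inspecting it, assuming tensorboard is installed and the LFS object has been pulled locally:

```python
# Hedged sketch: listing the scalars recorded in the events file added above.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

run_dir = "runs/Oct26_18-43-24_4e9b761ee88c"  # directory containing the events file
acc = EventAccumulator(run_dir)
acc.Reload()  # parse all event files found in the directory

for tag in acc.Tags().get("scalars", []):
    for event in acc.Scalars(tag):
        print(f"{tag}: step={event.step} value={event.value}")
```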