andrewAmani committed on
Commit 5f6df44
1 Parent(s): e6fe74c

Model save

Files changed (2)
  1. README.md +1 -7
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -1,7 +1,5 @@
 ---
 base_model: hivaze/ParaLex-Llama-3-8B-SFT
-datasets:
-- generator
 library_name: peft
 tags:
 - generated_from_trainer
@@ -15,7 +13,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # results_packing
 
-This model is a fine-tuned version of [hivaze/ParaLex-Llama-3-8B-SFT](https://huggingface.co/hivaze/ParaLex-Llama-3-8B-SFT) on the generator dataset.
+This model is a fine-tuned version of [hivaze/ParaLex-Llama-3-8B-SFT](https://huggingface.co/hivaze/ParaLex-Llama-3-8B-SFT) on the None dataset.
 
 ## Model description
 
@@ -44,10 +42,6 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: cosine
 - num_epochs: 32
 
-### Training results
-
-
-
 ### Framework versions
 
 - PEFT 0.11.1
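The README fragment above documents a PEFT adapter trained on top of hivaze/ParaLex-Llama-3-8B-SFT (library_name: peft, PEFT 0.11.1). For orientation only, here is a minimal sketch of how such an adapter is typically attached to its base model with the peft library; the adapter_id below is a placeholder, not this repository's actual Hub id.

# Minimal sketch, assuming a placeholder adapter location; substitute the repo id
# or local directory that actually contains adapter_model.safetensors.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "hivaze/ParaLex-Llama-3-8B-SFT"   # base model named in the README
adapter_id = "path/to/results_packing"      # hypothetical placeholder, replace with the real adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # wraps the base model with the trained adapter weights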
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8ea3c05f444fdb2f5282c7933dbfa678e6bb0336e60ec1c7f819354050ed05a5
+oid sha256:55040153b4fa612df34870e1c7fa93b9b6a47ff920ce827f279bf931f35c4be7
 size 27280152
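Both versions of adapter_model.safetensors are Git LFS pointer files: the oid sha256 field is the SHA-256 digest of the actual weight file, so this commit swaps which 27280152-byte blob the pointer resolves to. A minimal sketch of checking a downloaded copy against the new oid (the local filename is an assumption):

# Minimal sketch: verify a local copy of the weights against the new LFS oid.
# Assumes adapter_model.safetensors sits in the current working directory.
import hashlib

expected_oid = "55040153b4fa612df34870e1c7fa93b9b6a47ff920ce827f279bf931f35c4be7"

sha = hashlib.sha256()
with open("adapter_model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

print("OK" if sha.hexdigest() == expected_oid else "hash mismatch")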