dyang415 committed on
Commit 7206b2d
1 Parent(s): ef1185c

Training in progress, step 384

Files changed (2)
  1. README.md +1 -6
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -2,7 +2,6 @@
  license: apache-2.0
  library_name: peft
  tags:
- - axolotl
  - generated_from_trainer
  base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
  model-index:
@@ -105,7 +104,7 @@ fsdp_config:

  # mixtral-fc-w-resp-new-format-4e-no-negative

- This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset.
+ This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on an unknown dataset.

  ## Model description

@@ -151,10 +150,6 @@ The following hyperparameters were used during training:
  - lr_scheduler_warmup_steps: 10
  - num_epochs: 4

- ### Training results
-
-
-
  ### Framework versions

  - PEFT 0.7.0
 
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2fa64e9769784239b4af961418f101e01b73816a849d9f03c450221a9ab301bc
+ oid sha256:bd661e04d867e2fc8e9cc38d6536b36db7e0018c2a3d375139650562128fdc23
  size 109086416
 
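For reference, a minimal sketch (not part of this commit) of how a PEFT adapter checkpoint like the updated adapter_model.safetensors is typically loaded on top of the base model named in the README. The adapter repo id below is an assumption inferred from the committer and model name, not something stated in the commit; substitute the actual repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
# Assumed adapter repo id, inferred from committer and model name in this commit.
adapter_id = "dyang415/mixtral-fc-w-resp-new-format-4e-no-negative"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# PeftModel.from_pretrained fetches adapter_config.json and adapter_model.safetensors
# from the adapter repo and attaches the adapter weights to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```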