dyang415 committed
Commit b274074
Parent: c5d53d6

End of training

Files changed (2)
  1. README.md +6 -1
  2. adapter_model.bin +3 -0
README.md CHANGED
```diff
@@ -2,6 +2,7 @@
 license: apache-2.0
 library_name: peft
 tags:
+- axolotl
 - generated_from_trainer
 base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
 model-index:
@@ -104,7 +105,7 @@ fsdp_config:
 
 # mixtral-fc-w-resp-new-format-4e-no-negative
 
-This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on an unknown dataset.
+This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset.
 
 ## Model description
 
@@ -150,6 +151,10 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 10
 - num_epochs: 4
 
+### Training results
+
+
+
 ### Framework versions
 
 - PEFT 0.7.0
```
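The README now describes a PEFT adapter trained with axolotl on top of Mixtral-8x7B-Instruct-v0.1. For context, a minimal sketch of attaching such an adapter to the base model with PEFT; the adapter repo id below is an assumption inferred from the commit author and model name, not something stated in this commit:

```python
# Minimal sketch, assuming the adapter repo id below (inferred from the commit
# author dyang415 and the README model name; not confirmed by this commit).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
adapter_id = "dyang415/mixtral-fc-w-resp-new-format-4e-no-negative"  # assumed

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")

# PeftModel.from_pretrained reads the adapter config and weights
# (adapter_model.bin) and wires the adapter layers onto the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
```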
adapter_model.bin ADDED
```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56cd5da4273ab66c3d8821ff23c189f5690625908ad57f5c656cf20b753c93db
+size 109144269
```
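adapter_model.bin is added as a Git LFS pointer (version, oid, size), so the commit stores only this small pointer while the roughly 109 MB weight file lives in LFS storage. A hedged sketch of resolving the actual file with huggingface_hub, again assuming the repo id used above:

```python
# Minimal sketch, assuming the same (unconfirmed) repo id as above.
from huggingface_hub import hf_hub_download

# hf_hub_download follows the LFS pointer and returns a local path to the
# actual adapter_model.bin payload rather than the pointer text.
weights_path = hf_hub_download(
    repo_id="dyang415/mixtral-fc-w-resp-new-format-4e-no-negative",
    filename="adapter_model.bin",
)
print(weights_path)
```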