dyang415 committed
Commit 87ed089
Parent: 8022d8b

Training in progress, step 5

Files changed (2):
  1. README.md +8 -18
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -2,7 +2,6 @@
 license: apache-2.0
 library_name: peft
 tags:
-- axolotl
 - generated_from_trainer
 base_model: mistralai/Mistral-7B-Instruct-v0.2
 model-index:
@@ -56,8 +55,8 @@ wandb_project: nohto
 wandb_name: nohto-v0
 wandb_log_model: end
 
-gradient_accumulation_steps: 4
-micro_batch_size: 2
+gradient_accumulation_steps: 2
+micro_batch_size: 1
 num_epochs: 1
 optimizer: paged_adamw_8bit
 lr_scheduler: cosine
@@ -93,9 +92,7 @@ fsdp_config:
 
 # nohto-v0-1e
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.9476
+This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
 
 ## Model description
 
@@ -115,26 +112,19 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 2
-- eval_batch_size: 2
+- train_batch_size: 1
+- eval_batch_size: 1
 - seed: 42
 - distributed_type: multi-GPU
 - num_devices: 2
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 16
-- total_eval_batch_size: 4
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 4
+- total_eval_batch_size: 2
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 10
 - num_epochs: 1
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 2.0421 | 0.8 | 1 | 2.9476 |
-
-
 ### Framework versions
 
 - PEFT 0.7.0
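The batch-size fields in the updated card follow the usual effective-batch-size bookkeeping (per-device batch size × gradient accumulation steps × number of devices). A minimal sketch of that arithmetic, using only values taken from the card, which is consistent with both the old and new totals:

```python
# Effective batch size implied by the card's hyperparameters (sketch, not project code).
micro_batch_size = 1             # per-device train batch size after this commit (was 2)
gradient_accumulation_steps = 2  # after this commit (was 4)
num_devices = 2                  # unchanged

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)    # 4, matching the new card (old config: 2 * 4 * 2 = 16)
```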
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bd918f8e28692f41aab4a8c5210d8f284fe3817818c0199a5f29b5c8cd7394ec
+oid sha256:861ce3b2a2552f19f21c782fbf5435f8136b9e9148c867a4755278047864f6de
 size 102820600
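The adapter_model.safetensors entry above is a Git LFS pointer for the LoRA adapter weights (~103 MB), which this commit replaces with a new checkpoint. A minimal sketch of loading such an adapter on top of the base model with PEFT; the adapter repo id `dyang415/nohto-v0-1e` is an assumption inferred from the commit author and card title, not something stated in the diff:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"  # base model named in the card
adapter_id = "dyang415/nohto-v0-1e"             # hypothetical adapter repo id (assumption)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Attach the LoRA adapter weights (the adapter_model.safetensors updated in this commit).
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```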