Part of the Merge-A-MoE collection: danube2 models to mix and merge together.
This model was first fine-tuned with BAdam on migtissera/Synthia-v1.3 using LLaMA-Factory.
Big ups, mradermacher!
Prompt format (ChatML):

```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
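A minimal sketch of rendering this template for one turn in Python (the `chatml_prompt` helper and the example strings are illustrative, not part of the model's tokenizer):

```python
# Render the ChatML layout above for a single turn; the prompt ends at the
# assistant header so the model completes the response.
def chatml_prompt(system: str, instruction: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Summarize BAdam in one sentence."))
```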
LLaMA-Factory training config:

```yaml
### model
model_name_or_path: danube2-base-chatml

### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 1
badam_start_block: 16
seed: 1080

### dataset
dataset: synthia
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: synthia-chatml-badam
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 0.00001
num_train_epochs: 1
lr_scheduler_type: constant_with_warmup
warmup_ratio: 0.01
bf16: true
flash_attn: fa2

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 1000
```
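With these settings, BAdam does block-coordinate updates: only one transformer block is trainable at a time, and the active block advances every 50 steps in ascending order, starting from block 16. A minimal sketch of that switching logic (`set_active_block` is a hypothetical helper mirroring the `badam_*` keys above, not LLaMA-Factory's or BAdam's actual API):

```python
import torch

def set_active_block(blocks, step, switch_interval=50, start_block=16):
    """Enable gradients only for the block active at this step (ascending order)."""
    active = (start_block + step // switch_interval) % len(blocks)
    for i, block in enumerate(blocks):
        for p in block:
            p.requires_grad_(i == active)  # only the active block receives gradients
    return active

# Toy usage: 24 "blocks" of one parameter each. Block 16 is active for
# steps 0-49, block 17 for steps 50-99, and so on, wrapping at the end.
blocks = [[torch.nn.Parameter(torch.zeros(1))] for _ in range(24)]
assert set_active_block(blocks, step=0) == 16
assert set_active_block(blocks, step=50) == 17
```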
Training results:

| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.7895 | 0.1360 | 1000 | 0.7205 |
| 0.7153 | 0.2720 | 2000 | 0.7015 |
| 0.7024 | 0.4080 | 3000 | 0.7024 |
| 0.7520 | 0.5440 | 4000 | 0.6977 |
| 0.7270 | 0.6800 | 5000 | 0.6922 |
| 0.6616 | 0.8160 | 6000 | 0.6922 |
| 0.7628 | 0.9519 | 7000 | 0.6913 |
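For reference, a minimal inference sketch with transformers; the repo id below is a placeholder for this model's actual Hugging Face id, and `apply_chat_template` is assumed to render the ChatML layout shown above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "your-username/synthia-chatml-badam"  # placeholder, substitute the real repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Synthia-v1.3?"},
]
# Render the ChatML prompt and stop at the assistant header for generation.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```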