Pythia Alpaca LoRA
axolotl version: 0.4.1
base_model: EleutherAI/pythia-160m-deduped
load_in_8bit: false
datasets:
- path: teknium/GPT4-LLM-Cleaned
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
adapter: lora
lora_model_dir:
sequence_len: 512
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- query_key_value
- dense
- dense_h_to_4h
- dense_4h_to_h
lora_target_linear:
lora_fan_in_fan_out: true # pythia/GPTNeoX lora specific
output_dir: ./outputs/lora-alpaca-pythia
gradient_accumulation_steps: 1
micro_batch_size: 16
num_epochs: 4
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: false
tf32: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
weight_decay: 0.1
evals_per_epoch: 4
logging_steps: 1
hub_model_id: tommyp111/pythia-160m-deduped-alpaca-lora
wandb_project: pythia-alpaca-lora
wandb_name: pythia-160m-grad-norm
optimizer: adamw_torch
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
gradient_checkpointing: true
warmup_steps: 10000
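With axolotl 0.4.1, a config like this is typically launched with `accelerate launch -m axolotl.cli.train config.yml`. For reference, the adapter block above maps roughly onto the following peft `LoraConfig`; this is a sketch of the correspondence, not axolotl's internal code, and the `bias` and `task_type` values are assumed defaults rather than settings from this config.

```python
# Sketch: a peft LoraConfig mirroring the adapter settings above.
# This approximates what axolotl configures; it is not axolotl's own code.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                    # lora_r
    lora_alpha=32,           # lora_alpha
    lora_dropout=0.05,       # lora_dropout
    target_modules=[         # GPTNeoX attention + MLP projections, per the config
        "query_key_value",
        "dense",
        "dense_h_to_4h",
        "dense_4h_to_h",
    ],
    fan_in_fan_out=True,     # pythia/GPTNeoX lora specific, per the config comment
    bias="none",             # assumption: not set in the config above
    task_type="CAUSAL_LM",
)
```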
This model is a fine-tuned version of EleutherAI/pythia-160m-deduped on the teknium/GPT4-LLM-Cleaned dataset. It achieves a final validation loss of 2.2758 on the evaluation set (see the training results table below).
Model description: more information needed.

Intended uses & limitations: more information needed.

Training and evaluation data: more information needed.
The following hyperparameters were used during training (as set in the config above):

- learning_rate: 5e-06
- micro_batch_size: 16
- gradient_accumulation_steps: 1
- num_epochs: 4
- optimizer: adamw_torch (adam_beta2=0.95, adam_epsilon=1e-05)
- weight_decay: 0.1
- max_grad_norm: 1.0
- warmup_steps: 10000

Training results:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 8.3121 | 0.0003 | 1 | 28.8947 |
| 8.51 | 0.2502 | 798 | 28.8493 |
| 7.1252 | 0.5003 | 1596 | 28.6938 |
| 11.0054 | 0.7505 | 2394 | 27.8628 |
| 2.7374 | 1.0006 | 3192 | 5.7286 |
| 3.3225 | 1.2508 | 3990 | 3.8328 |
| 2.8093 | 1.5009 | 4788 | 3.0960 |
| 2.5311 | 1.7511 | 5586 | 2.7825 |
| 1.9888 | 2.0013 | 6384 | 2.6022 |
| 2.1802 | 2.2514 | 7182 | 2.4945 |
| 2.3964 | 2.5016 | 7980 | 2.3910 |
| 2.1141 | 2.7517 | 8778 | 2.3618 |
| 2.7874 | 3.0019 | 9576 | 2.3030 |
| 2.2354 | 3.2520 | 10374 | 2.2600 |
| 2.0795 | 3.5022 | 11172 | 2.2918 |
| 2.2697 | 3.7524 | 11970 | 2.2758 |
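Since the adapter is pushed to tommyp111/pythia-160m-deduped-alpaca-lora, it can be loaded on top of the base model with peft. A minimal sketch, assuming the standard transformers/peft loading path and the stock Alpaca prompt template (the example instruction is illustrative):

```python
# Sketch: load the base model, apply the LoRA adapter from the Hub,
# and generate with the standard Alpaca prompt template.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m-deduped")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m-deduped")
model = PeftModel.from_pretrained(base, "tommyp111/pythia-160m-deduped-alpaca-lora")

# Stock Alpaca template for instruction-only examples (no input field).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```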