# llama7bit-lora-sql
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the generator dataset. It achieves the following results on the evaluation set:
- Loss: 0.3666
## Model description
More information needed
## Intended uses & limitations
More information needed
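The card does not document usage, but since this is a PEFT/LoRA adapter, a minimal inference sketch along the following lines should work. The prompt string is hypothetical (the prompt format used during fine-tuning is not documented here), and loading the base model requires access to the gated meta-llama/Llama-2-7b-hf checkpoint.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the LoRA adapter together with its Llama-2-7b base model.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Liu-Xiang/llama7bit-lora-sql",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Hypothetical prompt; the card does not specify the text-to-SQL
# prompt template used during fine-tuning.
prompt = "Generate a SQL query that selects all columns from the users table."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```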
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1399
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
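As a rough guide to reproducing this configuration, the hyperparameters above map onto `transformers.TrainingArguments` as sketched below. The `output_dir` is a placeholder, and the dataset preparation and LoRA/quantization setup are not documented in this card, so treat this as a sketch rather than the exact training script.

```python
from transformers import TrainingArguments

# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="llama7bit-lora-sql",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 * 4 = total train batch size of 32
    seed=1399,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=500,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults.
)
```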
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2114 | 0.06 | 20 | 0.8331 |
| 0.6838 | 0.12 | 40 | 0.5232 |
| 0.5107 | 0.17 | 60 | 0.4600 |
| 0.4618 | 0.23 | 80 | 0.4289 |
| 0.4412 | 0.29 | 100 | 0.4153 |
| 0.425 | 0.35 | 120 | 0.4067 |
| 0.4182 | 0.41 | 140 | 0.3956 |
| 0.4137 | 0.47 | 160 | 0.3912 |
| 0.4047 | 0.52 | 180 | 0.3865 |
| 0.4034 | 0.58 | 200 | 0.3834 |
| 0.3968 | 0.64 | 220 | 0.3833 |
| 0.3954 | 0.7 | 240 | 0.3782 |
| 0.3921 | 0.76 | 260 | 0.3756 |
| 0.3877 | 0.82 | 280 | 0.3730 |
| 0.3849 | 0.87 | 300 | 0.3722 |
| 0.3831 | 0.93 | 320 | 0.3714 |
| 0.3787 | 0.99 | 340 | 0.3702 |
| 0.3677 | 1.05 | 360 | 0.3692 |
| 0.3632 | 1.11 | 380 | 0.3686 |
| 0.3611 | 1.17 | 400 | 0.3677 |
| 0.3588 | 1.22 | 420 | 0.3669 |
| 0.3579 | 1.28 | 440 | 0.3666 |
| 0.3551 | 1.34 | 460 | 0.3666 |
| 0.3586 | 1.4 | 480 | 0.3665 |
| 0.3581 | 1.46 | 500 | 0.3666 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2