# 6b-gpteacher-role-play-chatml-10epoch
This model is a fine-tuned version of [PygmalionAI/pygmalion-6b](https://huggingface.co/PygmalionAI/pygmalion-6b) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.9141
- Accuracy: 0.1596
- Entropy: 1.7788
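
As a quick usage sketch, the checkpoint can be loaded like any other causal LM in Transformers. The repo id below is a placeholder (the exact Hub path is not stated in this card), and the ChatML prompt format is inferred from the model name rather than documented:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: substitute the actual Hub repo or a local checkpoint dir.
model_id = "your-namespace/6b-gpteacher-role-play-chatml-10epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "chatml" in the model name suggests ChatML-style prompts; this format
# is an assumption, not something the card confirms.
prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```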
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 99
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10.0
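
These settings map closely onto the Hugging Face `Trainer` API. Below is a minimal, hypothetical reconstruction of an equivalent `TrainingArguments` configuration; the actual training script is not published, so the output path and any defaults beyond the values listed above are assumptions:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters above onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="6b-gpteacher-role-play-chatml-10epoch",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=99,
    gradient_accumulation_steps=2,  # 4 x 2 = the reported total batch size of 8
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=10.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```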
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Entropy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|
| 2.0504        | 1.0   | 238  | 2.0176          | 0.1563   | 1.9978  |
| 1.8932        | 2.0   | 476  | 1.9707          | 0.1584   | 1.9182  |
| 1.8611        | 3.0   | 714  | 1.9473          | 0.1602   | 1.8831  |
| 1.8206        | 4.0   | 952  | 1.9307          | 0.1604   | 1.8725  |
| 1.7936        | 5.0   | 1190 | 1.9238          | 0.1613   | 1.8354  |
| 1.7823        | 6.0   | 1428 | 1.9189          | 0.1618   | 1.8175  |
| 1.7742        | 7.0   | 1666 | 1.9150          | 0.1615   | 1.8082  |
| 1.762         | 8.0   | 1904 | 1.9141          | 0.1605   | 1.8145  |
| 1.7437        | 9.0   | 2142 | 1.9160          | 0.1604   | 1.7750  |
| 1.7358        | 10.0  | 2380 | 1.9141          | 0.1596   | 1.7788  |
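
Loss, Accuracy, and Entropy here are presumably token-level metrics computed from the evaluation logits. The following sketch illustrates how such values can be derived from a batch of logits; it is an assumption about this card's evaluation code, which is not published:

```python
import torch
import torch.nn.functional as F

def eval_metrics(logits: torch.Tensor, labels: torch.Tensor):
    """Token-level loss, accuracy, and mean predictive entropy.

    Assumes logits of shape (batch, seq_len, vocab) and labels of shape
    (batch, seq_len). How the original evaluation handled label shifting
    and padding is undocumented, so none is applied here.
    """
    vocab = logits.size(-1)
    # Cross-entropy over all tokens (the "Validation Loss" column).
    loss = F.cross_entropy(logits.view(-1, vocab), labels.view(-1))
    # Fraction of tokens where the argmax prediction matches the label.
    accuracy = (logits.argmax(-1) == labels).float().mean()
    # Mean Shannon entropy of the per-token predictive distribution.
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(-1).mean()
    return loss.item(), accuracy.item(), entropy.item()
```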
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.3