---
license: llama2
---

This model was trained using TRL. It didn't fit properly on my 3090 without significantly dropping the batch size and applying 4-bit quantization.
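
For context, here is a rough sketch of the kind of TRL setup described above: a 4-bit (NF4) quantized base model, LoRA adapters, and a small per-device batch size so training fits in a 3090's 24 GB. The base model, dataset, and hyperparameters below are placeholders rather than the ones actually used, and the exact `SFTTrainer` arguments vary between TRL releases.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# Load the base model in 4-bit NF4 so the weights fit alongside optimizer
# state and activations in 24 GB of VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Placeholder dataset with a plain "text" column.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# QLoRA-style adapters: only a small set of low-rank weights is trained.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,   # dropped to fit on a single 3090
    gradient_accumulation_steps=16,  # claw back some effective batch size
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
    bf16=True,
)

# Note: newer TRL releases move dataset_text_field / max_seq_length into
# SFTConfig; adjust for whichever version is installed.
trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()
```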

It didn't exactly converge.

![training_run.png](https://cdn-uploads.huggingface.co/production/uploads/64075c834dc5f2846c96bc98/b-Tn5IDcRubZp_AyfLNg7.png)