---
base_model: unsloth/gemma-2-9b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
datasets:
- yahma/alpaca-cleaned
---
# Uploaded model

- **Developed by:** NotAiLOL
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit

This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
# Details

This model was fine-tuned from unsloth/gemma-2-9b-bnb-4bit on the alpaca-cleaned dataset using the **QLoRA** method. It reached a training loss of 0.9238 on alpaca-cleaned after step 120.

The model follows the Alpaca prompt format:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}
```
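A minimal sketch of querying the model with this template, assuming the standard transformers API; the repo id below is a placeholder for this model's Hugging Face path, and the generation settings are illustrative rather than values from this card:

```python
# Minimal inference sketch. The repo id is a placeholder; substitute the
# real path of this model. Sampling settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "NotAiLOL/gemma-2-9b-alpaca"  # placeholder repo id

alpaca_prompt = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"
)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Leave the response slot empty so the model completes it.
prompt = alpaca_prompt.format(
    "Continue the Fibonacci sequence.",
    "1, 1, 2, 3, 5, 8",
    "",
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```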
## Training

This model was trained on a single Tesla T4 GPU.
- Training time: 1254.1115 seconds (≈20.9 minutes).
- Peak reserved memory: 9.383 GB (63.622 % of max memory).
- Peak reserved memory for training: 2.807 GB (19.033 % of max memory).
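For reference, a minimal sketch of the QLoRA setup described above, assuming Unsloth's FastLanguageModel API and TRL's SFTTrainer; all hyperparameters except the 120-step count are illustrative assumptions, not values from this card:

```python
# QLoRA fine-tuning sketch using Unsloth + TRL. Hyperparameters (rank,
# alpha, batch size, learning rate) are assumptions; only max_steps=120
# matches the step count reported in this card.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-9b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA: frozen 4-bit base weights + trainable LoRA adapters
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,           # assumed LoRA rank
    lora_alpha=16,  # assumed scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

alpaca_prompt = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"
)

def format_example(example):
    # Fill the template; append EOS so the model learns where to stop.
    example["text"] = alpaca_prompt.format(
        example["instruction"], example["input"], example["output"]
    ) + tokenizer.eos_token
    return example

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(format_example)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=120,       # the card reports the loss at step 120
        learning_rate=2e-4,
        fp16=True,           # T4 has no bf16 support
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```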
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)