---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- HuggingFaceH4/deita-10k-v0-sft
tags:
- unsloth
---

# Model Card for Gemma 7B SFT Adapter (deita-10k-v0-sft)

This adapter for Gemma 7B was fine-tuned (SFT) with Unsloth on the HuggingFaceH4/deita-10k-v0-sft instruction dataset.

## Model Details

The model was created using the recipe detailed in this article: [Fine-tune a Better Google Gemma with Unsloth and Distilled DPO](https://kaitchup.substack.com/p/fine-tune-a-better-google-gemma-with)
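Below is a minimal sketch of the general Unsloth SFT setup this kind of adapter comes from. It is illustrative only: the hyperparameters, LoRA configuration, and dataset split/field names are assumptions, not the exact recipe from the article.

```python
# Illustrative Unsloth SFT sketch; values below are assumptions, not the article's exact recipe.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load Gemma 7B in 4-bit with Unsloth's optimized loader
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-7b",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach a LoRA adapter (rank and target modules are illustrative)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Split name and "messages" field assumed from the dataset's chat format
dataset = load_dataset("HuggingFaceH4/deita-10k-v0-sft", split="train_sft")
dataset = dataset.map(
    lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)}
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="./gemma-7b-deita-sft",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
    ),
)
trainer.train()
```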





- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** Causal language model (SFT adapter)
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
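
## How to Use

A minimal loading sketch with `transformers` and `peft`. The `adapter_id` below is a placeholder for this repository's id, and loading the base model in bfloat16 with `device_map="auto"` is an assumption about the target hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-7b"
adapter_id = "<this-adapter-repo-id>"  # placeholder: replace with the actual repo id

# Load the base Gemma 7B model and attach the fine-tuned adapter on top
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Format the prompt with the tokenizer's chat template and generate
messages = [{"role": "user", "content": "Explain supervised fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```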