---
license: cc-by-nc-4.0
language:
- tr
---

# Model Card for gemma-2b-tr-inst


gemma-2b-tr fine-tuned on Turkish instruction-response pairs.

## Model Details

### Model Description


- **Language(s) (NLP):** Turkish, English
- **License:** Creative Commons Attribution Non Commercial 4.0
- **Finetuned from model:** gemma-2b-tr (https://huggingface.co/Metin/gemma-2b-tr)


## Uses

The model is designed for Turkish instruction following and question answering. Its current response quality is limited, likely due to the small instruction set and model size. It is not recommended for real-world applications at this stage.

## Restrictions

Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms.
Please review the Gemma use restrictions before using the model:
https://ai.google.dev/gemma/terms#3.2-use

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Metin/gemma-2b-tr-inst")
model = AutoModelForCausalLM.from_pretrained("Metin/gemma-2b-tr-inst")

# The instruction is wrapped in [INST] ... [/INST] tags, preceded by the system prompt
# (see the prompt structure described below).
system_prompt = "You are a helpful assistant. Always reply in Turkish."
instruction = "Ankara hangi ülkenin başkentidir?"
prompt = f"{system_prompt} [INST] {instruction} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)  # cap generation length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As can be seen in the example above, instructions should be framed using the following structure:

SYSTEM_PROMPT [INST] \<Your instruction here\> [/INST]
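
For convenience, this template can be wrapped in a small helper. The sketch below is illustrative only; the `build_prompt` function is not part of the released code:

```python
def build_prompt(
    instruction: str,
    system_prompt: str = "You are a helpful assistant. Always reply in Turkish.",
) -> str:
    """Wrap an instruction in the SYSTEM_PROMPT [INST] ... [/INST] template."""
    return f"{system_prompt} [INST] {instruction} [/INST]"

# Example usage (hypothetical instruction):
prompt = build_prompt("Türkiye'nin en kalabalık şehri hangisidir?")
```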

## Training Details

### Training Data

- Dataset: Turkish instructions from the Aya dataset (https://huggingface.co/datasets/CohereForAI/aya_dataset)
- Dataset size: ~550K tokens (~5K instruction-response pairs).
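
A rough sketch of how the Turkish subset could be selected with the `datasets` library is shown below. The exact filtering used for training is not documented here, and the `language`, `inputs`, and `targets` column names are assumed to follow the public Aya dataset schema:

```python
from datasets import load_dataset

# Load the Aya dataset and keep only Turkish instruction-response pairs.
# Column names ("language", "inputs", "targets") are assumed from the public
# CohereForAI/aya_dataset schema; verify them before use.
aya = load_dataset("CohereForAI/aya_dataset", split="train")
turkish = aya.filter(lambda example: example["language"] == "Turkish")

print(len(turkish), "instruction-response pairs")
print(turkish[0]["inputs"], "->", turkish[0]["targets"])
```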

### Training Procedure

#### Training Hyperparameters

- **Adapter:** QLoRA
- **Epochs:** 1
- **Context length:** 1024
- **LoRA Rank:** 32
- **LoRA Alpha:** 32
- **LoRA Dropout:** 0.05
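
For reference, the hyperparameters listed above correspond to a PEFT/bitsandbytes setup along the following lines. This is a minimal sketch, not the exact training script; the quantization settings are assumptions:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization of the frozen base model (QLoRA); settings are assumed.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter settings matching the values listed above.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```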