---
license: gemma
language:
- tr
base_model:
- google/gemma-2-9b-it
pipeline_tag: text-generation

model-index:
- name: Gemma-2-9b-it-TR-DPO-V1
  results:
  - task:
      type: multiple-choice
    dataset:
      type: multiple-choice
      name: MMLU_TR_V0.2
    metrics:
    - name: 5-shot
      type: 5-shot
      value: 0.5169
      verified: false
  - task:
      type: multiple-choice
    dataset:
      type: multiple-choice
      name: Truthful_QA_TR_V0.2
    metrics:
    - name: 0-shot
      type: 0-shot
      value: 0.5472
      verified: false
  - task:
      type: multiple-choice
    dataset:
      type: multiple-choice
      name: ARC_TR_V0.2
    metrics:
    - name: 25-shot
      type: 25-shot
      value: 0.5282
      verified: false
  - task:
      type: multiple-choice
    dataset:
      type: multiple-choice
      name: HellaSwag_TR_V0.2
    metrics:
    - name: 10-shot
      type: 10-shot
      value: 0.5116
      verified: false
  - task:
      type: multiple-choice
    dataset:
      type: multiple-choice
      name: GSM8K_TR_V0.2
    metrics:
    - name: 5-shot
      type: 5-shot
      value: 0.6507
      verified: false
  - task:
      type: multiple-choice
    dataset:
      type: multiple-choice
      name: Winogrande_TR_V0.2
    metrics:
    - name: 5-shot
      type: 5-shot
      value: 0.5529
      verified: false
---

<img src="https://huggingface.co/Metin/Gemma-2-9b-it-TR-DPO-V1/resolve/main/gemma2_9b_it_dpo_tr_v1.png"
alt="Logo of Gemma and country code 'TR' in the bottom right corner" width="420"/>

# Gemma-2-9b-it-TR-DPO-V1

Gemma-2-9b-it-TR-DPO-V1 is a DPO-finetuned version of [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it), trained on a synthetically generated Turkish preference dataset.

## Training Info

- **Base Model**: [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
- **Training Data**: A synthetically generated preference dataset consisting of 10K samples was used. No proprietary data was utilized.
- **Training Time**: 2 hours on a single NVIDIA H100

- **QLoRA Configs** (see the sketch after this list):
  - lora_r: 64
  - lora_alpha: 32
  - lora_dropout: 0.05
  - lora_target_linear: true
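
The training script itself is not published; as a rough illustration, the listed QLoRA hyperparameters could be wired into a `peft` + `trl` DPO run as sketched below. The dataset id, `beta`, epoch count, and output path are assumptions for illustration, not the author's actual settings.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import DPOConfig, DPOTrainer

base_id = "google/gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(base_id)

# QLoRA: load the frozen base model in 4-bit NF4.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)

# Adapter settings from the list above.
peft_config = LoraConfig(
    r=64,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",  # lora_target_linear: true
    task_type="CAUSAL_LM",
)

# Hypothetical dataset id; the 10K-sample synthetic preference set is not public.
# DPO expects "prompt" / "chosen" / "rejected" columns.
train_dataset = load_dataset("my-org/tr-preference-pairs", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="gemma2-tr-dpo", beta=0.1, num_train_epochs=1),
    train_dataset=train_dataset,
    processing_class=tokenizer,  # called `tokenizer` in older trl versions
    peft_config=peft_config,
)
trainer.train()
```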

The aim of the finetuning was to improve the model's output format and content quality for Turkish. The model is not necessarily smarter than the base model, but its outputs are more likely to be preferred.

Compared to the base model, Gemma-2-9b-it-TR-DPO-V1 is more fluent and coherent in Turkish. It can generate more informative and detailed answers for a given instruction.

Note that the model may still generate incorrect or nonsensical outputs, so please verify them before use.

## How to use

You can use the code snippet below to run the model:

```python
from transformers import BitsAndBytesConfig
import transformers
import torch

# 4-bit NF4 quantization so the 9B model fits in limited GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "Metin/Gemma-2-9b-it-TR-DPO-V1"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16, "quantization_config": bnb_config},
    device_map="auto",
)

messages = [
    # "How can I check whether an item appears in a list in Python?"
    {"role": "user", "content": "Python'da bir öğenin bir listede geçip geçmediğini nasıl kontrol edebilirim?"},
]

# Render the conversation with Gemma's chat template.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop on the generic EOS token or Gemma's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<end_of_turn>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.2,
    top_p=0.9,
)

# Print only the newly generated completion, without the prompt.
print(outputs[0]["generated_text"][len(prompt):])
```
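
If you have enough GPU memory (roughly 20 GB for a 9B model in bfloat16), you can skip quantization entirely; a minimal variant of the pipeline setup above:

```python
# Assumes ~20 GB of GPU memory: load in bf16 without 4-bit quantization.
pipeline = transformers.pipeline(
    "text-generation",
    model="Metin/Gemma-2-9b-it-TR-DPO-V1",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
```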

## OpenLLMTurkishLeaderboard_v0.2 benchmark results

- **MMLU_TR_V0.2**: 51.69%
- **Truthful_QA_TR_V0.2**: 54.72%
- **ARC_TR_V0.2**: 52.82%
- **HellaSwag_TR_V0.2**: 51.16%
- **GSM8K_TR_V0.2**: 65.07%
- **Winogrande_TR_V0.2**: 55.29%
- **Average**: 55.13%

These scores may differ from the ones you obtain when running the same benchmarks, as I did not use an inference engine (vLLM, TensorRT-LLM, etc.).
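
For faster evaluation or serving, the model could be loaded with an engine such as vLLM. A minimal sketch (not how the scores above were produced):

```python
# vLLM sketch; sampling values mirror the pipeline example above.
from vllm import LLM, SamplingParams

llm = LLM(model="Metin/Gemma-2-9b-it-TR-DPO-V1", dtype="bfloat16")
params = SamplingParams(temperature=0.2, top_p=0.9, max_tokens=512)

# llm.chat() applies the model's chat template to the messages.
outputs = llm.chat(
    [{"role": "user", "content": "Merhaba, nasılsın?"}],  # "Hello, how are you?"
    sampling_params=params,
)
print(outputs[0].outputs[0].text)
```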