---
license: cc-by-nc-4.0
language:
- tr
---

# Model Card for gemma-2b-tr

gemma-2b fine-tuned for Turkish text generation.

## Model Details

### Model Description

- **Language(s) (NLP):** Turkish, English
- **License:** Creative Commons Attribution Non-Commercial 4.0 (chosen because the training data includes restricted/gated datasets)
- **Fine-tuned from model:** [google/gemma-2b](https://huggingface.co/google/gemma-2b)


## Uses

The model is specifically designed for Turkish text generation. It is not suitable for instruction-following or question-answering tasks.

## Restrictions

Gemma is provided under and subject to the Gemma Terms of Use found at [ai.google.dev/gemma/terms](https://ai.google.dev/gemma/terms).
Please review the Gemma use restrictions before using the model: https://ai.google.dev/gemma/terms#3.2-use

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Metin/gemma-2b-tr")
model = AutoModelForCausalLM.from_pretrained("Metin/gemma-2b-tr")

prompt = "Bugün sinemaya gidemedim çünkü"  # "I couldn't go to the cinema today because"
inputs = tokenizer(prompt, return_tensors="pt")  # input_ids and attention_mask tensors

outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
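
By default `generate` returns a fairly short greedy continuation. For longer or more varied output you can pass standard `generate` arguments; the values below are purely illustrative, not settings recommended for this model.

```python
# Illustrative sampling settings; adjust max_new_tokens and temperature to taste.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,   # length of the generated continuation
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```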

## Training Details

### Training Data

- Dataset size: ~190 million tokens (~100K documents)
- Dataset content: web crawl data (a preprocessing sketch is given below)
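
The preprocessing pipeline is not documented in this card. As a rough sketch only, web-crawl documents are commonly tokenized, concatenated, and cut into fixed-length blocks matching the 1024-token training context; the helper below illustrates that idea and is an assumption, not the actual pipeline.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Metin/gemma-2b-tr")
BLOCK_SIZE = 1024  # matches the fine-tuning context length

def pack_documents(documents):
    """Tokenize documents, concatenate them, and cut into 1024-token blocks."""
    ids = []
    for text in documents:
        ids.extend(tokenizer(text)["input_ids"])
        ids.append(tokenizer.eos_token_id)  # mark document boundaries
    n_blocks = len(ids) // BLOCK_SIZE      # drop the trailing partial block
    return [ids[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE] for i in range(n_blocks)]

blocks = pack_documents(["Örnek bir web sayfası metni.", "Bir başka belge."])
```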

### Training Procedure


#### Training Hyperparameters

- **Adapter:** QLoRA (see the configuration sketch below)
- **Epochs:** 1
- **Context length:** 1024
- **LoRA Rank:** 32
- **LoRA Alpha:** 32
- **LoRA Dropout:** 0.05
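
A minimal sketch of how these hyperparameters could map onto a QLoRA setup with the `peft` and `bitsandbytes` libraries. The 4-bit quantization settings and target modules below are assumptions for illustration, not values taken from the actual training run.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization of the frozen base model (the "Q" in QLoRA); settings are assumed.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", quantization_config=bnb_config
)

# LoRA adapter matching the rank, alpha, and dropout listed above.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed target modules
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```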