---
license: cc-by-nc-4.0
language:
- tr
---

# Model Card for gemma-2b-tr-inst

gemma-2b-tr fine-tuned on Turkish instruction-response pairs.

## Model Details

### Model Description

- **Language(s) (NLP):** Turkish, English
- **License:** Creative Commons Attribution Non-Commercial 4.0 (chosen due to the use of restricted/gated datasets)
- **Fine-tuned from model:** gemma-2b-tr (https://huggingface.co/Metin/gemma-2b-tr)

## Uses

The model is designed for Turkish instruction following and question answering. Its current response quality is limited, likely due to the small instruction set and model size. It is not recommended for real-world applications at this stage.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Metin/gemma-2b-tr-inst")
model = AutoModelForCausalLM.from_pretrained("Metin/gemma-2b-tr-inst")

system_prompt = "You are a helpful assistant. Always reply in Turkish."
instruction = "Ankara hangi ülkenin başkentidir?"  # "Which country is Ankara the capital of?"
prompt = f"{system_prompt} [INST] {instruction} [/INST]"

# Tokenize the prompt; the returned dict also carries the attention mask.
inputs = tokenizer(prompt, return_tensors="pt")

# Cap the generation length; generate()'s default max length would truncate the answer.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As the example above shows, instructions should be framed using the following structure:

`SYSTEM_PROMPT [INST] <Your instruction here> [/INST]`
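
If convenient, the template can be wrapped in a small helper (a hypothetical `build_prompt` function, not part of the model repository):

```python
def build_prompt(system_prompt: str, instruction: str) -> str:
    # Hypothetical helper: applies the expected prompt template verbatim.
    return f"{system_prompt} [INST] {instruction} [/INST]"

prompt = build_prompt(
    "You are a helpful assistant. Always reply in Turkish.",
    "Ankara hangi ülkenin başkentidir?",  # "Which country is Ankara the capital of?"
)
```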

## Training Details

### Training Data

- Dataset: Turkish instructions from the Aya dataset (https://huggingface.co/datasets/CohereForAI/aya_dataset); see the loading sketch below
- Dataset size: ~550K tokens, or ~5K instruction-response pairs
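
A minimal sketch of how this subset might be selected with the `datasets` library. The column name and language label are assumptions to verify against the Aya dataset card; the exact filtering used for training is not documented here:

```python
from datasets import load_dataset

# Assumption: Aya rows expose a "language" column with values like "Turkish";
# check the dataset card before relying on this filter.
aya = load_dataset("CohereForAI/aya_dataset", split="train")
aya_tr = aya.filter(lambda row: row["language"] == "Turkish")
print(f"{len(aya_tr)} Turkish instruction-response pairs")
```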

### Training Procedure

#### Training Hyperparameters

- **Adapter:** QLoRA
- **Epochs:** 1
- **Context length:** 1024
- **LoRA Rank:** 32
- **LoRA Alpha:** 32
- **LoRA Dropout:** 0.05
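
As a rough illustration, these values might map onto a `peft`/`bitsandbytes` configuration along the following lines. This is a sketch assuming standard QLoRA tooling; the quantization settings and anything not in the list above are assumptions, not details from the actual training script:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# Assumption: a standard QLoRA setup with 4-bit NF4 base weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Rank, alpha, and dropout are taken from the list above; task_type is the
# standard choice for causal language models.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```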