Telugu-LLM-Labs committed b2f3abb (parent: 5bca8a1): Update README.md
---
license: other
license_name: gemma
license_link: LICENSE
---

# Telugu-gemma-2b-finetuned-sft

This model is based on [google/gemma-7b](https://huggingface.co/google/gemma-7b) and has been fine-tuned on the following instruction datasets:

1. [yahma_alpaca_cleaned_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized)
2. [teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized)

The model is fine-tuned using the [unsloth](https://github.com/unslothai/unsloth) library, and we provide inference code using the same library for faster inference.

The model is fine-tuned only on the native Telugu SFT data from the above datasets; we will update the model with transliterated (romanized) data in the coming days.

# Input Text Format

```
### Instruction: {instruction}

### Input: {input}

### Response: {response}
```

# Usage

```python
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048
dtype = None  # None for auto detection. Float16 for Tesla T4/V100, Bfloat16 for Ampere+
load_in_4bit = False

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Telugu-LLM-Labs/Telugu-gemma-2b-finetuned-sft",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    device_map = "auto",
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

input_prompt = """
### Instruction:
{}

### Input:
{}

### Response:
{}"""

input_text = input_prompt.format(
    "కింది వచనాన్ని రెండు పాయింట్లలో సంగ్రహించండి.",  # instruction: "Summarize the following text in two points."
    "Google వార్తలు అనేది Google ద్వారా అభివృద్ధి చేయబడిన వార్తా అగ్రిగేటర్ సేవ. ఇది వేలకొద్దీ ప్రచురణకర్తలు మరియు మ్యాగజైన్ల నుండి నిర్వహించబడిన కథనాలకు నిరంతర లింక్లను అందిస్తుంది. Google వార్తలు Android, iOS మరియు వెబ్లో యాప్గా అందుబాటులో ఉన్నాయి. గూగుల్ సెప్టెంబరు 2002లో బీటా వెర్షన్ను మరియు జనవరి 2006లో అధికారిక యాప్ను విడుదల చేసింది.",  # input: a Telugu passage about Google News
    "",  # output - leave this blank for generation!
)

inputs = tokenizer([input_text], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True)
response = tokenizer.batch_decode(outputs)
```
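
The decoded output echoes the full prompt, so the generated answer usually needs to be separated out. The snippet below is a minimal sketch of that step (not part of the original card); it simply splits on the response marker used in the prompt template.

```python
# Isolate the text generated after the "### Response:" marker and drop
# any trailing special tokens such as the end-of-sequence token.
generated = response[0].split("### Response:")[-1].strip()
generated = generated.replace(tokenizer.eos_token, "").strip()
print(generated)
```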

# Developers:

The model is a collaborative effort by [Ravi Theja](https://twitter.com/ravithejads) and [Ramsri Goutham](https://twitter.com/ramsri_goutham). Feel free to DM either of us if you have any questions.

# Note:

The model has demonstrated robust capabilities in our testing. If it does not meet your expectations, it may benefit from further fine-tuning with suitable SFT datasets. Please do not hesitate to contact us for assistance; we are eager to support you.
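
For readers who want to attempt that additional fine-tuning, the following is a minimal sketch using unsloth's LoRA adapters together with TRL's `SFTTrainer`. It is not the authors' training recipe: the dataset name `your_telugu_sft_dataset`, the LoRA settings, and the training hyperparameters are all illustrative assumptions, and the dataset is assumed to expose a `text` column already rendered in the instruction format shown above.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Start from the released checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Telugu-LLM-Labs/Telugu-gemma-2b-finetuned-sft",
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small number of weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
)

# Hypothetical dataset with a "text" column in the ### Instruction /
# ### Input / ### Response format used by this model.
dataset = load_dataset("your_telugu_sft_dataset", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = TrainingArguments(
        output_dir = "telugu-gemma-sft-continued",
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        num_train_epochs = 1,
        learning_rate = 2e-4,
        logging_steps = 10,
    ),
)
trainer.train()
```

After training, `model.save_pretrained(...)` (or unsloth's merge-and-save utilities) can be used to export the adapted weights; the exact export path depends on how you intend to serve the model.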